Notes from the AI in Medicine Workshop

April 26, 2024 · 8 min read

Over the course of three days in April, I attended an AI in Medicine Workshop aimed at medical students seeking early exposure to AI. The organizers assembled an expert lineup of talks and mini-sessions, and although attendance was limited to active medical students, I was able to join the sessions virtually. The workshop was hosted by Jen Ren, Katie Link, and Joy Jiang, medical students at the Icahn School of Medicine at Mount Sinai.

Friday

Intro to NLP and LLMs (Anish Kumar, Medical Student at Mount Sinai and Clinical Informatics Fellow at Rubicon MD): Explored the history and applications of Natural Language Processing (NLP), differentiating it from chatbots.

  • "NLP is a field that's natural language processing, if you didn't know. It's become extremely visible as in the past sort of two, three years among the wider public because of chat GPT and the sort of burgeoning use of LLMs in a lot of different fields."
  • "Aspects of NLP are intimately tied to some of the foundations of computation and information processing. A lot of the early computer scientists were pondering some of these questions about how we really look at language and information and represent it in a way that a machine can understand."
  • "Will we replace human therapists? This idea has become highly visible in the past couple years, but it's actually something that people have thought about for decades."

Intro to ML (Corbin Matthews, Lead Machine Learning Engineer at Booz Allen Hamilton & Joy Jiang, MD/PhD Student at Mount Sinai): Defined AI, Machine Learning, and Deep Learning, differentiating their scopes. Explained how deep learning algorithms learn through forward and back propagation, loss functions, and optimizers; a toy sketch follows the quote below.

  • "Whether in the ICU or whether following a patient longitudinally, reinforcement is super powerful because it can allow for personalized and adaptive health care interventions in complex and dynamic settings."

Intro to end-to-end development process (Girish Nadkarni, System Chief of the Division of Data Driven and Digital Medicine (D3M) at Mount Sinai): Provided a high-level overview of the AI development process, emphasizing the transition from hope and hype to cynicism and fear. Addressed challenges in AI adoption, including model quality, evidence, and transparency.

  • "So AI in medicine right now is I think this tipping block between hope and hype, because a lot of people I think that will change medicine."
  • "Do we need a human in the loop for all scenarios? I'm going to make a controversial argument. No, that we don't need a human in the loop for all scenarios."

Low-code / no-code exercises (Vivian Utti, Medical Student at Mount Sinai): Introduced no-code/low-code AI platforms as a way to democratize access to machine learning. Demonstrated using Google's Vertex AI platform for sentiment analysis on a "happy moments" dataset and a bank deposit prediction task; a rough programmatic equivalent is sketched after the quote.

  • "The point I stress was, I think machine learning has become, you know, blown up particularly in the last decade or two. And it's really, really hard to learn if you maybe aren't someone who comes from a type of background, just the terminology, again, the way of thinking. And I think that no cold machine learning is really important because it, you know, essentially makes ML more accessible..."

Saturday

Past and present medical AI overview (Katie Link, Medical Student at Mount Sinai): Revisited fundamental AI concepts and delved into the current state of AI in medicine. Focused on generative AI, especially large language models (LLMs), discussing their architecture (transformers; a minimal attention sketch follows the quotes), training process (supervised learning, self-supervised learning, reinforcement learning), applications, and potential pitfalls. Emphasized the importance of responsible AI principles, including safety, transparency, and privacy.

  • "Healthcare access is not equitable. And some AI techniques could be suited to medical data and tasks."
  • "So why are we at this moment where all LLM's are big deal? So basically, if you had to describe it into words, I would say it would be like scaling these transformers."

Translating healthcare problems to ML experiments (Akhil Vaid, Assistant Professor, Icahn School of Medicine at Mount Sinai): Explained the process of translating clinical questions into machine learning experiments, focusing on the relationship between inputs (data), model, and outputs (desired prediction). Covered supervised learning (classification, regression), unsupervised learning, and reinforcement learning, providing examples of how they are applied in medical research; one such framing is sketched after the quotes.

  • "Consequential decisions we make in healthcare increasingly rely on data we observe. But the data we observe can be difficult to trust."
  • "Essentially, what neural networks are doing, and this comes back to where the supervised learning thing was talked about, is that there are representations inside the model that we're jiggling around a little bit, until they fit the exact idea or shape that you want."

Case study discussion: Small-group discussions on provided case studies, analyzing how AI can be applied to different clinical problems.

Bias in machine learning (Lili Chan, Associate Professor of Medicine, Division of Nephrology, Icahn School of Medicine at Mount Sinai): Addressed the issue of bias in machine learning, focusing on stigmatizing language in electronic health records. Discussed NLP techniques for identifying such language and its potential impact on patient care, particularly in kidney transplantation. Presented preliminary work on using sentiment analysis to identify potentially biased language and discussed interventions to mitigate provider bias; a toy flagging sketch follows the quotes.

  • "Stigmatizing language is the use of negative labels that's stereotypes and judgments to certain group of patients or of people."
  • "So with that context, this is an AI talk. So why is this coming up in AI? One thing is that in order to really modify or change our behavior, we have to actually see how are we ourselves doing within our medical system."

Measuring model performance (Nishith Khandwala, Clinical Data Scientist at Flatiron Health): Explained how to evaluate the performance of machine learning models, focusing on key metrics like AUC, precision, recall, and calibration. Emphasized the importance of understanding what each metric represents and how to interpret them in a clinical context; the metrics are computed on toy data below.

  • "Calibration. Like I said, you get the probabilities from the model. But these probabilities are actually not representative of anything that's going to happen in the real world."

Model development to address bias in AI (Faris Gulamali, MD/PhD Candidate at Mount Sinai): Explored methods to address bias in AI models, particularly focusing on developing models that are robust to missing or incomplete data. Discussed techniques like data augmentation and transfer learning, highlighting their potential to improve model fairness and generalizability; a subgroup-evaluation sketch follows the quote.

  • "So for example, we have a very simple model where we take in genotype data and we try to predict a particular phenotype and what we found is that if you limit the model's access to data from some fraction of the population, the model can still do quite well on the full population."

Case study discussion: Small-group discussions on applying bias mitigation techniques to the provided case studies.

Medical data for ML (Ali Soroush, Assistant Professor of Medicine, Icahn School of Medicine at Mount Sinai): Deep dive into the challenges and considerations of working with medical data for machine learning, including data quality, quantity, modalities, and formats. Covered data wrangling techniques like data cleaning, standardization, dimensionality reduction, and merging data sets; the steps are illustrated on toy tables below.

  • "The main thing that is facilitating AI in healthcare, I think, is this explosion of data that's come out. A lot of the routine clinical care that's happening is capturing data, multiple types of data, and every year that data expands, the types of data expand, the quality of the data improves or decreases depending on what your perspective is."
  • "Learn how to use GitHub and Git because it will make your life easier."

Case study discussion: Final small-group discussions applying data wrangling techniques to the provided case studies.

Sunday

Responsible AI for healthcare (Divya Shanmugam, PhD Candidate at Cornell Tech): Focused on the challenges of hidden influences in healthcare data and their consequences for machine learning models. Discussed examples of flawed data, like missingness, measurement error, and sampling bias, and their impact on model behavior. Presented Privacy-Preserving Record Linkage (PPRL), a method for estimating health disparities corrected for underreporting, highlighting the importance of identifying and addressing data flaws; the core linkage idea is sketched after the quotes.

  • "The applications of machine learning, particularly in healthcare, must contend with differences between the data we collect and the data we wish to observe."
  • "We might hope to collect data that represents all demographics, but we might see a sample of patients shaped by different levels of health access, which leads to data sets that are limited in terms of representation and limited in terms of the number of absolute examples we see in populations you might care about."

Medical AI research and policy (Felix Richter, Resident Physician at Mount Sinai): Presented a case study on using computer vision for early detection of neurological changes in neonates, outlining the process of building an AI project from scratch. Emphasized the importance of identifying a clinically relevant problem, selecting appropriate data and methods, and considering ethical implications.

  • "The most important thing to start with is you have to think about what questions are important. Where are the gaps? What are the I don't know how many people here are working clinically. Where are the automation needs? What are the frustrations? Where is the SCUT work? Where is the like where where do you think AI has a role?"

Evaluating AI in clinical environments (Pierre Elias, Medical Student at Mount Sinai): Discussed the challenges and considerations of evaluating AI models in real-world clinical settings. Covered topics like model validation, monitoring performance over time, and addressing issues of bias and generalizability; a drift-monitoring sketch follows the quote.

  • "The models were evaluated on their ability to classify skin lesions as benign or malignant. They were able to classify, the AI algorithms were able to classify skin lesions as benign or malignant with an area under the curve of 0.95 or greater. This is actually similar to the area under the curve of dermatologists."

Roles for clinicians in AI (Katie Link & Vivian Utti, Medical Students at Mount Sinai): Summarized the diverse roles clinicians can play in the development and implementation of AI in medicine, from informed users to collaborators, educators, advocates, and innovators.

  • "Obviously, you can just be an informed clinician, a human-informed user of these AI tools and have a super, super important role in the effective and safe implementation of AI medicine."

The AI in Medicine Workshop offered a rich array of sessions, including introductions to natural language processing, machine learning, and AI development processes. Experts provided practical guidance on translating clinical questions into machine learning experiments, tackling bias in AI, and managing medical data for AI applications. Discussions also covered responsible AI principles, real-world model evaluation, and the diverse roles clinicians can play in AI implementation. The workshop underscored the transformative potential of AI in healthcare, emphasizing the importance of ethical considerations, transparency, and addressing biases to ensure equitable and effective AI solutions.
