In this episode, we talk a bit about the recent advances in large language models, such as GPT and ChatGPT. We have two wonderful guests:
Christoph U. Lehmann, M.D., is a Professor of Pediatrics, Population and Data Sciences, and Bioinformatics at UT Southwestern, where he directs the Clinical Informatics Center. In addition, Chris was the first chair of the Examination Committee of the American Board of Preventive Medicine, Subcommittee for Clinical Informatics. Dr. Lehmann’s research focuses on improving clinical information technology and clinical decision support.
Yaa Kumah-Crystal, MD, MPH, MS, is an Assistant Professor of Biomedical Informatics and Pediatric Endocrinology at Vanderbilt University Medical Center. Yaa’s research focuses on studying communication and documentation in healthcare and developing strategies to improve workflow and patient care delivery. Yaa works in the Innovations Portfolio at Vanderbilt HealthIT on the development of Voice Assistant Technologies to improve the usability of the EHR through natural language communication.
Chris and Yaa bring complementary perspectives to the topic of our future. Yaa's research focuses on how we can innovate to improve the use of technology in medicine. Chris is internationally known as the Editor-in-Chief of Applied Clinical Informatics,
as well as one of the leaders in our clinical informatics board certification work. He is intimately familiar with the potential uses of this technology beyond clinical care, but, as an actively practicing neonatologist, more than holds his own when it comes to how medicine can benefit from--or be harmed by--new technologies such as AI.
We leave it to you to decide both which direction we're heading, and how we can put up the guardrails to keep us on the preferred track. And, I suspect this won't be our last discussion about AI in Medicine!
By the way, in case you want to learn more about topics we brought up in this episode:
Belmont principles:
- Beneficence: AI is designed explicitly to be helpful to the people who use it or on whom it is used, and to reflect the ideals of compassionate, kind, and considerate human behavior.
- Autonomy: In the AI context, operating without human oversight; in the ethics context, “protecting the autonomy of all people and treating them with courtesy and respect and facilitating informed consent”.
- Nonmaleficence: “Do No Harm”. Every reasonable effort shall be made to avoid, prevent, and minimize harm or damage to any stakeholder.
- Justice: Equity in representation in and access to AI, data, and the benefits of AI. Fair access to redress and remedy shall be available in the event of harm resulting from the use of AI. Affirmative use of AI to support social justice.
Artists and AI:
TikTok voiceover person: https://www.theverge.com/2021/9/29/22701167/bev-standing-tiktok-lawsuit-settles-text-to-speech-voice
GPT and test performance: https://www.cnn.com/2023/01/26/tech/chatgpt-passes-exams/index.html, https://www.medrxiv.org/content/10.1101/2023.03.05.23286533v1.full
Deepfake concerns:
MidJourney and bias:
Amazon AI Tool Bias: https://www.reuters.com/article/us-amazon-com-jobs-automation-insight/amazon-scraps-secret-ai-recruiting-tool-that-showed-bias-against-women-idUSKCN1MK08G
Apple Card credit limits biased against women: https://www.wired.com/story/the-apple-card-didnt-see-genderand-thats-the-problem/
AMIA document about ethical principles around AI: https://amia.org/news-publications/amia-position-paper-details-policy-framework-aiml-driven-decision-support
AI in Medicine JAMA Viewpoint: https://pubmed.ncbi.nlm.nih.gov/36972068/
Sophia: https://en.wikipedia.org/wiki/Sophia_(robot)