
Technology is only as ethical as the hands, and hearts, that guide it.
Artificial intelligence (AI) is rapidly transforming society, and its impact is felt especially in healthcare — enhancing diagnosis, streamlining workflows, and changing how professionals teach and learn. These changes bring new challenges: How do we use AI responsibly? What does professionalism look like in an AI-enabled world? And how do we ensure that technology serves people — not the other way around?
Backed by rigorous research, the Health CARE-AI Framework (Contextual, Accountable and Responsible Ethics for Artificial Intelligence in Healthcare) was developed to answer those questions. It provides ethical guidance for using AI in ways that are safe, equitable, transparent, and professional across all stages of health professions education and practice.
Health CARE-AI Framework
Our Research Process
What Makes Health CARE-AI Different?
The result is a research-backed framework that bridges the gap between policy and practice. It offers a common language to help professionals, educators, and institutions work together to shape an AI-integrated healthcare system that is ethical, inclusive, and trustworthy.
Health CARE-AI Principles
The Health CARE-AI framework contains nine professionalism principles focused on the challenges posed by AI technologies. These principles apply to individuals at every career stage, from training to practice, emphasizing critical thinking, ethical behaviour, and continual learning as AI technologies evolve.
Build and maintain AI literacy.
Responsible AI use should complement, not replace, human judgment.
Think critically before speaking, acting, or uploading information when an AI system is present.
Use AI with honesty and integrity.
Work within the law.
Use and share information ethically in AI-supported environments.
Use AI in ways that actively reduce bias and promote equity.
Build equity into AI foundations.
Funding
Funding for this work is provided in part by: