
Ethicist Calls for Oversight in Healthcare AI Development and Use
TL;DR:
Dr. Takunda Matose, a research ethicist, has advised clinicians and developers of healthcare artificial intelligence (AI) systems to reconsider the assumption that larger data sets automatically yield better results. Although the healthcare industry generates an estimated 14%-30% of global data, roughly 90% of that data goes unused, and only a small fraction is meaningfully analyzed. Dr. Matose also highlighted the biases inherent in AI systems and the need to balance AI's strengths against human judgment, all while maintaining an "ethical lens" that weighs collective benefits, institutional policies, privacy, and security.
Understanding AI Systems in Healthcare: A Closer Look at Data Volume, Ethics, and Accuracy
In a recent lecture at the Association for Molecular Pathology’s annual meeting, Dr. Takunda Matose, a research ethicist and assistant professor of pediatrics at the University of Cincinnati Medical Center, urged pathologists to question the belief that larger data sets yield better performance in artificial intelligence (AI) systems. Matose emphasized the need for careful oversight in the development and use of AI tools aimed at improving patient care.
His lecture, “Ethical Frontiers: The Promise and Perils of Healthcare AI in a Socially Connected World,” highlighted the need for clinicians and developers to reassess the relationship between data volume, algorithmic accuracy, and responsible use.

Matose called into question the common assumption that more genomic, imaging, and clinical data automatically improve AI models. He pointed out that while the healthcare sector generates an estimated 14%-30% of all global data, only a fraction of it is analyzed or used effectively.
Challenges in AI for Healthcare
Notably, about 90% of healthcare data goes unused, and much of what is captured was not collected with model development in mind. Matose suggested viewing AI as a tool for performing complex conditional probability calculations whose outputs shift with contextual variables, new inputs, and the choices made in data annotation and pipeline design.
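The point about context-dependent conditional probabilities can be made concrete with a small sketch. This example is not from the lecture; the numbers (sensitivity, specificity, prevalence) are hypothetical, chosen only to show how the same model output implies very different post-test probabilities in different clinical contexts.

```python
# Illustrative sketch: a diagnostic AI's output can be read as a conditional
# probability, P(disease | positive result), which shifts with context.
# The test characteristics below are hypothetical.

def post_test_probability(prevalence, sensitivity, specificity):
    """P(disease | positive result) via Bayes' theorem."""
    true_pos = sensitivity * prevalence
    false_pos = (1 - specificity) * (1 - prevalence)
    return true_pos / (true_pos + false_pos)

# Identical model characteristics, two clinical contexts:
screening = post_test_probability(prevalence=0.01, sensitivity=0.95, specificity=0.90)
referral = post_test_probability(prevalence=0.30, sensitivity=0.95, specificity=0.90)

print(f"Screening population (1% prevalence):  {screening:.2f}")  # ~0.09
print(f"Referral population (30% prevalence):  {referral:.2f}")   # ~0.80
```

The same positive result carries roughly a 9% probability of disease in a low-prevalence screening setting but about 80% in a high-prevalence referral setting, which is one way contextual variables reshape what a model's output means.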
He emphasized that biases are inherent in AI systems due to the decision-making processes involved in their design and data selection. Furthermore, even if an AI system could be designed without bias, use of the system could introduce new biases, potentially leading to errors in reasoning.
Matose also highlighted the tendency to misinterpret accuracy metrics provided by vendors or researchers. While AI's probabilistic capabilities can greatly enhance clinical workflows, its outputs should not be treated as infallible or as substitutes for human judgment.
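A brief sketch shows how a headline accuracy figure can mislead. The data here are hypothetical, not from any vendor: a degenerate model that labels every case negative still scores 95% "accuracy" when only 5% of cases are positive, despite detecting none of them.

```python
# Hypothetical illustration: high "accuracy" can mask zero clinical usefulness
# when one outcome dominates the data (class imbalance).

cases = [1] * 5 + [0] * 95   # 5 positive cases, 95 negative
predictions = [0] * 100      # degenerate model: always predicts "negative"

correct = sum(p == c for p, c in zip(predictions, cases))
accuracy = correct / len(cases)
detected = sum(1 for p, c in zip(predictions, cases) if p == 1 and c == 1)
sensitivity = detected / 5

print(f"accuracy:    {accuracy:.2f}")     # 0.95
print(f"sensitivity: {sensitivity:.2f}")  # 0.00
```

This is why metrics such as sensitivity and positive predictive value, reported against the intended patient population, matter more than a single accuracy number.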
A Shift to the Ethical Perspective
Matose encouraged a shift towards an “ethical lens” approach to AI, which considers factors such as collective benefits and burdens, institutional policies, and obligations to patients and other stakeholders. He also stressed the importance of managing health care data responsibly, treating them as “social facts” rather than objective truths, and considering implications for privacy and security.
Additionally, he discussed the complexities of operationalizing principles like FAIR (Findable, Accessible, Interoperable, Reusable) in a rapidly evolving technological landscape and underscored the importance of understanding AI as a tool for probabilistic reasoning – inherently limited but potentially very useful in certain contexts.
Matose concluded by advocating for thoughtful decision-making over simply placing humans “in the loop” of AI processes. Applying an ethical lens, he suggested, can help determine when smaller data sets are sufficient and foster a balanced approach to using AI in healthcare.