Software as a medical device: Time to develop an ethical framework?


By Dr. Tim Sandle
February 24, 2025
DIGITAL JOURNAL


Generative artificial intelligence-powered features such as chatting about what is in pictures, telling children bedtime stories, and imitating podcasters continue to roll out despite fears the technology will be used for more nefarious purposes — Copyright AFP Yasuyoshi CHIBA

Medical AI is expanding, especially the concept of “software as a medical device”, yet regulatory approval remains slow and public acceptance is not growing significantly. What needs to be done to address opaque algorithms in medical AI? The answer may lie in the development of a universal framework based on an ethical structure. Through such a structure, developers, healthcare professionals, and legislators can become better ‘sensitized’ to the needs of the general population.

A new article, from US-based medical researchers, has probed the use of artificial intelligence-based software in relation to medical devices. Such devices present the possibility of alleviating suffering through rapid identification and early intervention.

Yet the adoption of such devices in clinical practice has remained relatively slow. The limitation is not so much the technology itself as the ethical questions surrounding it.

While ethical questions will have some cultural differences, and there is an absence of any universal framework for the approval of AI-assisted medical devices, it is noticeable that the guiding principles remain very similar globally. However, these are often implemented in a haphazard way.

The article calls for a structured approach to the regulatory approval process. This is based around key principles of medical ethics: autonomy, beneficence, and fair distribution of healthcare resources.

Autonomy

Autonomy concerns the importance of informed consent, self-determination, and the right to refuse or accept treatment. In other words, the patient must maintain full control over the decision-making process about their health.

In terms of AI, different national legislation shapes whether or not patients retain data ownership, and the extent to which users can decide how their data can be used by a healthcare facility or company.

Medical device. Image by Orangeboxes2 – Own work, CC0

Beneficence

Beneficence obliges the physician to act only for the benefit of the patient and to avoid anything that could oppose the patient’s well-being. This needs to run in tandem with non-maleficence, the principle that prevents physicians from harming patients in any capacity.

In terms of AI, this means ensuring that AI-based devices lead to timely intervention and preventive measures.

This means avoiding AI algorithms being trained using biased datasets. The risk otherwise is that AI can perpetuate and amplify existing biases, leading to discriminatory and unfair outcomes.

Fair distribution

Fair distribution is part of the concept of ‘justice’ and this includes having appropriate measures in place to ensure that no implicit bias arises from the use of AI-based devices and that unfair discrimination is eliminated during the development process.

Explainability

An important area is building public trust in AI. Here the paper calls for “explainability and transparency of AI algorithms” as “the characteristics that are crucial to ensuring the trust and accountability of these systems.” In other words, if the public do not understand what an AI algorithm actually does and cannot see how their data is being handled, then public acceptance of the AI, and willingness to share data or participate in a trial, is diminished.

Explainability is not a purely technological issue; it invokes a host of medical, legal, ethical, and societal questions.

As a suitable outcome, the paper recommends regulating quality management, risk assessment, and data privacy to help build the trust needed to promote the adoption of AI in healthcare.

The research appears in the journal Cureus, titled “Integrating Ethical Principles Into the Regulation of AI-Driven Medical Software.”



Written By Dr. Tim Sandle

Dr. Tim Sandle is Digital Journal's Editor-at-Large for science news. Tim specializes in science, technology, environmental, business, and health journalism. He is additionally a practising microbiologist and an author. He is also interested in history, politics, and current affairs.