Soapbox
Regulating AI-Based Medical Devices: A Moving Target?

Artificial intelligence (AI) and machine learning (ML) technologies have the potential to dramatically improve patient care. Technologies that use software as a medical device (SaMD) are part of a growing industry, and the potential applications that could improve quality of life and patient outcomes are too numerous to list. This year, a company called StethoMe developed an AI-assisted stethoscope that patients can use at home. The device records a patient's heartbeat, and its AI decodes the audio before sending it to a physician, who can then interpret the sound as if they had examined the patient in person. In April 2019, bioengineers at Boston Children's Hospital reported the first demonstration of a robot that navigated autonomously inside the body, without a surgeon's guidance, to help repair a leaky heart valve.

Many AI and machine learning-based devices continuously learn and change as they absorb new data. This is uncharted territory for organizations that regulate medical devices, like the FDA. In the past, the FDA could count on devices remaining the same once cleared, and it could require a new 510(k) submission when a device was modified. That process doesn't translate well to devices that may change dozens, if not hundreds, of times a day as each new data point arrives. Nevertheless, the FDA is no stranger to change. Since the agency's founding in 1906, the laws governing the regulation of drugs and medical devices have had to evolve as new technologies came into existence. Our move into the future of medical devices based on AI/ML is no different.

In spring 2019, the FDA released a proposed regulatory framework for AI/ML-based SaMD, the first step in an ongoing discussion with industry stakeholders about what the regulations should be. How a given SaMD would comply depends on the types of changes involved. A premarket submission to the FDA would be required when a change significantly affects device performance or safety and effectiveness, when a modification changes the device's intended use, or when a modification introduces a major change to the SaMD algorithm. In drafting the guidelines, the FDA had to balance the benefits of AI/ML in SaMD against a reasonable assurance of safety and effectiveness. To date, the FDA has mostly approved devices whose algorithms are "locked" prior to distribution. In other words, a given set of inputs to the device yields the same set of outputs after distribution as it did before distribution. However, many future SaMDs will have algorithms that continue to change and learn even after distribution.
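To make the distinction concrete, here is a minimal sketch in Python using scikit-learn. The model and data are illustrative assumptions, not anything drawn from the FDA framework; the point is only that a locked model's outputs stay fixed while an adaptive model's outputs can drift as it learns from post-distribution data.

```python
# Illustrative sketch of the difference between a "locked" algorithm and
# one that continues learning after distribution. The model and data are
# hypothetical and stand in for no real SaMD.
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(0)
X_train = rng.normal(size=(100, 4))          # pre-distribution training data
y_train = (X_train[:, 0] > 0).astype(int)

# Locked: trained once, then frozen at the moment of distribution.
locked = SGDClassifier(random_state=0).fit(X_train, y_train)

# Adaptive: identical at distribution, but it will keep updating.
adaptive = SGDClassifier(random_state=0).fit(X_train, y_train)

x_patient = rng.normal(size=(1, 4))          # one input, queried before updates
print(locked.predict(x_patient), adaptive.predict(x_patient))   # identical

# Post-distribution: new data arrives and only the adaptive model learns.
X_new = rng.normal(size=(200, 4))
y_new = (X_new[:, 1] > 0).astype(int)        # the underlying pattern has drifted
adaptive.partial_fit(X_new, y_new)

# The locked model still answers as it did before distribution; the
# adaptive model's answer to the same input may now differ.
print(locked.predict(x_patient), adaptive.predict(x_patient))
```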

In addition, to translate existing risk categories over to AI/ML-based SaMD, the FDA will likely look at a combination of the seriousness of the medical condition addressed and the significance of the information the SaMD provides. At the most strictly regulated end of the spectrum would be a category IV device, which treats or diagnoses a critical health condition; for example, an app that uses AI to identify melanoma, a serious form of skin cancer. At the opposite end would be a category I device that informs clinical management of a non-serious condition, such as an app that monitors the healing of a scar.

State of healthcare         Significance of information provided by SaMD to healthcare decision
situation or condition      Treat or diagnose    Drive clinical management    Inform clinical management
Critical                    IV                   III                          II
Serious                     III                  II                           I
Non-serious                 II                   I                            I
Source: Proposed Regulatory Framework for Modifications to Artificial Intelligence/Machine Learning (AI/ML)-Based Software as a Medical Device (SaMD), FDA
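Reading the table as a lookup makes the logic plain. The following sketch is my own encoding (the function and dictionary names are invented); only the category values come from the table above.

```python
# Risk category lookup based on the FDA/IMDRF table above. The names here
# are illustrative; only the category values (I-IV) come from the framework.
RISK_CATEGORY = {
    # (state of condition, significance of information) -> category
    ("critical",    "treat or diagnose"):          "IV",
    ("critical",    "drive clinical management"):  "III",
    ("critical",    "inform clinical management"): "II",
    ("serious",     "treat or diagnose"):          "III",
    ("serious",     "drive clinical management"):  "II",
    ("serious",     "inform clinical management"): "I",
    ("non-serious", "treat or diagnose"):          "II",
    ("non-serious", "drive clinical management"):  "I",
    ("non-serious", "inform clinical management"): "I",
}

def samd_risk_category(condition_state: str, information_significance: str) -> str:
    """Return the SaMD risk category (I-IV) for a device."""
    return RISK_CATEGORY[(condition_state.lower(), information_significance.lower())]

# The melanoma-identification app from the example above:
print(samd_risk_category("critical", "treat or diagnose"))              # IV
# The scar-monitoring app:
print(samd_risk_category("non-serious", "inform clinical management"))  # I
```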

Lower risk devices will likely be able to get by without jumping through too many hoops, so long as they have well-defined and documented specifications of anticipated modifications (SaMD Pre-Specifications, or SPS) and Algorithm Change Protocols (ACPs) to control the risks those modifications carry. An ACP spells out procedures for data management, re-training, performance evaluation, and updates. Manufacturers of higher risk devices should anticipate the need for more documentation, approvals, and general oversight.
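As a rough illustration, an SPS/ACP pair could be documented as structured metadata along these lines. Every field name and value here is hypothetical; the FDA draft names these components but prescribes no particular format.

```python
# Hypothetical SPS/ACP documentation sketched as structured metadata.
# All field names and values below are illustrative, not prescribed by
# the FDA draft; they simply mirror the components described above.
sps = {  # SaMD Pre-Specifications: the modifications anticipated in advance
    "anticipated_modifications": [
        "retrain on newly collected patient data",
        "tune decision threshold to improve sensitivity",
    ],
    "unchanged": ["intended use", "input data types"],
}

acp = {  # Algorithm Change Protocol: how each modification is controlled
    "data_management": "new data curated and labeled under a written protocol",
    "retraining": "periodic retraining on the curated dataset",
    "performance_evaluation": {
        "metric": "sensitivity/specificity on a held-out reference set",
        "acceptance": "no regression beyond pre-specified bounds",
    },
    "update_procedure": "staged rollout with rollback on failed checks",
}
```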

All of this is just the beginning. As recently as June 2019, the American Medical Informatics Association responded with suggestions for fine-tuning the regulations. Alongside cybersecurity concerns, the quality of training data and the bias that arises when patients don't match the populations an algorithm was trained on stood out among the criticisms. The association recommended that the FDA include language in the framework to protect against algorithm bias affecting persons of "particular ethnicities, genders, ages, socioeconomic backgrounds, and physical and cognitive abilities." This dialogue between stakeholders may delay approvals for some highly anticipated technologies, but it's necessary to ensure the best possible outcomes for all involved.
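To see why that mismatch matters, here is a minimal sketch using entirely synthetic data and hypothetical groups: a model that looks accurate overall can still perform near chance on an under-represented subgroup.

```python
# Illustrative only: aggregate accuracy can mask subgroup failures when
# the training population under-represents a group. Data are synthetic.
import numpy as np

rng = np.random.default_rng(1)
groups = np.array(["A"] * 900 + ["B"] * 100)   # group B is under-represented
y_true = rng.integers(0, 2, size=1000)

# Hypothetical model: accurate on group A, near-chance on group B.
y_pred = np.where(
    groups == "A",
    np.where(rng.random(1000) < 0.95, y_true, 1 - y_true),
    np.where(rng.random(1000) < 0.55, y_true, 1 - y_true),
)

print("overall accuracy:", (y_pred == y_true).mean())   # looks fine (~0.91)
for g in ("A", "B"):
    mask = groups == g
    print(f"group {g} accuracy:", (y_pred[mask] == y_true[mask]).mean())
```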
