The use of artificial intelligence (AI) in life sciences, or “Life Tech,” has increased at a rapid pace. According to the World Intellectual Property Organization (WIPO), there has been “a shift from theoretical research to the use of AI technologies in commercial products and services,” as reflected in the changing ratio of scientific papers to patent applications over the past decade.1 Indeed, while research into AI began in earnest in the 1950s, more than 1.6 million scientific papers have since been published on AI, and more than half of identified AI inventions have been published in the last six years alone.2,3 A review article in Nature Medicine reported last year that, despite few peer-reviewed publications on the use of machine learning technologies in medical devices, FDA approvals of AI as medical devices have been accelerating.4 Many of these FDA approvals relate to image analysis for diagnostic purposes, such as QuantX, the first AI platform to evaluate breast abnormalities; Aidoc, which detects acute intracranial hemorrhages in head CT scans, helping radiologists prioritize patient injuries; and IDx-DR, which analyzes retinal images to detect diabetic retinopathy. Meanwhile, AI applications in non-imaging medical fields are also actively being developed, including tools for diagnosing autism, selecting embryos for in vitro fertilization, and predicting suicide and psychotic episodes.4 In short, new discoveries in life sciences, particularly in diagnostics, medical devices, and drug delivery platforms, are being fueled by advances in the use of AI for such purposes.
With the rapid growth of Life Tech discoveries comes a need to adapt the patent and regulatory frameworks governing the approval, use, and protection of those discoveries. Congress and administrative agencies, such as the United States Patent & Trademark Office (PTO) and the FDA, are working to clarify the law and regulations impacting the market for AI technologies. Some of the major issues they are encountering relate to patent rights or regulatory oversight:
- In light of judicial bans on patents directed to purely “abstract ideas,” how should claims to AI algorithms be classified?
- How can a patent specification be drafted to enable or provide adequate written description for a technology expressly designed to evolve?
- Can patents be granted on technological innovations crafted solely through machine learning and without human involvement?
- How do we ensure the safety and efficacy of algorithms designed to evolve without human input or supervision?
This article highlights these key questions, which are no longer purely academic exercises, but real challenges faced in today’s evolving world of AI innovation.
What is AI?
In a whitepaper published in April 2019, the FDA adopted John McCarthy’s definition of AI as “the science and engineering of making intelligent machines, especially intelligent computer programs.”2 The FDA approves the marketing of certain computer programs as medical devices, or SaMDs (software as a medical device), which the International Medical Device Regulators Forum (IMDRF) defines as software intended to be used for one or more medical purposes that perform these purposes without being part of a hardware medical device. Some of these SaMDs can be categorized as AI, and the FDA has established a faster, more flexible regulatory review program for such AI-based SaMDs.6
AI-based SaMDs can be further subcategorized as using either “locked” or “adaptive” algorithms. The FDA defines a “locked” algorithm as one that “provides the same result each time the same input is applied to it and does not change with use,” such as “static look-up tables, decision trees, and complex classifiers.”5 The FDA notes that the AI devices approved for marketing to date have typically used such locked algorithms. In contrast, “adaptive” or “continuously learning” algorithms can change their behavior using a defined learning process.5 For such algorithms, the same set of inputs may produce a different output over time, depending on what the algorithm learns as more data are collected.
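To make the locked/adaptive distinction concrete, the following is a minimal Python sketch; the class names, threshold values, and update rule are hypothetical illustrations, not drawn from any actual device or FDA submission. The locked classifier always maps the same input to the same output, while the adaptive classifier revises its own decision threshold through a defined learning process as confirmed cases accumulate.

```python
# Hypothetical sketch of the FDA's "locked" vs. "adaptive" distinction.
# All names, numbers, and the update rule are illustrative only.

class LockedClassifier:
    """A locked algorithm: the same input always yields the same output."""

    THRESHOLD = 0.5  # fixed at design time; never changes with use

    def predict(self, risk_score):
        return "flag" if risk_score >= self.THRESHOLD else "clear"


class AdaptiveClassifier:
    """An adaptive algorithm: a defined learning process updates its
    behavior, so the same input may later yield a different output."""

    def __init__(self, threshold=0.5):
        self.threshold = threshold
        self.positive_scores = []  # confirmed positive cases seen in use

    def update(self, risk_score, truly_positive):
        # Defined learning process: pull the threshold toward the mean
        # score of confirmed positive cases collected so far.
        if truly_positive:
            self.positive_scores.append(risk_score)
            mean_positive = sum(self.positive_scores) / len(self.positive_scores)
            self.threshold = 0.5 * self.threshold + 0.5 * mean_positive

    def predict(self, risk_score):
        return "flag" if risk_score >= self.threshold else "clear"


locked, adaptive = LockedClassifier(), AdaptiveClassifier()
print(locked.predict(0.4), adaptive.predict(0.4))  # clear clear
adaptive.update(0.3, truly_positive=True)          # threshold drops to 0.40
print(locked.predict(0.4), adaptive.predict(0.4))  # clear flag: same input, new output
```

The regulatory difficulty follows directly from the last line: after deployment, the adaptive device no longer behaves the way it did when it was reviewed.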
Adaptive algorithms that undergo machine learning (ML), i.e., algorithms that can learn from the inputs provided, are being pursued for medical applications. In particular, with respect to SaMDs, clinicians will increasingly use technology that incorporates deep learning, in which deep neural networks (DNNs) interpret images and other inputs. These inputs are processed through multiple hidden layers of connected, so-called “neurons” that ultimately produce an output.4 The critical feature that raises special challenges for patenting and FDA regulation is that these hidden layers are not designed by humans; their contents are determined by the data itself. Such DNNs are thus continuously adaptive, meaning their hidden layers may change after the initial algorithm is developed. And precisely because these hidden layers are driven by the DNN rather than designed by humans, DNN algorithms are considered a “black box”7: it is not immediately apparent, and may not even be possible for a human to ascertain, how a DNN gets from input to output. Not knowing how a DNN gets from input to output makes it harder to know, at the outset, whether the DNN will get it right.
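To illustrate why such networks resist inspection, here is a minimal sketch of a DNN forward pass, assuming NumPy; the layer sizes are arbitrary and the “learned” weights are random stand-ins. The output is interpretable, but it emerges from thousands of numeric parameters set by training data rather than written by a human designer.

```python
import numpy as np

# Minimal sketch of a DNN's inference step. Layer sizes are arbitrary; in a
# real trained network, the weight matrices below would be set by a learning
# algorithm from the training data, not authored by a human.
rng = np.random.default_rng(0)
layer_sizes = [64, 32, 16, 2]  # input -> two hidden layers -> two outputs

# Random stand-ins for learned parameters (for illustration only).
weights = [rng.standard_normal((m, n)) for m, n in zip(layer_sizes, layer_sizes[1:])]
biases = [rng.standard_normal(n) for n in layer_sizes[1:]]

def forward(x):
    """Propagate an input (e.g., image features) through the hidden layers."""
    for W, b in zip(weights[:-1], biases[:-1]):
        x = np.maximum(0.0, x @ W + b)  # one hidden layer of ReLU "neurons"
    logits = x @ weights[-1] + biases[-1]
    return np.exp(logits) / np.exp(logits).sum()  # probabilities over two classes

probs = forward(rng.standard_normal(64))
# The output reads like a diagnosis (e.g., P(abnormal)), but the path from
# input to output runs through roughly 2,600 learned numbers that no human
# wrote down: the "black box" described above.
print(probs)
```

Nothing in this sketch explains why a given input produced a given probability; that opacity, multiplied across the millions of parameters in real clinical models, is what the patent and regulatory questions below grapple with.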
Challenges to Patenting AI/ML-Based SaMDs
Given increasing interest in promoting AI research and commercializing AI applications, an immediate problem to address is how the scope of patent protection for AI can be redefined to promote and regulate the introduction of AI technology into the marketplace. The following section covers the intersection of AI research with the grant of U.S. patents, especially with respect to patent subject matter eligibility.
A. Subject matter eligibility
The language in 35 U.S.C. § 101 might be read as permissive: “Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.” However, the courts have historically interpreted § 101 as creating a bar to patenting certain categories of subject matter. To overcome this bar, patent claims must pass the two-part Mayo/Alice test, which asks:
- Is the claim directed to a patent-ineligible concept, i.e., a law of nature, natural phenomenon, or an abstract idea?
- If so, do the elements of each claim, both individually and “as an ordered combination,” add an inventive concept, such that the nature of the claim is transformed into a patent-eligible application?
See Alice Corp. Pty. Ltd. v. CLS Bank Int’l, 134 S. Ct. 2347, 2355 (2014).
Many software patent claims have been considered directed to “abstract ideas” under the “mental steps” doctrine, in which claims reciting methods that “can be performed in the human mind, or by a human using a pen and paper” are considered patent-ineligible. See, e.g., CyberSource Corp. v. Retail Decisions, Inc., 654 F.3d 1366, 1373 (Fed. Cir. 2011).8 The Federal Circuit, however, has cautioned that certain software programs can still “make non-abstract improvements to computer technology,” and thus may not even be “directed to” an abstract idea at step one of the Mayo/Alice test. Enfish, LLC v. Microsoft Corp., 822 F.3d 1327, 1335 (Fed. Cir. 2016).
The PTO has adopted a particularly AI-friendly approach to patentability, issuing guidance last year that specifically addresses subject matter eligibility for AI patent claims.9 The guidance clarifies PTO examination procedure under the judicially created Mayo/Alice two-step test: first, it identifies particular categories of claims that may be considered directed to “abstract ideas,” and second, it instructs that a claim will not be considered “directed to” a judicial exception unless it recites a judicial exception that is not integrated into a practical application.
The PTO expressly addressed DNNs in the examples of abstract ideas that accompany the new guidance. Example 39 sets forth a hypothetical fact pattern relating to claims directed to a computer-implemented method of training a neural network for facial detection. The PTO explained that while the hypothetical claim formally recites a process, it does not fall under a judicial exception to eligible subject matter. According to the PTO, the claim “does not recite a mental process because the steps are not practically performed in the human mind.” This guidance has already changed the § 101 analysis during prosecution: examiners have withdrawn § 101 rejections for patent claims directed to neural networks in light of it.10
Congress has likewise sought to curb the power of § 101 to bar inventions from patent eligibility. On May 22, 2019, Republican and Democratic senators and representatives released a bipartisan, bicameral draft bill that aimed to reform 35 U.S.C. § 101. According to the text of the proposed bill, “The provisions of section 101 shall be construed in favor of eligibility. No implicit or other judicially created exceptions to subject matter eligibility, including ‘abstract ideas,’ ‘laws of nature,’ or ‘natural phenomena,’ shall be used to determine patent eligibility under section 101, and all cases establishing or interpreting those exceptions to eligibility are hereby abrogated” (emphasis added). As of this writing, however, no further action has been taken on this proposed legislation.
In summary, while courts may still find patents directed to AI algorithms susceptible to challenge under the abstract idea judicial exception to patentability, there has been increased interest by the U.S. government to promote patenting of AI-related inventions. It’s very possible that in the future, subject matter eligibility will not present a bar to AI patenting at all.
B. Other potential patent considerations
Aside from subject matter eligibility, AI raises challenges to current patent law doctrine that are not easy to resolve. For example, for a patent to be valid under judicial interpretation of 35 U.S.C. § 112(a), it must provide sufficient written description support such that “the disclosure of the application relied upon reasonably conveys to those skilled in the art that the inventor had possession of the claimed subject matter as of the filing date.” Ariad Pharms., Inc. v. Eli Lilly & Co., 598 F.3d 1336, 1351 (Fed. Cir. 2010). Section 112(a) also requires that the patent’s description of the invention “enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same.”
Complying with § 112(a) thus presents particular challenges for AI patents, such as drafting a specification that shows possession of an algorithm expressly programmed to change after its initial design, and enabling a person of ordinary skill in the art to make and use the claimed invention. For that matter, how can “a person of ordinary skill in the art” be defined for an invention that may arguably be initiated by a deep learning algorithm? Who is the “person” who invents a novel and useful AI algorithm that was not drafted or even supervised by a human being? And who owns the AI-invented invention?
The PTO published requests for comment on these and related questions on August 27, 2019 and October 30, 2019. As of this writing, however, the EPO, UKIPO, and PTO have all rejected claims of inventorship by AI in a series of applications by Stephen Thaler, who named as inventor a “creativity machine” called “DABUS.” In its decision on one of these applications, U.S. Patent Application No. 16/524,350, titled “Devices and Methods for Attracting Enhanced Attention,” the PTO relied on the Federal Circuit’s opinion in Univ. of Utah v. Max-Planck-Gesellschaft zur Forderung der Wissenschaften e.V., wherein the Federal Circuit stated that “inventors” must be natural persons, not corporations or sovereigns. 734 F.3d 1315, 1323 (Fed. Cir. 2013).
Challenges to Regulating AI/ML-Based SaMDs
Despite the U.S. government’s apparent commitment to prioritizing AI research last year, the very nature of ML, and the “black box” of DNNs, raises important questions regarding the use of and reliance on AI products. If the route from a DNN’s input to its output is not discernible, how can a human ultimately trust the accuracy of that output? Who should be held accountable when a DNN’s output is wrong? Even if accountability can be assigned, how can we ensure that a DNN’s output is reliably accurate? Given that data accumulation is necessary to maximize the accuracy and reliability of a DNN’s output, how should personal data be protected? How can algorithm bias be avoided or minimized? How frequently should algorithm updates be monitored, and when should an update qualify as a new device requiring new marketing authorization? What safeguards can be put in place to prevent cybersecurity breaches and data manipulation that alter outputs?
These issues are especially important in the context of medical devices, where, as Eric Topol reported in Nature Medicine, there are few peer-reviewed publications, and yet FDA approvals of SaMDs have accelerated.4 The next section discusses how the three countries with the highest numbers of AI patents—and thus arguably at the forefront of AI application and commercialization—are approaching the regulation of AI as medical devices.
A. United States
Given the increasing use of and reliance on DNN outputs, Senators Cory Booker (D-NJ) and Ron Wyden (D-OR) and Representative Yvette D. Clarke (D-NY) introduced the Algorithmic Accountability Act on April 10, 2019, which sought to address unpredictability and potential bias in DNN decision-making. The bill would authorize the Federal Trade Commission (FTC) to issue regulations requiring companies to conduct impact assessments of their automated decision systems, assessing and correcting for impacts on accuracy, bias, privacy, and security. As Senator Wyden explained, “[i]t’s a fundamental way in which decisions are made now—algorithms and computers… And it seems to me that there’s not much transparency, not much disclosure, and that’s what we sought to do in our bill.” No further action has been taken on this bill; in any event, as discussed above, it may be impossible to completely guarantee the accuracy of any given AI algorithm’s output.
The FDA, on the other hand, seems to be moving in a more nuanced direction with AI/ML-based SaMDs. The FDA has proposed a regulatory framework for such SaMDs that minimizes regulatory intervention, employing a risk-based approach that regulates post-marketing AI/ML-driven updates only when an update or modification “significantly affects device performance, or safety and effectiveness; the modification is to the device’s intended use; or the modification introduces a major change to the SaMD algorithm.” Recognizing that “[t]he traditional paradigm of medical device regulation was not designed for adaptive AI/ML technologies, which have the potential to adapt and optimize device performance in real-time to continuously improve healthcare for patients,” the FDA introduced the concept of a “total product lifecycle (TPLC) regulatory approach” in the hopes of “facilitat[ing] a rapid cycle of product improvement” while allowing “these devices to continually improve while providing effective safeguards.”5
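As a rough illustration of this risk-based trigger, consider the following sketch; the field names and decision logic are a hypothetical paraphrase of the quoted criteria, not an FDA-specified algorithm.

```python
# Hypothetical paraphrase of the proposed risk-based review trigger for
# modifications to an AI/ML-based SaMD. Field names are illustrative only.

from dataclasses import dataclass

@dataclass
class Modification:
    affects_performance_or_safety: bool  # "significantly affects device performance, or safety and effectiveness"
    changes_intended_use: bool           # "the modification is to the device's intended use"
    major_algorithm_change: bool         # "introduces a major change to the SaMD algorithm"

def requires_review(mod: Modification) -> bool:
    """Under the proposed framework, only a modification meeting any one of
    the three quoted criteria would draw regulatory intervention."""
    return (mod.affects_performance_or_safety
            or mod.changes_intended_use
            or mod.major_algorithm_change)

# A routine retraining that leaves performance, intended use, and algorithm
# structure materially unchanged would not, on this reading, trigger review:
print(requires_review(Modification(False, False, False)))  # False
print(requires_review(Modification(True, False, False)))   # True
```

The open question raised by commenters, discussed next, is who or what decides that a continuously self-updating algorithm has crossed one of these lines.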
Despite the FDA’s recognition of the unique nature of AI/ML-based SaMDs, public comments on the FDA’s proposed framework emphasized concerns that the agency had not fully acknowledged how “starkly different” adaptive, continuously learning algorithms are, and that they must be treated differently from “locked” algorithms. The American Medical Informatics Association, for example, commented that “many AI/ML-based SaMD are intended to perform continuous updates based on real-time or near-real-time data and that the algorithms will constantly adapt as a result,” and that evaluations of adaptive algorithms should therefore occur on a set periodic basis, not only when a human recognizes that major modifications to the software have been implemented. It may be that AI/ML-based technologies are still too nascent for the FDA to establish an entirely new framework for evaluating them. The FDA may have to watch how these devices learn from the data they gather, and learn alongside them how they should be regulated.
B. China and Japan
Together with the United States, China and Japan account for 78% of total AI patent filings; of the top companies filing AI-related patents, 12 are based in Japan, 3 in the United States, and 2 in China.3 In 2017, both China and Japan articulated their intent to take and maintain leadership in AI research. China issued a “New Generation of Artificial Intelligence Development Plan” in July 2017, declaring its goal of becoming a global innovation center in the field by 2030. Because patient populations are much larger and privacy regulations less stringent in China than in the United States, gathering medical data for AI development is much easier for Chinese AI companies, and China may for that reason wind up with more reliable and accurate AI than its U.S. counterparts.11
Japan issued its own plan in March 2017, titled the “Artificial Intelligence Technology Strategy.” The Japanese government plans to invest $100 million over the next five years to boost the development and adoption of AI technologies in hospitals, including a goal of opening 10 AI hospitals by the end of 2022, so that doctors can concentrate on patients while AI analyzes data and manages paperwork. To strengthen privacy safeguards while still encouraging businesses to mine personal information for machine learning, Japan also revised its personal data protection law in March 2017, requiring that personal data be “anonymized” but allowing third parties to use such anonymized data without individual consent.
Like the United States, both China and Japan are still working out how to regulate the use of and reliance on AI/ML-based SaMDs. In 2018, Japan’s health ministry announced its intent to develop guidelines to expedite the use of AI in health care and to evaluate the safety and efficacy of AI-based medical equipment. In February 2019, China’s Center for Medical Device Evaluation issued evaluation guidelines for AI/ML-based medical devices. Without unified industry standards for risk classification and evaluation of medical AI devices, however, few products can be approved for marketing.
The Future
As the FDA recognized in its proposed framework, current regulatory paradigms are not well suited to adaptive technologies capable of learning and evolving without human intervention, and such technologies may be difficult to regulate given the “black box” of a DNN’s decision-making process. The same is true of the traditional paradigm of patent law. A new field of AI research, termed “XAI” or “explainable AI,” seeks to teach AI algorithms to explain their own processes, and may ease the regulatory and patent challenges presented by current versions of AI/ML-based devices. Until then, however, current trends suggest that we will see more oversight and regulation as governments press forward to lead the way in AI research.
References
- WIPO Technology Trends 2019: Artificial Intelligence. World Intellectual Property Organization. P. 14. Retrieved from https://www.wipo.int/edocs/pubdocs/en/wipo_pub_1055.pdf.
- McCarthy, J. (2007). “What is Artificial Intelligence?” Retrieved from http://jmc.stanford.edu/articles/whatisai/whatisai.pdf.
- WIPO Technology Trends 2019: Artificial Intelligence. World Intellectual Property Organization. P. 13–14. Retrieved from https://www.wipo.int/edocs/pubdocs/en/wipo_pub_1055.pdf.
- Topol, E. High-performance medicine: the convergence of human and artificial intelligence. Nature Med. 25:44-56 (2019).
- FDA. (April 2019). “Proposed Regulatory Framework for Modifications to Artificial Intelligence/Machine Learning (AI/ML)-Based Software as a Medical Device (SaMD).” Discussion Paper and Request for Feedback. Retrieved from https://www.fda.gov/downloads/MedicalDevices/DigitalHealth/SoftwareasaMedicalDevice/UCM635052.pdf.
- FDA. (January 2019). Software Precertification (Pre-Cert) Pilot Program. Retrieved from https://www.fda.gov/medical-devices/digital-health/digital-health-software-precertification-pre-cert-program.
- Kuang, C. (November 21, 2017). “Can A.I. Be Taught to Explain Itself?” The New York Times. Retrieved from https://www.nytimes.com/2017/11/21/magazine/can-ai-be-taught-to-explain-itself.html.
- According to one report, 24% of 175 federal court decisions invalidating patents under Section 101 in the two years after Alice relied upon the “mental steps” doctrine. Robert R. Sachs, The Mind as Computer Metaphor: Gottschalk v. Benson and the Mistaken Application of Mental Steps to Software Inventions, Bilski Blog (April 6, 2016).
- Available at https://www.uspto.gov/about-us/news-updates/us-patent-and-trademark-office-announces-revised-guidance-determining-subject. The PTO further published an update on October 18, 2019 to its Patent Eligibility Guidance that, according to the PTO, does not change the Guidance but is “intended to assist Office personnel in applying the 2019 Patent Eligibility Guidance.” October 2019 Patent Eligibility Guidance Update, 84 Fed. Reg. 202.
- See, e.g., February 5, 2019 Notice of Allowance for DeepMind’s U.S. Pat. No. 10,346,741 at 3, stating “The rejection under 35 USC 101 has been withdrawn in light of the new 2019 PEG as the training of deep neural network using reinforcement learning is not directed to a judicial exception.”
- Simonite, T. (January 8, 2019). “How Health Care Data and Lax Rules Help China Prosper in AI”. Wired. Retrieved from https://www.wired.com/story/health-care-data-lax-rules-help-china-prosper-ai/?verso=true