MTI Viewpoint
Insights shared by industry on healthcare and the advancement of medical technology.

Jonathan Burk, Software Engineering Director, Full Spectrum. Specializing in AI, cloud, and connected devices, Jon works with Full Spectrum clients using Gen AI to accelerate time-to-market for complex medical device projects. He has over 25 years of commercial software development experience, including enterprise healthcare solutions at Foliage and leadership of cloud-native initiatives at Cloud Technology Partners.
AI has rapidly become a standard tool in medical device product development, offering the potential to dramatically accelerate documentation, testing, and design activities. But without formal policies and governance, MedTech organizations face risks to intellectual property, product quality, and ultimately patient safety. Thoughtful AI governance enables development teams to capture efficiency gains while maintaining the rigor that the industry demands.
The AI Productivity Opportunity Is Real
The documentation burden in MedTech is real, and it can tax development teams heavily enough to threaten innovation. When the engineers tasked with bringing new products to market are hampered by paperwork, they lose momentum, burn time on tasks they may be ill-suited for, and get lost in the smallest details.
AI is changing that. Software teams in MedTech and beyond have already embraced AI for project setup, code generation, and especially unit testing. For unit testing in particular, the productivity gains, measured at 70% in a study by Diffblue, are eye-popping. But it’s documentation where product teams can offload even more tedious and distracting work.
Gen AI excels at expanding on and reorganizing your thoughts to better fit a specific audience or format. Early in the development process, this can take the form of turning a meeting transcript or market research interviews into user needs. Later in the process, developers can use AI to turn a plainspoken description of how a system is intended to work into architecture and design documents. And the tedious task of building out test cases traced to requirements becomes fast, accurate, and painless.
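As a concrete illustration of that last step, the sketch below drafts verification test cases traced to a single requirement. It assumes the OpenAI Python SDK; the requirement ID, requirement text, and model name are placeholders for illustration, and the output is a draft for engineer review, not an approved artifact.

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

requirement = {
    "id": "SRS-042",  # hypothetical requirement identifier
    "text": "The pump shall raise an audible alarm within 2 seconds of detecting an occlusion.",
}

prompt = (
    "Draft verification test cases for the software requirement below. For each test case, "
    "include a title, preconditions, steps, expected results, and a trace reference to "
    f"{requirement['id']}.\n\n{requirement['text']}"
)

response = client.chat.completions.create(
    model="gpt-4o",  # assumed model name; substitute whatever your organization has approved
    messages=[{"role": "user", "content": prompt}],
)

print(response.choices[0].message.content)  # a draft only, pending human review and approval

Even a rough first draft like this removes most of the mechanical effort from test documentation, leaving the engineer to correct, refine, and approve rather than type.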
These are not just efficiency gains. They represent a fundamental shift in where skilled engineers spend their time, enabling more focus on analysis, review, and innovation.
MedTech Is Primed for Responsible AI
The medical device development lifecycle is structured in a way that actually complements responsible AI use. Standards like IEC 62304 prescribe formal review cycles, stage gates, design reviews, and verification and validation activities. These are exactly the checkpoints at which human reviewers can examine, correct, and approve AI-generated artifacts before they advance to the next phase.
This is critical because AI, for all its productivity potential, is prone to hallucinations. Large language models (LLMs) can generate seemingly plausible but incorrect content, miss nuance, or produce outputs inconsistent with requirements. When errors are introduced in early-stage documents that will inform implementation and testing, defects can propagate through the entire development lifecycle.
This shouldn’t dissuade product teams from using AI; it is precisely why the right policies and governance are essential. This is where the human-in-the-loop concept comes in. Every AI output must be reviewed, verified, and approved by a qualified engineer before it is treated as authoritative. In the context of IEC 62304 and FDA design controls, this is not an add-on; it is simply good engineering practice applied to a new class of tool. In other words, most MedTech firms already have the foundational processes in place to support responsible AI use.
Shadow AI
Perhaps your organization’s leadership hasn’t decided which tools are approved, what data can be shared with external AI services, or how AI-generated content should be identified and reviewed. If you don’t have a formal AI policy, your teams are probably living in the Wild West of AI.
Absent specific guidance and process, developers do what capable engineers always do: they find tools that help them work faster. This may include using consumer-grade AI assistants to draft documents, generate test cases, summarize requirements, and answer technical questions. Although there is no ill intent, they likely do this quietly, either out of concern for personal consequences or out of ignorance of the organizational risks.
This is the shadow AI pattern, and it is the worst possible outcome for a MedTech organization. When AI use is undisclosed:
- Intellectual property is exposed. Consumer AI services may use submitted content for model training. Proprietary design details, unpublished clinical data, and competitive information can leave the organization entirely.
- Quality controls are bypassed. AI-generated content that skips review can introduce errors into design documentation, test artifacts, and risk analyses — errors that may not surface until verification, validation, or worse, post-release.
- Regulatory exposure grows. If a regulatory submission or audit uncovers undocumented AI use in design artifacts, the organization may be unable to demonstrate adequate design controls.
- There is no organizational learning. When AI use is hidden, the organization cannot measure it, manage it, or improve it.
The uncomfortable truth is that withholding policy does not prevent AI adoption — it only ensures that adoption happens outside of any governance structure. The question for MedTech leaders is not whether their teams are using AI. It is whether they know about it.
Good AI Governance in MedTech
An effective AI governance framework for a medical device development organization does not need to increase the burden of existing processes. Its purpose is to enable confident, responsible AI use. A useful framework addresses four key areas:
Approved tools and data handling standards
Not all AI tools carry the same risk profile. Enterprise-grade AI plans offer stronger protection against IP leakage than free consumer-grade alternatives. Organizations should define which tools are approved for which categories of use and specify what information may not be submitted to external services (for example, prohibiting the use of protected health information, or PHI, with AI services).
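Data handling rules are easier to follow when backed by simple tooling. Below is a minimal, hypothetical pre-submission check in Python that flags obvious PHI-like identifiers before a prompt leaves the organization; the patterns are illustrative only and are no substitute for a proper data loss prevention program.

import re

# Illustrative patterns only; a production check would be far more comprehensive.
PHI_PATTERNS = [
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),                # US SSN-style identifiers
    re.compile(r"\bMRN[\s:#]*\d{6,}\b", re.IGNORECASE),  # medical record numbers
]

def safe_to_submit(prompt: str) -> bool:
    """Return False if the prompt appears to contain PHI-like identifiers."""
    return not any(p.search(prompt) for p in PHI_PATTERNS)

print(safe_to_submit("Summarize these user needs for an insulin pump."))     # True
print(safe_to_submit("Patient MRN: 00123456 reported an occlusion alarm."))  # False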
Human-in-the-loop requirements
Governance policy should specify that AI-generated content is treated as a draft input, not a final artifact. In practice, this means AI outputs are subject to the same review and approval processes as any other authored content under the organization’s design control procedures. For medical device development, this aligns naturally with the existing review structure. An AI-drafted software requirements specification should go through the same peer review and formal approval process as a traditionally authored one.
Identification and traceability of AI contributions
Organizations should determine whether and how AI use is documented in the design history file or quality management system. While there is not yet a universal regulatory standard requiring disclosure of AI authorship in design records, building the habit of transparency now reduces compliance risk as regulations evolve. Some organizations adopt a simple convention: AI-assisted documents are flagged in the record with a note indicating that the initial draft was AI-generated and subsequently reviewed and approved by named individuals. Should further scrutiny be needed after the fact, this convention limits the scope of that effort.
Training and competency
Effective AI use is a skill. Developers who understand how to construct clear, specific prompts receive qualitatively better outputs than those who treat AI as a magic oracle. Governance should include baseline training on how to use approved tools effectively, including practical guidance on recognizing common AI failure modes such as hallucinated citations, confident but incorrect regulatory interpretations, and missing edge cases in generated test logic.
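A lightweight automated guardrail can complement that training. The hypothetical Python sketch below scans an AI-drafted document for trace references to requirement IDs that do not exist in the trace matrix, one of the simplest ways to surface a hallucinated reference before it reaches a reviewer; the SRS-style IDs are assumptions for illustration.

import re

KNOWN_REQUIREMENTS = {"SRS-041", "SRS-042", "SRS-043"}  # hypothetical IDs taken from the trace matrix

def untraced_references(draft_text: str) -> set[str]:
    """Return requirement IDs cited in a draft that are not in the trace matrix."""
    cited = set(re.findall(r"SRS-\d+", draft_text))
    return cited - KNOWN_REQUIREMENTS

draft = "TC-1 verifies SRS-042. TC-2 verifies SRS-099."
unknown = untraced_references(draft)
if unknown:
    print(f"Flag for reviewer: unknown requirement references {sorted(unknown)}")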
Enabling Confidence, Not Just Compliance
The framing of AI governance as a compliance burden misses the more important opportunity. Development teams that work within a clear, well-designed AI policy can use these tools confidently and openly. They can share what’s working, build organizational knowledge about effective practices, and improve over time. The time savings can be reinvested into higher-value activities: deeper design reviews, more thorough hazard analyses, earlier and more frequent testing.
In a regulated industry where development timelines directly affect when patients can access new therapies and technologies, that reinvestment matters. The goal is not to manage AI as a threat — it is to sharpen it as a capability.
Medical device development organizations that establish thoughtful AI policies now will be better positioned as the tools continue to mature, as FDA guidance on AI in development evolves, and as the industry’s collective understanding of best practices deepens. Those that wait are likely to find that their teams have quietly gone their own way.