The U.S. Food and Drug Administration (FDA) has long been the gatekeeper for ensuring the safety and effectiveness of medical devices and therapies. But as artificial intelligence moves deeper into therapeutic applications, the agency is evolving its approach. Real-time data, adaptive algorithms, and continuous learning systems present new regulatory challenges and new opportunities. At the center of these discussions is Joe Kiani, Masimo and Willow Laboratories founder, who is helping lead the charge in building AI tools that not only support daily health but also meet the highest standards of trust and accountability.
Nutu™, the latest innovation from Willow Labs, is a platform that uses metabolic and behavioral data to offer personalized health insights, showing how digital health companies can deliver innovation while anticipating regulatory expectations. As AI continues to reshape care, developers and regulators are working together to define how therapeutic platforms can be both innovative and safe.
Why Oversight Is Shifting
Traditional FDA pathways were built for static products: devices and drugs that don’t change after approval. AI therapeutics challenge that model. These tools often update continuously, learning from new data and refining their recommendations in real time.
This adaptability is part of what makes AI powerful. But it also raises questions. How do you validate a product that changes? When should updates be reported to regulators? And what safeguards must be in place to reduce the risk of unintended consequences? The FDA is responding with new frameworks designed specifically for Software as a Medical Device (SaMD). These frameworks aim to support rapid innovation while maintaining rigorous standards.
The Role of Real-World Evidence
One of the biggest shifts in FDA oversight is the growing use of Real-World Evidence (RWE) in regulatory decisions. Instead of relying solely on pre-market trials, the FDA is now looking at how AI tools perform in actual use across diverse populations, environments, and use cases. Platforms like Nutu are built with this kind of continuous data in mind: the system learns from user behavior, adapts to lifestyle changes, and tracks outcomes over time. That real-world feedback loop is not only good for users but also a critical part of proving value and safety to regulators. Collecting anonymized engagement and outcome data helps demonstrate how the platform performs in everyday life, not just in a lab or controlled setting.
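To make the idea concrete, here is a minimal sketch of how a platform might record pseudonymized outcome events for real-world evidence. The field names and salted-hash scheme are illustrative assumptions, not a description of Nutu's actual data pipeline:

```python
import hashlib
import json
import time
from dataclasses import dataclass, asdict

# Illustrative only: the salt, field names, and hashing scheme are
# assumptions for this sketch, not Nutu's actual design.
SALT = "rotate-this-salt-per-deployment"

@dataclass
class OutcomeEvent:
    user_pseudonym: str   # salted hash, never the raw user ID
    metric: str           # e.g. "glucose_stability", "sleep_duration"
    value: float
    recorded_at: float    # Unix timestamp

def pseudonymize(user_id: str) -> str:
    """Replace a raw user ID with a one-way salted hash."""
    return hashlib.sha256((SALT + user_id).encode()).hexdigest()[:16]

def log_outcome(user_id: str, metric: str, value: float) -> str:
    """Serialize a pseudonymized outcome event for the RWE data store."""
    event = OutcomeEvent(pseudonymize(user_id), metric, value, time.time())
    return json.dumps(asdict(event))

print(log_outcome("user-42", "glucose_stability", 0.87))
```

Stripping direct identifiers at the point of capture is what lets aggregate engagement and outcome data feed a regulatory evidence base without exposing individual users.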
The Push Toward Transparency and Explainability
Another major focus for the FDA is explainability. In the clinical setting, recommendations must be understandable to both patients and providers. AI that can’t explain how it reached a conclusion may undermine confidence and increase risk. That’s why developers are being encouraged to design transparent algorithms and to build tools that help users understand what the system is doing and why.
Nutu offers this through clear prompts, user-friendly visualizations, and context-aware recommendations. Rather than issuing commands, it offers suggestions based on trends the user can verify. This approach makes the platform more aligned with the FDA’s emphasis on interpretable systems.
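One way to implement that pattern (a sketch, not Nutu's actual code) is to bundle every suggestion with the trend data that produced it, so the user can check the reasoning themselves. The thresholds and phrasing below are hypothetical:

```python
from dataclasses import dataclass, field

@dataclass
class Suggestion:
    """A recommendation bundled with the evidence behind it (illustrative)."""
    text: str
    evidence: list[str] = field(default_factory=list)

def suggest_from_trend(metric: str, recent: list[float], baseline: float) -> Suggestion:
    """Phrase output as a verifiable suggestion, not a command."""
    avg = sum(recent) / len(recent)
    change_pct = 100 * (avg - baseline) / baseline
    evidence = [f"{metric}: 7-day average {avg:.1f} vs. baseline "
                f"{baseline:.1f} ({change_pct:+.0f}%)"]
    if change_pct > 10:  # illustrative cutoff
        text = f"Your {metric} has been trending higher; you might consider an earlier evening meal."
    else:
        text = f"Your {metric} looks stable; current habits appear to be working."
    return Suggestion(text, evidence)

s = suggest_from_trend("post-meal glucose", [142, 150, 147, 155, 149, 151, 153], 132.0)
print(s.text)
print(*s.evidence, sep="\n")
```

Because the evidence travels with the suggestion, a clinician or user can audit the claim rather than take it on faith, which is exactly what interpretability guidance asks for.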
Joe Kiani, Masimo founder, explains, “Some of the early users that have been giving us feedback are saying really positive things about what it’s done for them.” This commitment to user empowerment reflects a core principle of responsible AI transparency. In a changing regulatory environment where explainability and user trust are paramount, building platforms that clarify how insights are generated isn’t just thoughtful design but a strategic necessity.
Continuous Learning Systems and Precertification
The FDA has begun exploring a precertification model for software-based therapeutics. Instead of reviewing every version of a product, this model evaluates the developer’s processes, quality systems, and ability to manage risk. Companies that demonstrate strong internal controls could be granted faster review pathways, allowing them to iterate more quickly while staying accountable.
This model makes sense for a platform that evolves based on user behavior. It conducts internal audits, stress-tests updates, and reviews all new features through ethical and clinical lenses. These practices align well with the FDA’s goals for AI oversight: proactive safety without slowing progress.
Clinical Validation Still Matters
While AI is dynamic, clinical validation remains essential. The FDA continues to expect clear evidence of therapeutic impact, whether through controlled trials, observational studies, or post-market surveillance. Startups hoping to gain regulatory approval must invest in outcomes research and demonstrate how their platforms improve user health over time.
Nutu incorporates these principles by analyzing how users’ blood sugar stability, sleep patterns, and stress levels improve with consistent use. These insights help validate the platform’s therapeutic potential and show that digital guidance can yield measurable health improvements.
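A simple version of that kind of outcome analysis might look like the following sketch: a paired before/after comparison of glucose stability, using coefficient of variation as the stability proxy. The data, metric choice, and study design here are assumptions for illustration only:

```python
from statistics import mean, stdev

def glucose_stability(readings: list[float]) -> float:
    """Coefficient of variation: lower means more stable glucose (a common proxy)."""
    return stdev(readings) / mean(readings)

def paired_improvement(before: list[list[float]], after: list[list[float]]) -> float:
    """Mean per-user change in stability from baseline to follow-up."""
    deltas = [glucose_stability(b) - glucose_stability(a)
              for b, a in zip(before, after)]
    return mean(deltas)  # positive = variability decreased on average

# Hypothetical per-user reading sets at week 0 and week 12
before = [[110, 160, 95, 150], [120, 170, 100, 155]]
after  = [[115, 135, 110, 130], [125, 145, 118, 140]]
print(f"Mean stability improvement: {paired_improvement(before, after):.3f}")
```

Real validation would of course require far larger cohorts, pre-registered endpoints, and statistical testing, but the shape of the evidence is the same: measured change in a clinically meaningful metric over time.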
Risk Management for AI Tools
AI models must also be tested for failure modes, such as what happens when data is missing, inputs are contradictory, or user behavior deviates from patterns. The FDA wants to know how platforms handle uncertainty and communicate risk to users. Nutu builds safeguards into its feedback system. If a recommendation lacks sufficient confidence, the platform adjusts its tone or provides alternative options. This thoughtful design reflects a growing expectation that AI tools must not only offer insight but also manage risk responsibly.
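The article describes Nutu softening its tone or surfacing alternatives when confidence is low; a hedged sketch of one way such a gate could work is below. The thresholds and messages are invented for illustration:

```python
def present(recommendation: str, confidence: float, alternatives: list[str]) -> str:
    """Adjust phrasing to communicate uncertainty (thresholds are illustrative)."""
    if confidence >= 0.85:
        return f"Based on your recent data: {recommendation}"
    if confidence >= 0.5:
        return f"Your data is mixed, but you could try: {recommendation}"
    # Too uncertain to recommend one action: surface options instead.
    opts = "; ".join(alternatives)
    return f"Not enough signal for a single suggestion. Options to consider: {opts}"

print(present("take a 10-minute walk after lunch", 0.92, []))
print(present("take a 10-minute walk after lunch", 0.40,
              ["log your next two meals", "check sensor placement"]))
```

Gating output on confidence turns uncertainty from a hidden failure mode into something the user can see and act on.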
Ethical and Social Considerations
Beyond technical and clinical standards, FDA oversight is beginning to acknowledge ethical dimensions, such as equity, accessibility, and bias mitigation. Platforms that perform poorly across different populations can face increased scrutiny. Developers must demonstrate that their systems are fair, inclusive, and tested on diverse user groups.
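Testing for fairness across groups can start with something as simple as a subgroup performance audit. The sketch below flags any demographic group whose accuracy trails the best-performing group; the grouping keys, records, and gap threshold are assumptions, not regulatory requirements:

```python
from collections import defaultdict

def subgroup_accuracy(records: list[dict]) -> dict[str, float]:
    """Accuracy per demographic group (keys are illustrative)."""
    hits, totals = defaultdict(int), defaultdict(int)
    for r in records:
        totals[r["group"]] += 1
        hits[r["group"]] += int(r["prediction"] == r["outcome"])
    return {g: hits[g] / totals[g] for g in totals}

def flag_gaps(acc: dict[str, float], max_gap: float = 0.05) -> list[str]:
    """Flag groups whose accuracy trails the best group by more than max_gap."""
    best = max(acc.values())
    return [g for g, a in acc.items() if best - a > max_gap]

records = [
    {"group": "18-39", "prediction": 1, "outcome": 1},
    {"group": "18-39", "prediction": 0, "outcome": 0},
    {"group": "65+",   "prediction": 1, "outcome": 0},
    {"group": "65+",   "prediction": 1, "outcome": 1},
]
gaps = flag_gaps(subgroup_accuracy(records))
print("Groups needing review:", gaps or "none")
```

Running a check like this on every release makes disparate performance a measurable engineering defect rather than a post-hoc discovery.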
These practices align with regulatory guidance and support long-term trust. AI therapeutics represent one of the most exciting frontiers in health care, but they also demand a new model of oversight. The FDA’s evolving approach reflects the complexity of these tools and the need for standards that support both safety and speed.
By blending ethical design, scientific rigor, and real-world usability, digital health tools are showing what AI in health care can and should look like. As regulators, developers, and users align on the goals of transparency, safety, and personalization, AI therapeutics can not only grow but do so responsibly.