From wrist to bedside: AI and new rules reshaping medical wearables

Medical wearables are no longer confined to step counts, calorie estimates, or generic wellness nudges. In the last two years, the category has moved decisively toward clinically relevant sensing, AI-assisted interpretation, and tighter regulatory scrutiny. That shift is visible in both product design and policy: the U.S. Food and Drug Administration said in January 2025 that it had authorized more than 1,000 AI-enabled devices through established premarket pathways, confirming that AI oversight is now a mainstream medical-device issue rather than a niche software question.

For biomedical engineers, clinicians, and healthcare technicians, the bigger story is not simply that wearables are getting smarter. It is that the rules around them are becoming more differentiated. Some wrist-worn tools are gaining flexibility under updated wellness guidance, while others are clearly entering regulated medical territory because they screen for disease, support remote patient monitoring, or feed clinical decision workflows. From the wrist to the bedside, AI is changing what wearables can do, and regulators are changing how those capabilities must be built, validated, and monitored.

Consumer familiarity helped wearables scale, but clinical relevance is now redefining the market. A large installed base matters here: a 2025 review of AI-enabled wearables noted that 44.5% of U.S. adults reported using at least one wearable health tracker during the pandemic-era surge. That level of adoption gives developers access to broader sensor ecosystems and larger behavioral datasets, while also increasing pressure to distinguish between casual health feedback and medically meaningful outputs.

Recent flagship examples show how quickly mainstream devices are moving toward screening and risk-notification roles. Apple’s 2024 FDA-cleared sleep apnea notifications on Apple Watch illustrated how a consumer wearable can become a triage surface for an underdiagnosed condition. Reporting at the time cited the American Medical Association estimate that roughly 30 million people in the United States are affected by sleep apnea and that most cases remain undiagnosed, which explains why wrist-based screening has attracted so much attention.

Samsung pushed the category further, announcing in June 2025 that its Galaxy Watch sleep apnea feature was the first of its kind on a wearable to receive FDA De Novo authorization. Importantly, this was not framed as a vague wellness feature. Samsung positioned it as an over-the-counter medical application intended to detect signs of moderate to severe obstructive sleep apnea in adults aged 22 and older over a two-night monitoring period, under defined use conditions. That distinction captures the core evolution of the field: some wearable functions are now plainly medical, even when delivered through familiar consumer hardware.

The technical leap in modern medical wearables is not just improved sensing. It is the addition of AI layers that can interpret continuous streams of physiological data, identify patterns over time, and prioritize signals for action. Instead of simply displaying heart rate, motion, oxygen trends, or sleep metrics, newer systems are increasingly expected to classify risk, detect anomalies, and support next-step decisions for users or care teams.

This shift matters because traditional monitoring logic often depends on simple thresholds. A 2025 Nature Communications paper on continuous in-hospital deterioration prediction argued that existing monitoring approaches can rely on cutoff-based alerts that contribute to over-alerting and alert fatigue. By contrast, wearable biosensor data combined with deep learning may support earlier and more specific deterioration alerts. In practical terms, AI changes the economics of monitoring because continuous data streams can be triaged by models rather than escalated only when a single variable crosses a fixed line.
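
To make the contrast concrete, here is a minimal sketch of cutoff-based alerting versus score-based triage over the same window of wearable data. The thresholds and weights are illustrative placeholders only, not validated clinical values; a real system would use a trained classifier over many features.

```python
from dataclasses import dataclass
from statistics import mean

@dataclass
class VitalsWindow:
    """One minute of wearable samples for a single patient."""
    heart_rate: list[float]  # beats per minute
    spo2: list[float]        # oxygen saturation, percent

def threshold_alert(w: VitalsWindow, hr_max: float = 130.0, spo2_min: float = 90.0) -> bool:
    """Classic cutoff logic: fire whenever any single sample crosses a line."""
    return max(w.heart_rate) > hr_max or min(w.spo2) < spo2_min

def triage_score(w: VitalsWindow) -> float:
    """Toy stand-in for a learned model: combine sustained trends into one risk score.

    Illustrative weights only; a deployed model would be trained and validated.
    """
    hr_load = max(0.0, mean(w.heart_rate) - 100.0) / 60.0   # sustained tachycardia
    desat = max(0.0, 94.0 - mean(w.spo2)) / 10.0            # sustained desaturation
    return min(1.0, 0.6 * hr_load + 0.4 * desat)

# A brief single-sample spike trips the cutoff even though the trend looks benign:
spiky = VitalsWindow(heart_rate=[82, 85, 140, 84], spo2=[97, 96, 97, 96])
print(threshold_alert(spiky))  # True: one sample crossed the fixed line
print(triage_score(spiky))     # near zero: averaged trend is unremarkable
```

The point is not that thresholds are useless, but that a score over the whole window can suppress transient artifacts that cutoff logic escalates, which is one mechanism behind the alert-fatigue reduction the deterioration-prediction literature describes.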

That same logic extends beyond the hospital floor. Cardiovascular monitoring remains one of the most mature “from wrist to bedside” use cases, and a 2025 npj Cardiovascular Health review described wearable cardiovascular systems as offering real-time, personalized, noninvasive continuous monitoring and early-warning capability. Still, the review also emphasized unresolved challenges around accuracy, personalization, and workflow integration. For engineering teams, this means AI does not eliminate design constraints; it adds another performance layer that must be justified alongside sensor fidelity, usability, and clinical relevance.

The phrase “from wrist to bedside” is no longer just a metaphor. Hospitals are now treating remote data capture as part of standard operational reporting. The 2025 American Hospital Association Annual Survey asks hospitals to report the number of patients monitored through remote patient monitoring, or RPM, and remote therapeutic monitoring, or RTM. That is a strong signal that wearable-linked monitoring models have moved beyond pilots into institutional infrastructure.

Published U.S. hospital data reinforce that point. A 2025 peer-reviewed study using AHA survey data from 2018 and 2022 examined hospitals offering RPM services for post-discharge and chronic-care use cases. The study described RPM as payer-reimbursed care that uses in-home or wearable devices to transmit measurements such as weight, heart rate, and blood pressure for clinician review. Its practical value was summarized clearly: these data “can enable clinicians to identify early markers of clinical deterioration and intervene before patients suffer hospitalization.”

Wearables are also moving closer to traditional bedside monitoring in acute care. The 2025 inpatient deterioration prediction study showed one of the strongest technical cases yet for wearable biosensors inside the hospital, where continuous sensing may fill gaps between episodic vital-sign checks. Even so, adoption should not be confused with maturity. A 2025 Sensors review on wearable ECG devices for Hospital at Home concluded that machine learning and deep learning are advancing cardiac monitoring, but important challenges remain around signal quality and broader validation, particularly in older adults and home-care environments.
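
Signal quality is a concrete engineering gate, not just a research caveat. A hedged sketch of pre-inference screening might look like the following; the flat-line and clipping thresholds are placeholders for illustration, not validated values for any particular sensor.

```python
def usable_ecg_window(samples: list[float]) -> bool:
    """Reject windows too degraded to feed a downstream classifier.

    Two cheap heuristics (illustrative thresholds only):
    - near-zero amplitude, suggesting a lead-off or detached electrode
    - a large fraction of samples pinned at the extremes, suggesting
      rail clipping from motion artifact
    """
    if not samples:
        return False
    lo, hi = min(samples), max(samples)
    flat = (hi - lo) < 0.05
    clipped = sum(1 for s in samples if s in (lo, hi)) / len(samples) > 0.2
    return not flat and not clipped

good = [0.0, 0.4, 1.1, 0.3, -0.2, 0.0, 0.5, 1.0, 0.2, -0.1]
railed = [1.0] * 8 + [0.2, 0.1]
print(usable_ecg_window(good))    # True
print(usable_ecg_window(railed))  # False
```

Gating like this matters most in exactly the settings the Sensors review flags, older adults and home environments, where motion artifact and poor electrode contact are more common than on a monitored ward.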

One of the most important developments for wearable makers came in January 2026, when FDA revised its General Wellness guidance for low-risk devices. The update clarified the agency’s compliance policy for products that promote a healthy lifestyle and has been widely interpreted as creating more room for non-diagnostic wearable insights, provided the claims stay firmly within a wellness context. For product teams, that means user-facing language, intended use, and risk framing now matter as much as the underlying algorithm.

The central issue is the boundary between a wellness output and a medical claim. Recent legal analyses of the revised guidance noted that some non-invasive physiological outputs from wearables may qualify as general wellness outputs if they are intended solely for wellness purposes and are not tied to diagnosing, curing, mitigating, preventing, or treating a disease. In other words, the same sensing stack could sit in very different regulatory buckets depending on whether it says “supports healthy sleep habits” or “detects signs of sleep apnea.”

This split is reshaping product strategy across the sector. Low-risk wellness wearables may now have more freedom to deliver health insights without full device regulation, but that flexibility disappears when manufacturers move toward disease screening, over-the-counter medical use, or clinician-facing monitoring. The policy trend of 2025 and 2026 is therefore not simply more AI wearables. It is more differentiated oversight: greater flexibility for wellness claims, stricter expectations for medical claims, and much closer scrutiny of actual AI-enabled medical devices.

As AI-enabled wearables become more clinically consequential, FDA is putting more emphasis on what happens after authorization. In its 2025 request for public comment on measuring and evaluating AI-enabled medical devices, the agency highlighted the need to monitor how models behave in real-world use as clinical practice, patient demographics, data inputs, and infrastructure change. That is especially relevant for wearables, where devices generate continuous data outside tightly controlled environments and across highly variable user populations.

The agency explicitly warned that static testing is not enough. FDA wrote that “ongoing, systematic performance monitoring is increasingly recognized as relevant to maintaining safe and effective AI use” in real-world deployment. It also emphasized the importance of detecting and mitigating data drift and model drift. For biomedical engineers, this shifts validation from a one-time submission activity to an ongoing lifecycle discipline that includes postmarket analytics, version control, feedback loops, and predefined risk responses.
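
One common ingredient of such lifecycle monitoring is a distribution-shift statistic computed over live model inputs or outputs against a validation-time reference. A minimal sketch using the Population Stability Index (PSI) follows; the 0.25 cutoff is an industry rule of thumb, not an FDA requirement.

```python
import math

def psi(expected: list[float], observed: list[float], bins: int = 10) -> float:
    """Population Stability Index between a reference and a live sample.

    Bin edges come from the reference range; a small epsilon keeps empty
    bins from producing infinities. Higher PSI means larger drift.
    """
    lo, hi = min(expected), max(expected)
    width = (hi - lo) / bins or 1.0

    def frac(data: list[float]) -> list[float]:
        counts = [0] * bins
        for x in data:
            counts[min(bins - 1, max(0, int((x - lo) / width)))] += 1
        return [(c + 1e-6) / (len(data) + bins * 1e-6) for c in counts]

    e, o = frac(expected), frac(observed)
    return sum((oi - ei) * math.log(oi / ei) for ei, oi in zip(e, o))

# Reference: model scores captured at validation; live: postmarket scores.
reference = [0.10, 0.20, 0.15, 0.30, 0.25, 0.20, 0.10, 0.35, 0.22, 0.18]
live = [0.60, 0.70, 0.65, 0.80, 0.75, 0.70, 0.60, 0.85, 0.72, 0.68]
drift = psi(reference, live)
print(f"PSI = {drift:.2f}")  # values above ~0.25 are often treated as major drift
```

Wiring a check like this into postmarket analytics, with predefined responses when drift exceeds a documented trigger, is one concrete way to turn FDA's "ongoing, systematic performance monitoring" language into an engineering practice.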

FDA’s AI-device inventory is also evolving. The agency says it is updating how its list identifies devices that incorporate foundation models and large language model functionality, so clinicians and patients can better recognize when those capabilities are present. That matters because some wearable ecosystems now combine physiological sensing with conversational interpretation layers, summaries, or coaching interfaces. Once language models enter the stack, transparency, traceability, and human factors become even more important.

For companies selling wearable products internationally, the European operating environment is becoming just as important as the U.S. pathway. A 2025 analysis in npj Digital Medicine described the EU AI Act as the world’s first comprehensive AI law. For regulated digital medical products under the EU MDR or IVDR, obligations linked to their AI systems apply on a staged timeline, creating an additional compliance layer on top of conventional medical-device law.

The timing details matter. The same analysis noted that general-purpose AI obligations began applying in May 2025, while certain high-risk AI obligations phase in later depending on the type of system. That staggered rollout is particularly relevant for manufacturers combining consumer wearables, clinical software, and generative-AI components in a single ecosystem. A company may be dealing simultaneously with product safety, software lifecycle controls, postmarket surveillance, data governance, and AI-specific obligations.

Europe is also streamlining some device-administration rules as healthcare becomes more digital. On June 25, 2025, the European Commission announced that healthcare professionals in the EU can receive instructions for use for medical devices electronically rather than only on paper. While that change may seem administrative, it reflects a broader direction of travel: digital delivery, lower friction in implementation, and a regulatory framework that is trying to support innovation without abandoning accountability.

For design and development teams, the practical takeaway is that wearable success now depends on aligning three things early: sensing quality, intended use, and regulatory positioning. If a device is meant to stay in the general wellness lane, claims must be disciplined and interfaces must avoid implying diagnosis or treatment. If the product is intended to support screening, risk notification, or clinician review, then teams should expect more rigorous evidence requirements, stronger documentation, and postmarket performance plans from the outset.

Clinical teams should prepare for wearable data to become a routine workflow input rather than an optional add-on. The AHA’s 2026 environmental scan says wearable technology will be incorporated into 60% of patient records by 2027, indicating that patient-generated sensor data is expected to flow increasingly into formal care systems. That raises familiar but now urgent questions: which measurements are trustworthy, how should they be escalated, who responds to alerts, and how do teams prevent alarm burden while still capturing early deterioration?

Finally, both engineers and providers should resist the temptation to equate AI output with clinical truth. The strongest recent literature supports cautious optimism: wearables can extend observation windows, improve access to screening, and support earlier intervention, but they still face limitations in signal quality, validation across populations, and integration into real care pathways. The future belongs not to the flashiest wrist sensor, but to systems that can show reliable performance, transparent claims, and safe operation from first use through full lifecycle monitoring.

Medical wearables are entering a new phase in which consumer-scale hardware, medical-grade ambitions, and AI-enabled interpretation are converging. The result is a market that is simultaneously expanding and becoming more segmented. Some products will remain lifestyle tools with richer insights under the updated wellness framework. Others will move deeper into regulated care as they support screening, remote monitoring, and inpatient deterioration detection.

For the biomedical engineering community, that convergence creates both opportunity and responsibility. The opportunity is to build systems that turn continuous physiological data into timely, clinically meaningful action. The responsibility is to do so under rules that increasingly demand transparency, validation, and real-world monitoring. From wrist to bedside, the winners will be the teams that treat AI not as a marketing layer, but as part of a full medical-device lifecycle.


References

1- U.S. Food and Drug Administration. (2025). Artificial intelligence-enabled medical devices.
https://www.fda.gov/medical-devices/software-medical-device-samd/artificial-intelligence-enabled-medical-devices

2- Author(s). (2025). AI-enabled wearable devices in healthcare: Applications and challenges. npj Digital Medicine.
https://www.nature.com/articles/s41746-025-01554-w

3- Author(s). (2025). Applications of wearable devices in predictive healthcare using AI.
https://link.springer.com/article/10.1007/s42452-025-07783-8

4- Author(s). (2024). Artificial intelligence-based wearable systems for sleep apnea detection: A systematic review.
https://pubmed.ncbi.nlm.nih.gov/39255014/

5- Author(s). (2024). The European Union Artificial Intelligence Act and its implications for healthcare. npj Digital Medicine.
https://www.nature.com/articles/s41746-024-01232-3

6- Author(s). (2024). Risk classification and regulatory challenges under the EU AI Act. npj Digital Medicine.
https://www.nature.com/articles/s41746-024-01116-6

7- Author(s). (2025). Lifecycle regulation of AI-based medical devices under EU MDR and AI Act.
https://link.springer.com/article/10.1186/s13244-025-02146-8