Key takeaways:
The way you design your IoMT software architecture (the firmware and embedded software the device relies on, BLE or cellular connectivity, cloud logic, and the OTA updates devices need in the field) directly determines how your product will be classified, validated, and maintained.
Decisions about where clinical logic lives, how data flows, and how security is built into the stack are not implementation details. They define your regulatory pathway, your risk model, and your ability to scale.
Ultimately, the biggest challenges in connected health device software come from proving that the system works safely, reliably, and in compliance with IoMT regulatory requirements.
The potential of connected medical devices in digital health is well established by now: market projections from McKinsey & Company and PwC have consistently pointed, for years, to connected care as a driver of cost reduction and improved clinical outcomes.
That potential is visible in practice: remote patient monitoring programs reduce hospital readmissions in chronic care, wearable cardiac devices enable earlier detection of arrhythmias, and connected respiratory tools support continuous management of conditions like COPD outside clinical environments.
But translating that high-level understanding into a product architecture that can actually be built and approved is a constant struggle. Why?
Because recognizing the value of continuous data and connected care is one thing, while defining a repeatable architectural approach that consistently delivers it in a regulated environment is another.
Unlike conventional software, connected medical devices cannot be shaped primarily through iteration at the application layer. They require a system-level design from the outset, defining how firmware, connectivity, data processing, and clinical logic interact, and how each of those components will be verified, validated, and documented.
In regulated contexts, architecture determines classification, evidence requirements, and the constraints on post-market change. Without a defined architectural pattern early on, teams are forced to reconcile product decisions with regulatory expectations retroactively, typically through system-level rework, where changes are both costly and time-constrained.
In this article, we break down the core architectural decisions behind connected medical devices, from system layers and software boundaries to connectivity, security, and regulatory implications, and explain how each of them shapes what can be built, approved, and maintained in practice.
What are the four layers of a connected health device?
A connected medical device is not a single product in the sense typically used in software development, where one application, even if internally complex, is delivered, deployed, and versioned as a coherent unit.
In IoMT, what appears as one product is, in fact, a distributed system composed of multiple interdependent components: software running on constrained hardware, a communication layer that governs how data moves across environments, backend infrastructure that processes and stores regulated data, and an application layer that exposes functionality to users.
These components are developed, validated, and, in some cases, regulated under different assumptions.
For clarity, this architecture can be described as four layers:
Firmware and embedded software – code running directly on the device, controlling sensors, actuators, and local processing
Edge and gateway layer – the communication bridge that transfers data from the device to external systems
Cloud ingestion and processing – infrastructure responsible for receiving, storing, validating, and analyzing data
Clinical application layer – the interface through which clinicians or patients interact with the system
NOTE: Real-world implementations may collapse or further subdivide these boundaries, but the four-layer model remains useful for understanding where critical decisions are made and how they propagate through the system.
Each layer introduces its own technology stack, failure modes, and regulatory exposure. Crucially, the decisions that define system behavior are not concentrated at the interface level. While application development in any domain is driven by business logic rather than UI, in connected medical devices the primary constraints sit deeper in the stack.
Choices made at the level of firmware and connectivity, such as where data is processed, how reliably it can be transmitted, or how the device behaves in failure scenarios, establish the conditions under which higher-level logic can operate.
These decisions are typically made early, and they directly influence safety classification, the scope of regulatory review, and the long-term cost and feasibility of maintaining the system.
The sections below examine each of these layers in turn, what role it plays in the system, and how the decisions made at that level shape the product as a whole.
What does each layer actually do?
Firmware and embedded software
This is the code that runs directly on the device’s microcontroller. It controls the hardware itself: photodetectors in a pulse oximeter, motors in an insulin pump, or sensors in a wearable. For teams coming from web or mobile development, this layer is often unfamiliar, as it operates under strict constraints (limited memory, processing power, and energy) and is responsible for core functions such as signal acquisition, local processing, and power management.
Under IEC 62304 (the international standard governing medical device software lifecycle processes), this software is classified by safety: Class A, B, or C, based on the potential for patient harm if it fails.
This is also the layer where the boundary between a general digital health product and a regulated medical device often begins to take shape.
When software directly controls hardware, influences how physiological signals are captured, or performs clinically relevant processing at the point of measurement, it becomes part of a safety-critical system rather than a standalone digital tool.
Edge and gateway
This layer defines how data leaves the device and reaches the internet.
In practice, three architectural patterns dominate:
Smartphone as gateway (BLE) - Leveraging hardware the patient already owns is the most cost-efficient approach and works well in scenarios where the patient is actively engaged with the device, for example, connected wearables such as the Elvie Pump or Remedee Labs’ therapeutic device. Yet it also has a severe limitation: dependency on external variables. Bluetooth can be disabled, mobile operating systems may restrict background activity, and connectivity is tied to user behavior.
Dedicated hub - A separate in-home device aggregates data from one or more sensors and transmits it to the cloud automatically. This model reduces reliance on patient behavior and is particularly effective in elderly care or multi-device monitoring setups, but the trade-off is an increased system complexity: an additional hardware component that must be manufactured, certified, maintained, and supported.
Cellular-embedded (LTE-M / NB-IoT) - Connectivity is built directly into the device, enabling continuous data transmission independent of smartphones or local networks. This approach is the most robust for critical monitoring or use in low-connectivity environments; however, it comes at a cost: higher unit economics, ongoing data fees, and increased power consumption, which can significantly impact battery design and device form factor.
The choice of gateway architecture determines how consistently data can be transmitted, what happens when connectivity is lost, and which parts of the system are considered safety-critical.
From a regulatory perspective, this affects how data integrity is demonstrated (e.g., completeness and continuity of transmitted data), how system dependencies are documented (such as reliance on third-party devices like smartphones), and how failure modes are handled in risk analysis.
Cloud ingestion and processing
Unlike a typical application backend, where interactions are discrete and occasional inconsistencies can often be tolerated, in healthcare this layer handles continuous, time-series health data that may carry clinical significance at any given point.
As a result, systems must ensure not only availability, but also completeness, correct ordering, and traceability of data across its lifecycle. Gaps, delays, or inconsistencies are not just technical issues; they can affect clinical interpretation and must be explicitly addressed in system design and validation. From the moment data is generated, it is also subject to regulatory frameworks such as HIPAA and GDPR, which require strict controls over security, access, and auditability.
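To make "completeness, correct ordering, and traceability" concrete, here is a minimal Python sketch of an integrity check an ingestion pipeline might run on a device's time-series stream. The `Sample` shape, sequence numbers, and interval values are assumptions for illustration, not part of any specific stack.

```python
from dataclasses import dataclass

@dataclass
class Sample:
    device_id: str
    seq: int          # monotonically increasing sequence number set by the device
    timestamp: float  # epoch seconds, per the device clock

def check_stream_integrity(samples, expected_interval_s=30.0, tolerance_s=5.0):
    """Flag gaps and ordering violations in a device's time-series stream.

    Returns a list of (issue, detail) tuples that a pipeline could route
    to an audit log instead of silently dropping or reordering data.
    """
    issues = []
    prev = None
    for s in samples:
        if prev is not None:
            if s.seq != prev.seq + 1:
                issues.append(("sequence_gap", f"expected {prev.seq + 1}, got {s.seq}"))
            if s.timestamp < prev.timestamp:
                issues.append(("out_of_order", f"{s.timestamp} < {prev.timestamp}"))
            elif s.timestamp - prev.timestamp > expected_interval_s + tolerance_s:
                issues.append(("time_gap", f"{s.timestamp - prev.timestamp:.0f}s between samples"))
        prev = s
    return issues

stream = [Sample("dev-1", 1, 0.0), Sample("dev-1", 2, 30.0), Sample("dev-1", 4, 120.0)]
print(check_stream_integrity(stream))  # flags the missing seq 3 and the 90s gap
```

The point of surfacing issues rather than fixing them silently is traceability: a regulator asks not only that gaps be detected, but that their detection and handling be auditable.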
This is also where the boundary between infrastructure and regulated software becomes critical.
In layer 1, that boundary begins when software directly interacts with hardware and controls how physiological signals are captured; in the cloud layer, the same boundary is crossed for a different reason. If cloud-side logic goes beyond storage and transmission and begins to interpret data in a clinically meaningful way, it may qualify as Software as a Medical Device (SaMD), with its own regulatory requirements.
Clinical application
This is the interface exposed to clinicians or patients: dashboards, alerts, reports, and timelines.
It is the most visible layer, and typically the most familiar to product teams. Its impact is operational rather than structural, but in practice, its quality directly affects clinical outcomes. Poorly designed alerting systems contribute to alert fatigue, where high volumes of low-priority notifications obscure critical signals. In clinical environments, this is not a usability issue but a safety risk.
Firmware, embedded software, and cloud-side logic. What’s the difference?
Connected medical devices combine three distinct types of software, each operating under different technical constraints and subject to different regulatory expectations.
| Layer | What it does | Example | Maintenance |
| --- | --- | --- | --- |
| Firmware | Low-level code running on the microcontroller, directly controlling hardware (sensors, timing, power) | Controls LED emission and signal capture in a pulse oximeter | Rarely updated; designed to be stable and tightly controlled |
| Embedded software | Processes raw sensor data and executes device-level logic (thresholds, alerts, control logic) | Calculates oxygen saturation and triggers a local alarm | Can be updated (e.g., via OTA), but constrained by device resources |
| Cloud-side software | Handles data ingestion, storage, analytics, and integrations at scale | Analyzes historical patient data to detect trends or predict events | Continuously updated without touching the physical device |
Why does this matter for founders?
Software classification under IEC 62304 is driven, as we said, by the potential impact of failure, but how it applies in practice depends on where clinical logic is implemented.
If clinical logic (the computation that influences diagnosis, monitoring, or treatment) is implemented on the device, the corresponding firmware or embedded software may be classified at a higher safety level (e.g., Class C). This significantly increases the regulatory burden, requiring detailed traceability, unit-level verification, and rigorous validation of hardware–software interaction.
If that same logic is implemented in the cloud, the regulatory focus shifts. The device itself may be simpler, but the cloud component must now demonstrate data integrity, system reliability, and cybersecurity controls consistent with a regulated software system.
In practice, the choice defines how the system will be evaluated, what risks must be mitigated, and how the product can evolve over time.
| Placement of clinical logic | Advantages | Trade-offs |
| --- | --- | --- |
| On-device logic | Works without connectivity; ensures critical functions remain available at all times | Higher regulatory scrutiny on the device; more complex validation, especially at the hardware–software boundary |
| Cloud-side logic | Easier to update, monitor, and scale; enables advanced analytics and AI | Dependency on connectivity; stricter requirements for data integrity, availability, and security |
How do connectivity protocols compare: BLE, Zigbee, LoRa, and cellular?
Connectivity is not the core of an IoMT product, but it is the layer that determines whether data can move reliably between the device, the cloud, and the clinical interface. If that flow is disrupted, the rest of the system, such as analytics, alerts, and decision support, cannot function as intended.
In healthcare, this makes protocol choice fundamentally different from other IoT domains. Connectivity is not evaluated only in terms of efficiency or cost, but in terms of how reliably clinically relevant data can be delivered under real-world conditions, and what happens when it is not.
This section focuses on the four main connectivity options and their real trade-offs in medical contexts, where the evaluation is shaped by constraints that do not exist in typical IoT systems.
In IoMT:
the data path is part of the regulated system
patient behavior becomes a system dependency (e.g., carrying a phone, keeping Bluetooth enabled)
connection loss is not just a technical failure but an issue that can degrade or invalidate clinical value
| Protocol | Power | Best use case | Where it breaks |
| --- | --- | --- | --- |
| BLE | Very low | Wearables with smartphone as gateway | Fails if phone isn’t present or OS restricts background activity |
| Zigbee | Low | Multi-device home monitoring (elder care) | Requires hub; no direct internet access |
| LoRa | Very low | Rural, low-frequency data (periodic check-ins) | Not suitable for continuous or real-time data |
| Cellular (LTE-M / NB-IoT) | Higher | Always-on remote monitoring without gateway | Higher cost and power usage |
What does 'Secure by Design' mean at the hardware-software boundary?
Security in connected medical devices has moved from a technical concern to a regulatory priority, largely in response to how failures have played out in practice.
The Change Healthcare ransomware attack exposed just how interconnected modern healthcare systems have become. A single compromised entry point disrupted claims processing nationwide, affecting around 190 million individuals and forcing providers to delay care, switch to manual workflows, or absorb financial losses. It is not an isolated case. In 2023 alone, the HCA Healthcare breach exposed data from over 11 million patients, while the PharMerica ransomware attack affected nearly 6 million individuals.
These are recurring, large-scale failures across different parts of the healthcare system, from providers to infrastructure platforms.
Which four security decisions must happen before development starts?
Secure boot
Secure boot ensures that a device only runs software from a trusted source. Before any application code is executed, the system verifies that the firmware has not been altered, typically using cryptographic signatures. If the verification fails, the device refuses to start. Without this mechanism, anyone with physical or remote access could replace the firmware with a modified version, for example, one that alters measurements, disables safety checks, or silently transmits data elsewhere. Once malicious firmware is installed, higher-level protections such as encrypted communication become irrelevant, because the system has been compromised at its most fundamental level.
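The verify-before-execute flow can be sketched as follows. Real secure boot relies on asymmetric signatures checked by an immutable bootloader in ROM; this Python sketch substitutes an HMAC over the firmware image purely to keep the example self-contained, and the key and image contents are hypothetical.

```python
import hashlib
import hmac

# In a real device the verification key is an asymmetric public key burned
# into ROM or fuses; a shared HMAC key is used here only for illustration.
DEVICE_KEY = b"example-key-provisioned-at-factory"  # hypothetical

def sign_firmware(image: bytes) -> bytes:
    """Factory-side: produce a signature shipped alongside the image."""
    return hmac.new(DEVICE_KEY, image, hashlib.sha256).digest()

def boot(image: bytes, signature: bytes) -> str:
    """Bootloader-side: verify the image before executing; halt otherwise."""
    expected = hmac.new(DEVICE_KEY, image, hashlib.sha256).digest()
    if not hmac.compare_digest(expected, signature):
        return "HALT: untrusted firmware"
    return "BOOT: firmware verified"

image = b"...application code..."
sig = sign_firmware(image)
print(boot(image, sig))                # verified image boots
print(boot(image + b"tamper", sig))    # any modification is rejected
```

The essential property is that the check happens before any application code runs, so a tampered image never gets the chance to disable the check itself.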
Encrypted transmission
All data in transit must be encrypted both between the device and its gateway (e.g., over Bluetooth) and between the gateway and the cloud. Without encryption, data can be intercepted or modified in transit. In healthcare, this is not just a privacy issue. If physiological data is even slightly altered it can lead to incorrect clinical interpretation. Encryption ensures that data arrives exactly as it was sent, and only to intended recipients.
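As a minimal sketch of that policy on the gateway or cloud side, Python's standard `ssl` module can express "modern TLS only, verification required" in a few lines. The helper name and optional CA file path are assumptions for illustration, not a prescribed API.

```python
import ssl

def make_device_tls_context(ca_file=None):
    """Client-side TLS context a gateway might use when uploading readings.

    The policy, not the file paths, is the point: modern TLS versions only,
    certificate verification on, hostname checking on.
    """
    ctx = ssl.create_default_context(ssl.Purpose.SERVER_AUTH, cafile=ca_file)
    ctx.minimum_version = ssl.TLSVersion.TLSv1_2  # reject legacy protocols
    ctx.check_hostname = True
    ctx.verify_mode = ssl.CERT_REQUIRED
    # For mutual TLS with a per-device identity (see device authentication),
    # a unique client certificate would be loaded here via ctx.load_cert_chain().
    return ctx
```

The same context can then be passed to any TLS-capable client (HTTP, MQTT) so every upload path enforces one consistent policy.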
Device authentication
Each device in the system should have a unique, verifiable identity, typically implemented using certificates. When devices share credentials, a single compromise can escalate into a system-wide breach. In many IoT deployments, devices ship with identical or default credentials, which attackers can discover and exploit at scale. Once one device is accessed, the same credentials often provide access to others, enabling lateral movement across the entire fleet.
Minimal attack surface
Every exposed interface is a potential entry point, whether it’s a debug port, a communication protocol, or a background service. In fact, many successful attacks take advantage of what was left open unnecessarily: unused interfaces, enabled by default or forgotten during development, create easy entry points. Reducing the attack surface by disabling anything that is not essential limits how an attacker can interact with the device.
What do regulators now expect?
Since 2023, the U.S. Food and Drug Administration has made cybersecurity a formal requirement in premarket submissions for connected medical devices, codified under Section 524B of the FD&C Act. The shift reflects a specific assumption: devices will evolve over time, vulnerabilities will emerge, and the manufacturer must be prepared to respond.
That expectation plays out across three concrete areas.
Software Bill of Materials (SBOM)
An SBOM is a complete inventory of every software component inside a device: proprietary code, third-party libraries, open-source packages, and system-level dependencies.
The reason regulators require it is practical. Modern devices are built on layers of software that the manufacturer does not fully develop or control. When a vulnerability surfaces in one of those underlying components, the first question is always the same: Do we use this anywhere in our stack?
Without an inventory, answering that question means manually searching codebases and coordinating across teams, a process that can take days or weeks. With an SBOM, it takes hours. Log4Shell made this visible at scale: a severe flaw in the Java logging library Apache Log4j affected countless applications, many of which included it indirectly through nested dependencies. Organizations without component-level visibility had no fast way to assess their exposure.
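A minimal sketch of why the inventory answers that question quickly: given an SBOM serialized as JSON (a CycloneDX-style component list is assumed here, with illustrative names and versions), an exposure check becomes a simple query rather than a manual code search.

```python
import json

# Minimal CycloneDX-style SBOM; component names and versions are illustrative.
sbom_json = """
{
  "components": [
    {"name": "log4j-core", "version": "2.14.1"},
    {"name": "jackson-databind", "version": "2.13.0"},
    {"name": "device-telemetry", "version": "1.4.2"}
  ]
}
"""

def find_component(sbom: dict, name: str):
    """Answer 'do we ship this anywhere in our stack?' from the inventory."""
    return [c for c in sbom.get("components", []) if c["name"] == name]

sbom = json.loads(sbom_json)
print(find_component(sbom, "log4j-core"))  # [{'name': 'log4j-core', 'version': '2.14.1'}]
```

In practice the same query runs against SBOMs for every firmware and cloud release, which is what turns a days-long audit into an hours-long one.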
Cybersecurity management as an operational capability
Regulators also expect a defined cybersecurity management plan, and they evaluate it as a living process instead of a static document. The logic is straightforward: vulnerabilities are inevitable, so the regulatory question is whether the manufacturer can detect, assess, fix, and deploy a response in a controlled way.
In practice, this means demonstrating how vulnerabilities are monitored across the device fleet, how their clinical impact is assessed, how patches are developed and validated, and how updates reach devices already in the field. Traceability runs through all of it, because proving that a fix was applied matters as much as applying it.
This operational layer also intersects with broader data protection frameworks. HIPAA and GDPR define how patient data must be processed, stored, and protected in connected systems, and cybersecurity management is where those obligations become technical.
Why do all three security layers depend on each other?
These requirements are interdependent.
Without a clear update mechanism, vulnerability response stays theoretical. Without device-level authentication and software traceability, identifying which devices are affected becomes guesswork.
Without auditability, there is no way to demonstrate that a fix was actually delivered. Regulators evaluate the full chain, and a gap in any one area weakens the credibility of the rest.
How do these decisions shape your regulatory pathway?
The architectural decisions described above, where clinical logic lives, how devices connect, how software is updated, and how the system is secured, are not reviewed in isolation. Regulators evaluate them as a system, and together they determine what must be proven, documented, and maintained over time.
Where does clinical logic live?
Where clinical logic is implemented determines what regulators treat as the medical device, and therefore what must be validated and submitted.
On-device (firmware / embedded software): Regulators focus on hardware-software integration and real-time device behavior. Required evidence includes firmware safety classification (IEC 62304 Class A/B/C), validation of sensor accuracy and timing constraints, defined behavior under failure conditions (sensor errors, power loss), and device-level verification through bench testing, integration testing, and hardware validation.
Cloud-side (SaMD): Regulators focus on algorithm performance, data processing, and software lifecycle. Required evidence includes SaMD qualification, clinical validation of algorithms (sensitivity, specificity, false positive rates), version control and change management processes, and traceability of updates and revalidation over time.
The difference is structural. On-device logic produces a device-centric submission focused on hardware behavior and embedded software. Cloud-side logic shifts the submission toward algorithm performance, data handling, and lifecycle processes.
Can the system be updated after deployment?
Regulators already assume that vulnerabilities and defects will surface after deployment. What they evaluate is whether the manufacturer can respond in a controlled way.
Without OTA capability, updates require physical access to every device. That means slow response times, limited ability to patch vulnerabilities at scale, and potential field recalls for issues that could otherwise be resolved through software fixes.
With OTA, regulators expect version control across the fleet, verification that updates install correctly, rollback mechanisms for failed deployments, and staged rollout across device subsets. This is now the expected standard. Without it, manufacturers cannot demonstrate post-market control.
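A rough sketch of staged rollout with per-device rollback might look like the following. The fleet representation, stage fractions, and `install` callback are assumptions for illustration, not a real OTA API.

```python
def rollout_update(fleet, new_version, stages=(0.1, 0.5, 1.0), install=None):
    """Apply an update to a fleet in stages; halt on the first failed install.

    `fleet` maps device_id -> current version. `install` is a callable that
    returns True once a device reports a verified installation. On failure,
    the failing device is rolled back and the remaining fleet is left on the
    old version, preserving a clean state for audit.
    """
    ids = sorted(fleet)
    done = 0
    for frac in stages:
        target = int(len(ids) * frac)
        for device_id in ids[done:target]:
            previous = fleet[device_id]
            fleet[device_id] = new_version
            if install is not None and not install(device_id):
                fleet[device_id] = previous  # roll this device back
                return False, fleet          # halt the rollout
        done = target
    return True, fleet

fleet = {f"dev-{i}": "1.0" for i in range(10)}
ok, updated = rollout_update(dict(fleet), "2.0")
print(ok)  # True when every stage verifies
```

The staged fractions mirror the regulatory expectation: a defect introduced by an update should surface on a small subset of devices, not the entire fleet at once.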
How does connectivity shape failure mode evidence?
Different connectivity architectures create different failure modes, and regulators evaluate them accordingly.
Smartphone gateway (BLE): Failure modes are human-dependent: the app gets closed or backgrounded, Bluetooth disconnects, data is lost or delayed. Regulators expect evidence of data buffering and sync logic, reconnection handling, and mitigation of user-dependent reliability risks.
Embedded cellular: Failure modes are infrastructure-dependent: network latency, packet loss, degraded signal. Regulators expect evidence of transmission reliability, end-to-end security, performance under network degradation, and uptime metrics.
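For the smartphone-gateway case above, the "data buffering and sync logic" regulators look for often amounts to a store-and-forward queue that survives disconnects and flushes in order on reconnect. A minimal Python sketch, where the `send` transport callable and capacity are hypothetical:

```python
from collections import deque

class StoreAndForwardBuffer:
    """Buffer readings while the link is down; flush in order on reconnect.

    A sketch of gateway-side sync logic, not a real BLE API: `send` is any
    callable that returns True when a reading is confirmed delivered.
    """
    def __init__(self, send, capacity=10_000):
        self.send = send
        self.buffer = deque(maxlen=capacity)  # oldest readings drop if full

    def record(self, reading):
        """Queue a new reading and opportunistically try to deliver."""
        self.buffer.append(reading)
        self.flush()

    def flush(self):
        """Deliver buffered readings oldest-first; stop if the link is down."""
        while self.buffer:
            if not self.send(self.buffer[0]):
                return False  # link still down; keep data for later
            self.buffer.popleft()
        return True
```

A reading is only removed from the buffer after delivery is confirmed, which is the property that lets the manufacturer argue data completeness despite user-dependent connectivity.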
What security evidence goes into the submission?
Regulators evaluate whether security controls can be proven effective and maintained over time. Each decision maps to specific evidence:
Secure boot: Documentation of how firmware is signed and verified at startup, plus test results confirming unauthorized software cannot execute on the device.
Encryption: Specification of communication protocols (e.g., TLS), key management description, and test evidence that data cannot be intercepted or altered in transit.
Authentication: Description of how devices are uniquely identified (e.g., certificates), how credentials are provisioned, and how unauthorized devices are blocked.
SBOM: A complete inventory of all software components, including third-party libraries, demonstrating that vulnerabilities can be identified and tracked over time.
Conclusion
To sum up:
IoMT software architecture determines not just how a product works, but how it is regulated and validated
Medical device firmware and embedded software decisions define safety classification and testing scope
Connectivity choices (e.g., BLE healthcare vs. cellular) shape risk models and regulatory expectations
OTA updates in medical devices are now a requirement for post-market control, not an optional feature
Device security in healthcare IoT must be designed into the system and proven through regulatory evidence
Connected medical devices are difficult because every architectural decision carries regulatory weight. In other domains, choices about data flow, connectivity, or system behavior are primarily technical trade-offs. In healthcare, those same choices determine how the product will be classified, what evidence must be produced, and how it can evolve after deployment.
This is why many teams encounter friction late in development. The system works as designed, yet fails to meet what regulators require. Gaps surface because the architecture was built without accounting for how it would be validated, documented, and maintained under regulatory scrutiny. By that point, resolving them means revisiting assumptions already embedded in both the architecture and the submission.
Building a connected medical device that functions correctly and building one that can be approved are two different engineering problems. The second demands domain awareness: an understanding of how design decisions translate into safety classification, regulatory evidence, and lifecycle obligations. Teams that account for this early move faster and avoid the late-stage rework that defines the cost and timeline of most IoMT projects.