For much of the past decade, small digital devices have operated under a quiet but consequential limitation. They could either remain aware of their surroundings or last long enough to be practical at scale. Rarely both. Motion sensors, wearables, environmental monitors, and embedded industrial nodes were designed to conserve power by sampling intermittently or by relying on distant cloud systems to interpret data after transmission. Intelligence, in this model, was episodic rather than persistent.
This constraint shaped how connected living evolved in practice. Homes became instrumented but reactive. Buildings were monitored, but often with delay. Wearables generated insight, but within narrow energy budgets that limited continuous awareness. These systems did not fail because they lacked intelligence, but because maintaining constant attention imposed operational costs that scaled poorly across millions of devices.
The scale of this limitation has become increasingly visible. IoT Analytics estimates that the global installed base of connected IoT devices reached 18.5 billion in 2024 and is projected to grow to 21.1 billion in 2025. Much of this growth is concentrated in low-power endpoints embedded in daily environments rather than in centralized infrastructure. As these devices proliferate, small inefficiencies in power use, data transmission, and maintenance accumulate into system-level constraints.
Global Growth of Connected IoT Devices with Scenario Projection (2019–2030)
| Year | Connected IoT Devices (Billions) | Year-over-Year Growth (%) |
|---|---|---|
| 2019 | 8.6 | — |
| 2020 | 9.7 | 12.8% |
| 2021 | 11.1 | 14.4% |
| 2022 | 13.1 | 18.0% |
| 2023 | 15.7 | 19.8% |
| 2024 | 18.5 | 17.8% |
| 2025 | 21.1 | 14.1% |
| 2026* | 23.8 | 12.8% |
| 2027* | 26.6 | 11.8% |
| 2028* | 29.5 | 10.9% |
| 2029* | 32.4 | 9.8% |
| 2030* | 35.0 | 8.0% |
*Projected values from IoT Analytics, IDC, and Ericsson. Source: IoT Analytics (observed data); Institute of Internet Economics synthesis for projections.
What is changing is not simply software efficiency, but the underlying logic of how intelligence is embedded at the smallest endpoints. A new class of event-driven, brain-inspired computing architectures makes it possible for devices to remain continuously aware while consuming only minimal power. Instead of transmitting raw data upstream or running full inference pipelines at all times, these systems detect meaningful changes locally and respond immediately.
This shift reflects a broader evolution in connected living. Intelligence no longer needs to announce itself through constant connectivity or visible interaction. It can operate quietly in the background, extending device lifetimes, reducing maintenance burdens, and improving responsiveness. For households, enterprises, and infrastructure operators, this translates into systems that are less brittle and more dependable.
Traditional vs Event-Driven IoT Architectures
| Dimension | Traditional IoT Architecture | Event-Driven / Neuromorphic Architecture |
|---|---|---|
| Processing Model | Clock-driven, continuous execution | Event-triggered, conditional execution |
| Power Behavior | Constant baseline consumption | Energy spent only on relevant events |
| Data Transmission | Periodic or continuous streaming | Selective, event-based signaling |
| Latency Profile | Dependent on cloud round trips | Immediate local response |
| Maintenance Impact | Frequent battery replacement | Multi-year deployment feasible |
Source: IoT Analytics; IEEE Spectrum; ITU
Architecturally, the implication is a gradual move away from the long-dominant “sense and transmit” model toward “sense and decide.” Devices increasingly perform first-level interpretation at the point of data generation, reserving cloud and edge platforms for coordination, learning, and long-term analysis rather than immediate reaction. As connected systems become embedded infrastructure rather than optional features, this rebalancing of intelligence emerges as a requirement for scale rather than an incremental optimization.
Why Always-On Intelligence Has Been So Difficult
For most connected devices, always-on intelligence has been constrained by a fundamental tradeoff rather than a lack of technical ambition. Designers historically faced two workable options, neither fully satisfactory. Devices could operate intermittently, waking only at set intervals to conserve power and extend battery life. Or they could remain continuously active, delivering richer awareness but consuming far more energy and sharply reducing operational lifespan.
This tradeoff explains the behavior of many familiar “smart” products. Battery-powered motion sensors check for activity periodically rather than remaining continuously attentive. Smartwatches sample physiological signals in bursts instead of maintaining constant monitoring. Remote industrial sensors collect data on a schedule, even though early signs of failure often emerge between sampling windows. These are not design oversights. They are deliberate compromises shaped by power economics.
Power Budgets for Always-On Tasks
| Task | Conventional MCU (mW) | Event-Driven / Neuromorphic (µW) | Efficiency Gain |
|---|---|---|---|
| Audio Detection | 10–30 | 400–600 | ~20–70× |
| Presence Detection | 15–40 | 500–700 | ~30–80× |
| Vibration Monitoring | 20–50 | 600–900 | ~20–60× |
Source: IEEE Spectrum; Innatera
The arithmetic behind these compromises is unforgiving. A common coin-cell battery such as a CR2032 stores roughly 0.6–0.7 watt-hours of energy. At a continuous power draw of just 1 milliwatt, that energy is exhausted in under one month. Even reducing consumption to 100 microwatts extends operating life only to eight or nine months, well short of the multi-year lifetimes expected for many deployed sensors. These limits define what “always on” has historically meant in practice.
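The arithmetic above can be checked directly. A minimal sketch, using the CR2032 capacity figure cited in the text (the helper name is illustrative, not from any library):

```python
# Back-of-envelope battery-life model for an always-on sensor.
CR2032_WH = 0.675  # ~225 mAh at 3 V, i.e. roughly 0.6-0.7 Wh


def lifetime_days(battery_wh: float, draw_watts: float) -> float:
    """Hours of stored energy divided by 24 gives days of operation."""
    return battery_wh / draw_watts / 24


# Continuous 1 mW draw: the battery is exhausted in under a month.
print(round(lifetime_days(CR2032_WH, 1e-3)))    # 28 days

# Even at 100 uW, lifetime is only eight to nine months.
print(round(lifetime_days(CR2032_WH, 100e-6)))  # 281 days
```

Neither figure approaches the multi-year lifetimes expected of deployed sensors, which is why duty cycling became the default design pattern.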
At the core of the constraint is how conventional processors consume energy. Traditional microcontrollers draw power simply by remaining awake. Even when incoming signals contain no meaningful information, clock-driven systems continue executing instructions. For devices operating on small batteries, this background activity quickly becomes unsustainable. Intelligence, as a result, had to be rationed. Devices could either last long enough to be practical or remain continuously aware, but rarely both.
Wireless communication intensified the problem. Multiple industry measurements show that transmitting data can consume 10 to 100 times more energy than performing local computation on low-power devices. Radios must wake, synchronize with networks, and handle retransmissions in noisy environments. In large deployments, these transmission costs dominate power budgets, which is why early IoT architectures emphasized minimizing data movement even at the expense of responsiveness.
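A toy energy budget makes the transmission penalty concrete. The per-operation energies below are illustrative order-of-magnitude assumptions, not measurements, chosen to reflect the 10 to 100 times ratio cited above:

```python
# Toy energy budget: streaming every sample vs inspecting locally
# and transmitting only events. Per-sample energies are assumptions.
COMPUTE_UJ = 1.0    # energy to inspect one sample locally (assumed)
TRANSMIT_UJ = 50.0  # energy to radio one sample upstream (assumed)


def stream_all(n_samples: int) -> float:
    """Energy cost of transmitting every sample."""
    return n_samples * TRANSMIT_UJ


def filter_then_send(n_samples: int, event_rate: float) -> float:
    """Every sample is inspected locally; only events are transmitted."""
    return n_samples * COMPUTE_UJ + n_samples * event_rate * TRANSMIT_UJ


n = 10_000
print(stream_all(n))                         # 500000.0 uJ
print(filter_then_send(n, event_rate=0.01))  # 15000.0 uJ, ~33x less
```

Under these assumptions, filtering locally before transmitting cuts the energy bill by more than an order of magnitude, which is exactly why early IoT architectures minimized data movement.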
Two categories illustrate where this limitation has been most visible. In wearables, continuous monitoring could enable earlier detection of falls, irregular heart rhythms, or subtle health changes. Yet battery constraints force most devices to sample intermittently, prioritizing daily or multi-day battery life over constant awareness. In remote industrial and environmental sensors, continuous listening could detect small changes in vibration or sound that precede equipment failure. Instead, many deployments rely on scheduled measurements because battery replacement and field servicing often cost more over time than the hardware itself.
These compromises persisted not because better outcomes were unknown, but because deployment economics, standards, and tooling favored predictability over continuous intelligence. Cellular IoT technologies such as NB-IoT and LTE-M were explicitly designed around sparse communication, with industry targets of up to 10 years of battery life using a 5 Wh battery, assuming infrequent transmissions and aggressive power-saving modes. Network assumptions reinforced device architectures built around low duty cycles.
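The 10-year, 5 Wh target implies a strikingly small average power budget, which can be derived in a few lines (variable names are illustrative):

```python
# Implied average power budget behind the NB-IoT / LTE-M design target
# cited above: 10 years of operation on a 5 Wh battery.
BATTERY_WH = 5.0
TARGET_YEARS = 10
HOURS_PER_YEAR = 365.25 * 24

avg_draw_uw = BATTERY_WH / (TARGET_YEARS * HOURS_PER_YEAR) * 1e6
print(f"{avg_draw_uw:.0f} uW")  # ~57 uW average draw
```

An average budget of roughly 57 microwatts leaves no room for milliwatt-class always-on processing, which is why these standards assume devices spend nearly all their time asleep.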
Until recently, improving intelligence almost always meant increasing power consumption. More awareness required larger batteries, more frequent charging, or wired power. Devices designed to last for years were forced to operate with limited situational understanding. This binary choice shaped the first generation of connected living and industrial IoT systems.
The significance of newer event-driven approaches lies in breaking this assumption. Continuous awareness no longer requires continuous energy expenditure. Devices can remain attentive without remaining fully active, setting the stage for a different class of systems – ones capable of integrating more deeply into everyday environments without imposing escalating maintenance or power costs.
How Event-Driven and Brain-Inspired Computing Marks a Structural Shift
The architectural shift now underway is best understood as a direct response to the constraints described above, rather than as a sudden leap in raw capability. As connected systems have scaled, the dominant limitation has moved from whether intelligence is possible to whether it can be sustained economically and operationally. Event-driven and brain-inspired computing emerge from this pressure, shaped by usage patterns rather than abstract performance goals.
Traditional IoT architectures are time-based. Sensors sample on schedules, processors execute continuously, and data is transmitted at regular intervals regardless of whether anything meaningful has changed. This model assumes that more data leads to better insight. In practice, it produces growing volumes of low-signal information. IDC projects that global data creation will reach 175 zettabytes by 2025, with IoT among the fastest-growing contributors. Yet multiple industry studies suggest that less than 10 percent of sensor-generated data is ever used for real-time decision-making, highlighting a widening gap between data volume and data value.
Event-driven computing reverses this logic. Instead of treating every measurement as equally important, systems remain largely inactive until signals cross meaningful thresholds. Computation becomes conditional. Power consumption and data generation scale with relevance rather than time. This aligns digital infrastructure more closely with physical environments, where long periods of stability are punctuated by relatively few events that require action.
Brain-inspired architectures implement this principle directly in hardware. By integrating sensing, memory, and computation, they minimize idle activity and reduce data movement across power-hungry pathways. Academic benchmarks show that for always-on tasks such as keyword spotting, presence detection, or vibration anomaly monitoring, event-driven designs can reduce energy consumption by an order of magnitude, and in some cases by 10 to 100 times, compared with conventional microcontroller-based approaches. The advantage lies not in peak performance, but in sustained attentiveness at extremely low power.
Commercial systems increasingly reflect this design shift. Innatera reports that its Pulsar microcontroller supports radar-based presence detection at approximately 600 microwatts and audio scene classification around 400 microwatts. For comparison, IEEE Spectrum notes that similar always-on applications implemented with conventional electronics often consume tens of milliwatts, implying power differences of 20× to well over 100×, depending on configuration. The pattern is consistent: an ultra-low-power intelligence layer monitors continuously, while higher-power processors and radios remain dormant until needed.
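The tiered pattern can be quantified with a simple average-power model. The detector figure follows the numbers cited above; the main-processor figure and the function itself are illustrative assumptions, not a real SDK:

```python
# Average-power model for a tiered node: an ultra-low-power detector
# runs continuously, while the mW-class processor wakes only for events.


def average_power_uw(duty_cycle: float,
                     detector_uw: float = 600,   # always-on detector, per text
                     main_mw: float = 25) -> float:  # assumed MCU draw
    """Detector is always on; the main processor runs for duty_cycle of the time."""
    return detector_uw + duty_cycle * main_mw * 1000


# At a 1% event duty cycle, the node averages ~850 uW,
# versus ~25,000 uW for an MCU that never sleeps.
print(round(average_power_uw(0.01)))  # 850
```

Even with a conservative 1 percent duty cycle, the tiered design draws roughly 30 times less average power than keeping the main processor awake, which is the pattern the commercial figures above reflect.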
This change reshapes how edge devices interact with data platforms. In earlier models, endpoints generated raw measurements, edge gateways aggregated them, and cloud systems performed most interpretation. In event-driven architectures, endpoints take on first-level judgment, deciding what constitutes an event. Edge systems coordinate responses across local environments and manage latency-sensitive actions. Cloud platforms increasingly focus on fleet-level analytics, model training, compliance, and long-term optimization rather than continuous real-time inference.
The impact on data traffic is material. Network standards such as NB-IoT and LTE-M were designed around sparse communication to achieve targets such as up to 10 years of battery life with a 5 Wh battery, assuming infrequent transmissions and aggressive power-saving modes. Event-driven intelligence reinforces these assumptions instead of working against them. Devices transmit less often, but each transmission carries higher informational value.
This redistribution of intelligence also improves system behavior under real-world conditions. Decisions that once required round-trip communication can occur locally, reducing latency and dependence on connectivity. For applications such as safety monitoring, access control, health alerts, or predictive maintenance, immediate local response changes how systems are trusted and used.
The transition is not without friction. Integrating event-driven intelligence into existing platforms requires new tooling, validation processes, and operational assumptions. Data pipelines must be restructured around events rather than streams, and organizations must adapt to decision-making that begins at the endpoint. These challenges are real, but they do not negate the underlying shift.
Viewed in aggregate, this is a paradigm change rather than incremental growth. Intelligence is no longer scaled by adding bandwidth and compute. It is scaled by selectivity. Devices remain continuously aware without being continuously active. As connected systems move from novelty to infrastructure, this architectural logic becomes less an optimization than a prerequisite for sustainable scale.
From Constraint to Capability
Taken together, the developments across the preceding sections point to a clear inflection point in the evolution of connected systems. Always-on intelligence is no longer constrained primarily by power budgets, network economics, or architectural legacy. What was once a hard tradeoff between awareness and longevity is increasingly being resolved through selective, event-driven design.
The scale at which this shift matters is already substantial. IoT Analytics estimates 18.5 billion connected IoT devices in 2024, rising to 21.1 billion in 2025, with much of this growth concentrated in low-power endpoints embedded in homes, buildings, and infrastructure. In parallel, IDC projects global data creation reaching 175 zettabytes by 2025, while industry analyses consistently show that well under 10 percent of sensor-generated data is used for real-time decision-making. The gap between data produced and data acted upon has become a structural inefficiency rather than a temporary mismatch.
Roles Across the IoT Stack
| Layer | Primary Role | Typical Data Volume | Latency Sensitivity |
|---|---|---|---|
| Endpoint | First-level interpretation | Very low | Very high |
| Edge Gateway | Local coordination | Low to medium | High |
| Cloud Platform | Analytics and learning | Aggregated | Low |
Source: Ericsson; IoT Analytics
Event-driven edge intelligence directly addresses this imbalance. By reducing unnecessary data generation at the source, these architectures reshape how networks, edge gateways, and cloud platforms operate. Network standards such as NB-IoT and LTE-M already assume sparse communication to achieve targets of up to 10 years of battery life with a 5 Wh battery, according to ITU documentation. Event-driven intelligence aligns endpoint behavior with these assumptions, rather than forcing systems to compensate for constant streaming.
The economic implications compound as deployments scale. Industry research consistently shows that maintenance and operations account for a majority share of lifetime IoT costs, often exceeding initial hardware investment in large sensor fleets. Extending battery life from months to years, reducing transmission frequency, and lowering cloud ingestion volumes materially improves total cost of ownership. In cellular IoT alone, IoT Analytics reports 4.1 billion active connections in 2024, generating $18.4 billion in mobile-operator IoT revenue, underscoring how even small efficiency gains per device translate into system-wide impact.
The redistribution of intelligence across the stack also clarifies the evolving roles of edge and cloud platforms. Endpoints increasingly perform first-level interpretation. Edge systems coordinate locally and manage latency-sensitive responses. Cloud infrastructure focuses on fleet-level analytics, model updates, compliance, and long-term optimization. This is not a retreat from centralized systems, but a refinement driven by scale, cost discipline, and responsiveness.
Operational Cost Drivers in IoT Deployments
| Cost Driver | Impact on TCO | Sensitivity to Event-Driven Intelligence |
|---|---|---|
| Battery Replacement | High | Very high reduction |
| Data Transmission | Medium to high | High reduction |
| Cloud Processing | Medium | Moderate reduction |
| Field Maintenance | High | High reduction |
Source: McKinsey; IoT Analytics
The broader significance is evolutionary rather than dramatic. Connected living and industrial IoT are entering a phase where intelligence must be continuous, but effort cannot be. Architectures that rely on constant activity struggle to scale economically. Architectures built around selectivity align more closely with how physical environments behave.
In that sense, the transition now underway is less about making devices smarter in isolation than about making the entire connected stack more coherent. Intelligence moves closer to where signals originate. Data flows become intentional rather than exhaustive. Platforms evolve from repositories of raw measurements into engines of coordination and learning. As device counts continue to rise into the tens of billions, this shift is not simply advantageous. It is foundational.
Key Takeaways
- Always-on intelligence has historically forced a tradeoff between continuous awareness and long device lifetimes.
- Event-driven and brain-inspired computing break this tradeoff by activating computation only when meaningful signals occur.
- Reducing data generation at the endpoint is becoming critical as IoT deployments scale into the tens of billions of devices.
- Intelligence is being redistributed across the stack: endpoints interpret first, edge systems coordinate, and cloud platforms optimize at scale.
- Longer battery life, lower data traffic, and reduced maintenance materially improve the economics of large IoT deployments.
Sources
- IoT Analytics; Number of Connected IoT Devices Growing to 21.1 Billion by 2025
- IoT Analytics; Cellular IoT Market Report 2024
- IDC (via Seagate); Data Age 2025 – The Digitization of the World
- McKinsey & Company; The Internet of Things: The Value of Digitizing the Physical World
- International Telecommunication Union (ITU); NB-IoT Networks Training Material
- Ericsson; Ericsson Mobility Report – IoT Connections Outlook
- IEEE Spectrum; Innatera’s Neuromorphic Chip Targets Always-On AI at the Edge
- Innatera; Innatera Unveils Pulsar, the World’s First Mass-Market Neuromorphic Microcontroller for the Sensor Edge
- Innatera; Redefining the Cutting Edge: Innatera Debuts Real-World Neuromorphic Edge AI at CES 2026
- Apple; Apple Watch Battery and Performance Information
- SKF; Wireless Vibration Sensor for Condition Monitoring – Product Documentation
- Texas Instruments; Low-Power Sub-1 GHz Wireless MCUs Technical Note
- DigiKey; Minimizing Power Consumption in Energy-Harvesting Wireless Sensors
- Xu et al.; Optimizing Event-Based Neural Networks on Digital Neuromorphic Processors
- Yao et al.; Spike-Based Dynamic Computing with Asynchronous Event-Driven Processing

