
Sushant Rabra, Partner and Head, Digital Strategy and Transformation, KPMG in India
By 2030, over 75 per cent of enterprise-generated data will be created and processed at the edge – not in centralised data centres. This shift is transforming how artificial intelligence (AI) thinks, learns and acts. If AI hosted on the cloud is the central brain, edge AI is a network of intelligent nerve endings – sensing, interpreting and responding in real time.
Edge AI is inevitable
As data from sensors, devices and machines multiplies, centralised AI architectures can’t always keep up with the speed, privacy and resilience required. Edge AI processes data locally, on-device or near the source, to enable instant decisions, minimise latency and reduce dependency on connectivity. In sectors where every millisecond matters, such as autonomous driving, smart manufacturing or healthcare diagnostics, edge AI ensures that intelligence stays close to the action.
The building blocks: Federated and agentic AI
Edge AI isn’t just about location. It’s about architecture. Federated learning allows devices to train models locally and send only updates to the cloud, preserving privacy and reducing bandwidth. This model is already enabling personalised health insights from wearables and fraud detection at automated teller machines, all while keeping raw data secure. Combine that with privacy-preserving techniques such as differential privacy and homomorphic encryption, and suddenly, highly regulated fields such as healthcare and finance become viable for AI transformation.
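The federated pattern described above can be sketched in a few lines: each device adjusts the shared model using only its own data, and the server averages the resulting weights (the idea behind federated averaging). This is a minimal illustration with invented toy numbers, not a production training loop:

```python
# Minimal federated-averaging sketch: devices share model updates, never raw data.
# All function names and values here are illustrative.

def local_update(weights, local_gradient, lr=0.1):
    """Each device nudges the shared model using a gradient from its own data."""
    return [w - lr * g for w, g in zip(weights, local_gradient)]

def federated_average(updates):
    """The server averages the devices' updated weights; raw data stays on-device."""
    n = len(updates)
    return [sum(ws) / n for ws in zip(*updates)]

# Three devices start from the same global model ...
global_model = [0.5, -0.2]
# ... compute gradients locally on private data (values invented for illustration) ...
device_updates = [
    local_update(global_model, [0.4, -0.1]),
    local_update(global_model, [0.2, 0.3]),
    local_update(global_model, [0.6, -0.5]),
]
# ... and only these updated weights travel to the server for averaging.
new_global = federated_average(device_updates)
print(new_global)  # averaged global weights, roughly [0.46, -0.19]
```

Only the small weight vectors cross the network; the bandwidth and privacy benefits in the paragraph above follow directly from that design choice.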
Coming next are agentic AI systems that don’t just respond, but proactively act. Imagine electric vehicle charging stations that balance grid loads or factory robots that reroute tasks mid-process. These agents aren’t siloed; they interact across edge nodes, cloud platforms and enterprise systems. A retail shelf-scanning agent, for example, could sync with cloud analytics to automatically trigger restocking, blending autonomy with orchestration.
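The shelf-scanning example can be made concrete with a small sketch: an edge agent observes local state, decides autonomously, and emits only a compact event for cloud-side orchestration. The class, threshold and event fields below are hypothetical, standing in for a real message broker and analytics endpoint:

```python
# Hypothetical sketch of an edge agent that triggers cloud-side restocking.
# Names and thresholds are invented for illustration.

RESTOCK_THRESHOLD = 5  # units left on the shelf before reordering

class ShelfAgent:
    def __init__(self, sku, cloud_queue):
        self.sku = sku
        self.cloud_queue = cloud_queue  # stands in for a cloud analytics endpoint

    def observe(self, units_on_shelf):
        """Runs on-device after each camera scan; decides locally, reports up."""
        if units_on_shelf < RESTOCK_THRESHOLD:
            # Only a small event leaves the edge, not the raw camera feed.
            self.cloud_queue.append({"sku": self.sku,
                                     "action": "restock",
                                     "remaining": units_on_shelf})

events = []  # in place of a real message queue
agent = ShelfAgent("SKU-1042", events)
agent.observe(12)  # plenty of stock: no event raised
agent.observe(3)   # below threshold: autonomously triggers a restock event
print(events)
```

The autonomy lives at the edge (the decision), while the orchestration lives in the cloud (acting on the event), mirroring the blend described above.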
Challenges at the edge
Edge environments are notoriously fragmented, owing to limited compute, varied hardware and dynamic conditions. Running large models is often not feasible, so we rely on model compression, quantisation and distillation to shrink them without sacrificing accuracy. Some systems adopt a tiered architecture: lightweight models act locally, while complex tasks are escalated to the cloud.
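Quantisation, the simplest of these shrinking techniques, can be shown in miniature: map 32-bit float weights to 8-bit integers plus a single scale factor, cutting storage roughly fourfold at a small cost in precision. The weight values below are invented for illustration:

```python
# Post-training quantisation in miniature: float32 weights become int8 plus
# one scale factor, so a model fits tight edge memory budgets.
# The weight values are illustrative, not from a real model.

def quantise(weights):
    """Symmetric int8 quantisation: store one scale plus small integers."""
    scale = max(abs(w) for w in weights) / 127
    return [round(w / scale) for w in weights], scale

def dequantise(quantised, scale):
    """Recover approximate float weights at inference time."""
    return [q * scale for q in quantised]

weights = [0.82, -0.33, 0.05, -1.27]   # pretend these are model weights
q, scale = quantise(weights)           # int8 storage is ~4x smaller than float32
restored = dequantise(q, scale)

# The round trip loses a little precision but stays close to the original.
max_err = max(abs(a - b) for a, b in zip(weights, restored))
print(q, round(max_err, 4))
```

The worst-case rounding error is half the scale factor, which is why accuracy typically survives quantisation for well-behaved weight distributions; distillation and pruning attack model size along different axes.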
Standardisation is another hurdle. Diverse operating systems and protocols often slow down adoption. Open standards such as ONNX for microcontrollers offer promise, but interoperability remains a challenge. Real-time AI execution also demands new development pipelines. Traditional machine learning (ML) operations are too slow. AI in DevOps, a fusion of ML and software engineering practices, can potentially enable continuous model deployment at the edge.
Security is perhaps the thorniest issue. Distributed systems create more attack surfaces: adversarial inputs, data poisoning or model theft. We’re countering with secure boot mechanisms, hardware-based encryption and zero-trust architectures. Still, robust, end-to-end security must be embedded from design to deployment.
A glimpse ahead
Edge AI is already revolutionising industries. Manufacturing lines predict equipment failures before they occur. Remote clinics use portable diagnostic tools. Delivery drones navigate changing conditions in real time. These are no longer proofs of concept – they are mainstream.
The future lies in orchestration: intelligent agents across edge and cloud coordinating like digital co-workers. Imagine a defect detection agent on a factory line autonomously notifying a cloud-based quality agent, which then signals procurement to adjust orders. We will likely soon see plug-and-play AI marketplaces for vision, natural language processing or analytics, tailored to run on tiny edge devices powered by specialised chips.
To get there, we need investment in infrastructure, talent and policy. Governments must build ethical frameworks, especially as edge AI touches sensitive public systems. Organisations must prioritise modular, context-aware models that adapt to sparse, localised data. And the ecosystem must rally around interoperability and governance.
Edge AI is no longer on the horizon. It is already reshaping how we live, work and decide. By distributing intelligence where it matters most, we unlock faster reactions, deeper personalisation and smarter systems. But realising its promise means solving for scale, security and seamless collaboration.
And that future is being built today, right here, at the edge.