
Girish Dhavale, Senior Vice President, Nxtra by Airtel
India’s data centre industry is growing at a pace that few sectors can match. From the first data centre launched by VSNL in Chennai in March 1986 through to December 2023, total utilisation across the country had accumulated to 687 MW. In 2024 alone, 400 MW was added; in 2025, that figure rose to 670 MW. By 2030, the industry is expected to reach 3 GW of capacity. The sector is growing at a CAGR of 24-30 per cent, and the number of co-location operators has grown from five to 21. India is no longer simply a domestic market: it is becoming a disaster recovery and database hub for organisations across Europe and the US.
Three factors are driving this. First, connectivity: undersea cables landing in Mumbai, Chennai, Kochi and now Visakhapatnam give these locations a direct gateway to global networks, making them natural points of concentration for data centre investment. Second, talent: India has a deep pool of engineers who understand data centres from both the hardware and software perspectives, available at competitive costs. Third, power: subsidised and increasingly green power, combined with dense 400-800 Gbps metro fibre networks and 5G connectivity, completes the infrastructure proposition.
Understanding AI load
Artificial intelligence (AI) workloads are distinct from conventional data centre demand and must be understood in two parts: training and inferencing. Training involves processing enormous volumes of data to build a model: the model that later lets a platform like ChatGPT respond to queries, an insurance system evaluate risk or a matchmaking algorithm rank profiles. Training is extremely compute-intensive, but typically accounts for only 10-30 per cent of total graphics processing unit (GPU) load. Inferencing, the application of trained models in real time, accounts for the remaining 70-90 per cent of compute activity and is where latency becomes the critical variable.
Every smart car on the road today transmits approximately 2,000 data points per minute to a data centre. Legal systems are being digitised. Smart cities are being planned across 156 urban centres. AI is moving into daily operations across every sector, and the infrastructure requirements are substantial.
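The scale of that telemetry can be sketched with simple arithmetic. The 2,000-points-per-minute rate is from the figures above; the average payload size per data point is purely an illustrative assumption, since the article does not state one:

```python
# Back-of-envelope telemetry volume for one connected car.
# 2,000 data points/minute is the rate cited in the article;
# the 200-byte average payload per point is an assumption.

POINTS_PER_MINUTE = 2_000
BYTES_PER_POINT = 200              # assumed average payload
MINUTES_PER_DAY = 24 * 60

points_per_day = POINTS_PER_MINUTE * MINUTES_PER_DAY
bytes_per_day = points_per_day * BYTES_PER_POINT

print(f"Data points per car per day: {points_per_day:,}")              # 2,880,000
print(f"Raw telemetry per car per day: {bytes_per_day / 1e6:.0f} MB")  # 576 MB

# Scaled to a hypothetical fleet of one million such cars:
fleet_tb_per_day = bytes_per_day * 1_000_000 / 1e12
print(f"Fleet of 1M cars: {fleet_tb_per_day:.0f} TB/day")              # 576 TB/day
```

Even under these modest assumptions, a single national fleet generates hundreds of terabytes of raw data daily, which is the kind of sustained ingest that only dedicated data centre capacity can absorb.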
Changes in physical infrastructure
The shift in load density is the most consequential change. Rack density has moved from 3 kW per rack historically to 80 kW today with conventional cooling, and up to 130 kW with liquid immersion cooling. A single GB200 chip now carries 14 kW of load. With 6 to 12 GPUs per rack, air cooling is no longer adequate. The industry has moved to direct-to-chip cooling, where cold plates are mounted directly on the processors and coolant is circulated at the chip level to extract heat. At the rack level, chilled water pipelines are now brought directly to the server row, with hybrid cooling combining air for low-density loads with chilled water for high-density ones. At the facility level, this means a primary chilled water loop, an IT-side secondary loop and peripheral cooling distributed across the room.
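The cooling arithmetic behind these densities can be sketched with the standard heat-transport relation Q = m·cp·ΔT. The rack loads (3 kW legacy, 80 kW conventional, 130 kW liquid-cooled) are the figures above; the 10 K supply/return temperature rise and the water properties are illustrative assumptions, not from the article:

```python
# Illustrative chilled-water flow needed to remove a given rack heat load,
# using Q = m_dot * cp * delta_T. Rack densities are from the article;
# the 10 K coolant temperature rise and water properties are assumptions.

CP_WATER = 4186.0      # specific heat of water, J/(kg*K)
RHO_WATER = 1000.0     # density of water, kg/m^3
DELTA_T = 10.0         # assumed supply/return temperature rise, K

def coolant_flow_l_per_min(heat_load_kw: float) -> float:
    """Volumetric water flow (litres/min) needed to carry away heat_load_kw."""
    mass_flow = heat_load_kw * 1000.0 / (CP_WATER * DELTA_T)  # kg/s
    volume_flow = mass_flow / RHO_WATER                       # m^3/s
    return volume_flow * 1000.0 * 60.0                        # L/min

for rack_kw in (3, 80, 130):
    print(f"{rack_kw:>3} kW rack -> {coolant_flow_l_per_min(rack_kw):6.1f} L/min of water")
# A 130 kW rack needs roughly 186 L/min under these assumptions.
```

The same relation explains why air fails at these densities: the volumetric heat capacity of air is roughly 3,500 times lower than that of water, so moving equivalent heat by air would require impractical airflow volumes through the rack.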
Civil requirements have also changed. Floor load capacity has increased from 1,000-1,500 kg per square metre to 3,000 kg per square metre, driven by the weight of high-density liquid cooling infrastructure.
The manufacturing opportunity in India
The components enabling this cooling transformation – coolant distribution units, server-level cold plates, rear-door heat exchangers and Internet of Things (IoT) flow sensors – are almost entirely imported. Lead times for these products currently run at 4-6 months, and geopolitical disruptions are adding further delays. The metallurgy involved, SS304 and SS316 grade piping with passivation processes, is not beyond Indian manufacturing capability. The opportunity to develop these solutions domestically is real, the demand visibility over the next 5-7 years is clear and the forex outflow from continued import dependency is unnecessary.
India is deploying liquid cooling infrastructure in parallel with the US and Europe, not after them. The knowledge is being generated here. What is needed now is for that knowledge to translate into domestic product development, so that Indian manufacturers and service providers are supplying reliability solutions for AI infrastructure, not just consuming them.
Any leakage or flow disruption in a liquid cooling system can trigger a cascading failure across the entire IT load it serves. For systems supporting stock exchanges, financial platforms or public services, the consequences of that disruption are measured in billions. The reliability imperative is as strong as the growth opportunity, and both point in the same direction: India needs to build this capability itself.