Although still evolving, India’s data centre ecosystem is entering a decisive phase as the demands of artificial intelligence (AI), cloud and digital infrastructure reset benchmarks for power, speed and global competitiveness. In a panel discussion organised by tele.net, key stakeholders gathered to examine what it takes to build AI-ready data centres, spanning power density, liquid cooling, high-speed network fabrics, security and sustainability. The key takeaways from the discussion are presented below.
AI has completely changed how data centres are designed and operated. The stable, incremental growth that once defined compute demand has given way to unpredictable surges and extreme density. Racks that once drew 5 kW now consume tens or even hundreds of kilowatts, forcing a complete re-evaluation of the infrastructure. Cooling, power, connectivity and operations must all be rebuilt to support workloads that are heavier, faster and far less forgiving.
AI workloads differ from traditional computing in two key ways: density and variability. Modern graphics processing unit (GPU)-based clusters can jump from idle to full load within seconds during model training, and the resulting heat output and power swings challenge every assumption of conventional design. At the same time, latency requirements have tightened dramatically. Applications such as real-time analytics, fraud detection and autonomous systems rely on near-instant responses measured in milliseconds. This shift is pushing compute power closer to users, driving edge data-centre development and changing how resources are distributed geographically.
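The pull toward the edge is easy to see from propagation delay alone. The back-of-the-envelope sketch below is purely illustrative: the distances and the roughly 200,000 km/s signal speed in optical fibre are assumptions, not figures from the panel, and real latency budgets also include switching, queuing and processing time.

```python
# Rough, illustrative estimate of fibre propagation delay versus distance.
# Assumes light travels at ~200,000 km/s in optical fibre (refractive index ~1.5)
# and ignores switching, queuing and processing delays, which add further latency.

SPEED_IN_FIBRE_KM_PER_MS = 200.0  # ~200,000 km/s expressed per millisecond

def round_trip_ms(distance_km: float) -> float:
    """Return the round-trip propagation delay in milliseconds."""
    return 2 * distance_km / SPEED_IN_FIBRE_KM_PER_MS

for distance in (50, 500, 2000):  # edge site, regional hub, distant core (km)
    print(f"{distance:>5} km -> {round_trip_ms(distance):5.1f} ms round trip")

# Output:
#    50 km ->   0.5 ms round trip
#   500 km ->   5.0 ms round trip
#  2000 km ->  20.0 ms round trip
```

Distance alone can consume most of a millisecond-level latency budget, which is why compute keeps moving closer to users.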
Traditional incremental approaches to cooling, power and networking no longer suffice. AI demands denser power distribution, efficient thermal systems, robust connectivity and intelligent operational control, which translates into a complete architectural rethink.

Key challenges
The first challenge lies in managing extreme density and variability. The GPU clusters that drive AI workloads consume far more power than legacy racks and produce proportionally more heat. Traditional air systems such as underfloor distribution, fan walls or raised-floor air delivery simply cannot move heat fast enough once power exceeds a few tens of kW per rack; beyond that point, even the best air cooling hits its thermal ceiling, forcing throttling or risking failure. This inefficiency compounds cost, as operators burn more energy on cooling than on computation, and the race to maintain an acceptable power usage effectiveness (PUE) becomes harder with every added GPU.
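PUE is simply total facility power divided by the power delivered to IT equipment, so every watt spent on cooling pushes the ratio up. The minimal sketch below uses hypothetical figures, not numbers from the discussion, to show how quickly cooling overhead translates into continuous energy cost at AI densities.

```python
# Minimal PUE illustration with assumed, hypothetical figures.
# PUE = total facility power / IT equipment power; lower is better (1.0 is ideal).

def pue(it_power_kw: float, cooling_kw: float, other_overhead_kw: float) -> float:
    """Return power usage effectiveness for the given loads."""
    total = it_power_kw + cooling_kw + other_overhead_kw
    return total / it_power_kw

# Hypothetical 1 MW of GPU IT load under two cooling regimes.
air_cooled = pue(it_power_kw=1000, cooling_kw=500, other_overhead_kw=100)     # ~1.6
liquid_cooled = pue(it_power_kw=1000, cooling_kw=150, other_overhead_kw=100)  # ~1.25

print(f"Air-cooled PUE:    {air_cooled:.2f}")
print(f"Liquid-cooled PUE: {liquid_cooled:.2f}")
# Every 0.1 of PUE on a 1 MW IT load is roughly 100 kW of continuous overhead,
# or about 876 MWh a year, which is why cooling efficiency dominates the economics.
```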
Power infrastructure presents additional stress. Denser loads mean heavier racks and higher electrical currents. Raised floors and legacy busways are often inadequate. Facilities must be structurally reinforced, while redundancy plans must be rethought to avoid cascading failures. When each rack holds a disproportionate share of capacity, a single fault can have an outsized impact.
Further, the networking layer introduces another constraint. AI clusters depend on fast, bidirectional data flow, but networks built for high throughput at relatively relaxed latency – adequate for web or cloud storage – cannot support the microsecond-level communication required by distributed model training. Legacy fibre layouts and connectors designed for conventional workloads have become bottlenecks.
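To get a feel for the communication burden, consider gradient synchronisation in data-parallel training. The rough sketch below uses an assumed model size, gradient precision and link speeds rather than any figures cited by the panel, and ignores the overlap and collective-communication techniques real systems use.

```python
# Back-of-the-envelope gradient synchronisation time for data-parallel training.
# All figures are assumptions for illustration; real all-reduce implementations
# overlap communication with compute and use more sophisticated scheduling.

def sync_time_ms(params_billion: float, bytes_per_param: int, link_gbps: float) -> float:
    """Approximate time to move one full set of gradients over a single link."""
    payload_bits = params_billion * 1e9 * bytes_per_param * 8
    return payload_bits / (link_gbps * 1e9) * 1000  # milliseconds

# Hypothetical 7-billion-parameter model, 2 bytes per gradient (FP16/BF16).
for link in (25, 100, 400):  # Gbit/s
    print(f"{link:>3} Gbps link -> ~{sync_time_ms(7, 2, link):6.0f} ms per naive exchange")

# Output (approximate):
#  25 Gbps link -> ~  4480 ms per naive exchange
# 100 Gbps link -> ~  1120 ms per naive exchange
# 400 Gbps link -> ~   280 ms per naive exchange
```

With training steps typically completing in well under a second, exchanges at these speeds would leave expensive accelerators idle, which is why fabrics keep moving to higher speeds and tighter latency budgets.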
Beyond physical constraints, there is a platform integration gap. Many AI initiatives begin as proofs of concept that deliver short-term results but stall when scaled. Additionally, AI’s massive power draw increases the urgency of sustainable operation. Operators must find ways to reduce energy use, recover waste heat and design for long-term efficiency without compromising performance.
Finally, migration complexity and skills gaps remain practical barriers. Most organisations cannot rebuild from scratch. They must retrofit legacy infrastructure while maintaining uptime. This means blending old and new systems – combining air with liquid cooling and legacy power systems with modular pods. Technicians trained for conventional systems now face steep learning curves in fluid dynamics, filtration and high-density fibre management.
Industry strategy and solutions
The most significant leap forward is in cooling technology. Air systems have reached their limits, and the industry is now turning to liquid-based approaches that deliver cooling directly to the source. Cold plates were the first breakthrough, circulating coolant across processor surfaces to increase heat transfer. Microchannel cooling advanced this further by routing liquid through microscopic channels near the chip surface, removing heat with exceptional precision.
Some operators are now exploring microjet and micro active cooling, which use targeted sprays of dielectric fluids to cool hotspots instantly. Others are adopting rear-door heat exchangers as retrofit-friendly options, integrating liquid loops into existing racks. For facilities starting fresh, immersion cooling, where servers are submerged in non-conductive fluids, offers the highest efficiency, capable of handling hundreds of kW per rack. However, liquid cooling demands strict operational discipline. Systems must remain spotless – even microscopic particles can block channels or cause leaks.
Parallel to cooling, power architecture is evolving toward modular “pods”. These are self-contained units that enable predictable expansion while isolating risk and supporting controlled scaling.
The networking foundation is also being reimagined to meet the demands of AI-scale workloads. In these environments, latency and reliability have become as critical as capacity. Operators are moving toward high-performance fibre systems that reduce on-site installation errors and enable faster roll-outs. Networks are now designed with built-in flexibility to support future upgrades in speed and bandwidth, ensuring that physical infrastructure can adapt as data requirements continue to grow.
Moreover, purpose-built edge data centres, typically 5-10 MW, are emerging as the next standard. These smaller facilities combine dense power and liquid cooling with high-capacity interconnects to hyperscale cores. Together, core and edge create a distributed, low-latency fabric optimised for both training and inference workloads.
At a systemic level, the solution lies in platform unification. The most successful AI-ready infrastructures integrate data, compute and networking into one orchestrated framework. Data must reside in the right place and in the right format, accessible to both on-premise and cloud resources. Hybrid models, where sensitive data stays local and scalable workloads move to the cloud, balance flexibility with compliance.
Composability, the ability to reallocate compute and storage seamlessly, has become the defining capability of modern infrastructure. Meanwhile, AI-assisted building management systems now use sensor data to control temperature, detect faults and optimise power use in real time.
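As a rough idea of what such automation involves, the sketch below models a single control sweep; the sensor fields, setpoints and actions are hypothetical and not drawn from any particular building management product.

```python
# Minimal, hypothetical sketch of an automated cooling-control loop.
# Sensor and actuator interfaces are invented for illustration; a production
# building management system would use vendor APIs, hysteresis and safety interlocks.

from dataclasses import dataclass

@dataclass
class RackTelemetry:
    rack_id: str
    inlet_temp_c: float     # air or coolant inlet temperature
    power_draw_kw: float

TARGET_INLET_C = 27.0       # assumed setpoint
ALERT_INLET_C = 32.0        # assumed fault threshold

def control_step(readings: list[RackTelemetry]) -> list[str]:
    """Return a list of actions derived from the latest sensor sweep."""
    actions = []
    for r in readings:
        if r.inlet_temp_c >= ALERT_INLET_C:
            actions.append(f"{r.rack_id}: raise alarm and throttle workload")
        elif r.inlet_temp_c > TARGET_INLET_C:
            actions.append(f"{r.rack_id}: increase coolant flow")
        elif r.power_draw_kw < 5 and r.inlet_temp_c < TARGET_INLET_C - 3:
            actions.append(f"{r.rack_id}: reduce cooling to save energy")
    return actions

sweep = [RackTelemetry("R01", 26.1, 42.0), RackTelemetry("R02", 33.4, 78.5)]
print(control_step(sweep))
# ['R02: raise alarm and throttle workload']
```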
Digital twins add a further layer of intelligence. By simulating networks, power paths and cooling systems, they enable predictive maintenance and faster fault resolution. Problems that once required hours of on-site diagnosis can now be isolated in minutes, with technicians dispatched precisely where needed.
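One way to picture the idea: if the twin records which power feed and cooling loop serves each rack, a fault affecting several racks can be traced to the component they share. The sketch below is illustrative only, with an invented topology.

```python
# Illustrative fault localisation against a simple "digital twin": a model of
# which power feed and cooling loop serves each rack. Topology data is invented.

TWIN = {
    "R01": {"power_feed": "busway-A", "cooling_loop": "CDU-1"},
    "R02": {"power_feed": "busway-A", "cooling_loop": "CDU-2"},
    "R03": {"power_feed": "busway-B", "cooling_loop": "CDU-2"},
}

def shared_dependencies(affected_racks: list[str]) -> dict[str, set[str]]:
    """Return the upstream components common to every affected rack."""
    common: dict[str, set[str]] = {}
    for field in ("power_feed", "cooling_loop"):
        values = {TWIN[r][field] for r in affected_racks}
        if len(values) == 1:
            common[field] = values
    return common

# Two racks report trouble; the twin points straight at the shared busway.
print(shared_dependencies(["R01", "R02"]))
# {'power_feed': {'busway-A'}}
```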
Similarly, sustainability now drives design choices: advanced cooling, modular builds, heat recovery and efficient network provisioning cut energy use and waste, making “green” a financial necessity tied to return on investment. Legacy estates are shifting to a hybrid model, with upgrades centred on high fibre-count cabling, factory-terminated trunks and clean power paths. Many operators also deploy containerised or mobile data centres as temporary capacity, as they are faster to install, easier to relocate and ideal for pilot AI workloads during construction.
Finally, skills and governance complete the equation. Teams must master new technical domains like liquid cooling chemistry, microchannel maintenance and precision fibre diagnostics. Standardised governance frameworks for commissioning, cleanliness and monitoring align vendors and operators, preventing costly downtime. Proactive maintenance scheduling and vendor coordination turn reactive problem-solving into structured reliability.
Looking ahead
The AI era has transformed the data centre from a static facility into a dynamic, intelligent system. The core technologies, including liquid cooling, modular pods, high-density fibre, automation and digital twins, are already proven. What determines success now is execution: clean engineering, disciplined operations and seamless integration. Future-ready data centres will be judged not by their size but by their adaptability. Facilities treating AI as an architectural opportunity will set new standards for efficiency, reliability and sustainability. The most advanced designs will evolve continuously, balancing performance with responsibility and cost with creativity.
Building an AI-ready data centre is no longer about more power or space; it is about precision. Every layer must align toward sustaining intelligent workloads at scale, and those who master this integration will enable the AI revolution.
Google’s planned AI hub in Visakhapatnam is a case in point. Making this commitment at a time when trade relations between the US and India are strained is, in itself, a vote of confidence in India. Google also faces antitrust challenges in India and an Indian lawsuit challenging YouTube’s AI policy, and Indian policymakers have been promoting local alternatives to Google’s cloud, Gmail and Google Maps. By ramping up its presence here, Google may be calculating that it can stave off these nascent threats.
The new hub may create 188,000 jobs – Google currently has around 14,000 employees in India. The buildout also involves Bharti Airtel and the Adani Group. AdaniConneX, a joint venture (JV) between Adani Enterprises and EdgeConneX, will co-develop the core infrastructure and invest in new transmission lines, clean energy generation and energy storage systems.
Airtel will build a new cable landing station (CLS) in Visakhapatnam to host new international subsea cables connecting to Google’s global infrastructure; the station will also host Meta’s Project Waterworth global subsea system. Google’s Blue-Raman cable, which connects to Mumbai, is also expected to be up and running by end 2025.
Airtel will, moreover, supply an intra-city and inter-city fibre network that will increase the capacity of India’s digital backbone. The new hub will offer a full stack of AI solutions, including Google’s custom Tensor Processing Units (TPUs), to enable local AI processing. Google will also provide access to its AI models – including Gemini – and its platform for building agents and applications, as well as supporting services such as Search, YouTube, Gmail and Google Ads.
This is a cornerstone investment for the Andhra Pradesh government, which has ambitious plans to develop 6 GW of data centre capacity by 2029. India has only about 1.5 GW of data centre capacity at present, so Andhra Pradesh is pushing the envelope. The hub itself is expected to scale to multi-gigawatt capacity by 2030.
Data centres need enormous power as well as space to house the computing, networking and cooling equipment required to collect, process, store and distribute data. The Andhra Pradesh government has offered subsidised land and subsidised power and water rates. Other states are also offering similar incentives. Maharashtra, for example, is hosting an $8.3 billion cloud facility for Amazon.
This is part of the digital infrastructure required to create a $1 trillion digital economy by FY 2028. Apart from a pool of skilled labour, India has cheap data rates and a large, growing population of data consumers. Moreover, India’s Sovereign Cloud Policy, which pushes for data localisation, and the Digital Personal Data Protection Act and its accompanying rules offer corporates comfort in terms of data safety and security.
Amazon, Google and Microsoft – the big guns in cloud services – are all locating facilities in India, along with Indian companies such as Reliance and Airtel. Vizag, Mumbai, Chennai, Pune, Hyderabad, Delhi NCR, Bengaluru and Kolkata are among the locations seeking investment.
Mega projects such as Google’s AI hub should accelerate India’s AI mission. Vizag will help India learn to develop and deploy cutting-edge AI. In September, NITI Aayog estimated that AI could contribute an extra $500 billion-$600 billion to the economy by 2035.
The Google AI hub will provide the critical foundation to drive growth and enable businesses, researchers and creators to build and scale with AI. This should help India set the pace for innovation, digital inclusion and economic growth.