India’s data centre industry is at an inflection point, with artificial intelligence (AI) fundamentally altering how facilities are designed, built and operated. Power densities are surging, cooling architectures are being rebuilt from the ground up, and the pressure to deliver capacity faster than ever before is exposing gaps in skills, supply chains and execution quality. At tele.net’s 9th Annual Conference on Data Centres in India, held in Mumbai in April 2026, Shashi Bhushan, Head – Design and Engineering, STT Global Data Centres India; Kunjumon Francis, Senior General Manager Sales – Data Centres, APAR Industries Limited; Purushothama Rao, MEP Head, Data Centres, Colliers India; Manoj Semwal, Director – Design Delivery, APAC, Equinix; and Hemant Sonawane, Head – Data Centre Planning and Deployment, AdaniConneX, shared their perspectives on what it takes to design and build data centres that are ready to meet the demands of an AI-driven world…


How have data centre design practices evolved, and how is AI changing infrastructure requirements?
Shashi Bhushan
The transformation over two decades in this industry has been considerable. Before 2000, there were no recognised standards, no TIA-942, no Uptime Institute guidelines, and no hot aisle or cold aisle configurations. Racks were placed front to back, and those facilities were more server rooms than true data centres. Post-2000, standards adoption picked up and co-location became a recognised model, though many enterprises were initially reluctant to place their IT equipment in shared facilities.
In that era, 3-5 kW per rack was standard, and anything above 6-8 kW was called high density. A data centre with 300 racks per floor and a couple of thousand racks in total was considered large. By 2015, the unit of measurement changed from the number of racks to megawatts of capacity. By 2020, with AI entering the market, rack densities of 50-130 kW became the reference point, and 130-140 kW is now the requirement for NVIDIA's GB300 platform.
Air cooling is no longer an option at these densities. Liquid is a far superior medium for heat extraction, and moving to liquid-cooled infrastructure is now inevitable. Within a couple of years, densities of 1 MW per rack are possible. Structural requirements have also shifted dramatically. Floor loading that was designed for 1,500 kg per square metre under TIA-942 Tier III guidelines is now insufficient. Fully loaded racks can weigh up to 3,500 kg, requiring a floor loading capacity of 3,000 kg per square metre or more. Floor-to-ceiling height has also increased from 4.5 metres to 7.5 metres. As such, infrastructure is still trying to catch up with IT.
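The structural arithmetic above can be sanity-checked with a short calculation. The rack mass and floor ratings are the figures cited in the discussion; the per-rack footprint is an assumed illustrative value, not a figure from the panel:

```python
# Rough structural check: does a floor rating support a given rack?
# Rack mass (~3,500 kg) and floor ratings are from the discussion;
# the footprint (~1.2 m^2 of slab allocated per rack) is an assumption.

def required_floor_loading(rack_mass_kg: float, footprint_m2: float) -> float:
    """Uniform load the slab must carry, in kg per square metre."""
    return rack_mass_kg / footprint_m2

legacy_rating = 1500   # kg/m^2, typical pre-AI design per TIA-942 Tier III
ai_rack_mass = 3500    # kg, fully loaded liquid-cooled AI rack
footprint = 1.2        # m^2, assumed slab allocation per rack

demand = required_floor_loading(ai_rack_mass, footprint)
print(f"Required: {demand:.0f} kg/m^2 vs legacy rating {legacy_rating} kg/m^2")
```

Under these assumptions the demand works out to roughly 2,900 kg per square metre, which is consistent with the move to slabs rated at 3,000 kg per square metre or more.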
Kunjumon Francis
The design challenge today has several dimensions. In terms of power, the traditional model of fixing a location first and then arranging power supply is being reversed. Operators now look for where power is already available and build there. This changes not just logistics but also the entire planning process. Transmitting power from the generating source to a remote data centre site introduces considerable complexity, and integrating renewable energy into this adds another layer, since green energy sources are often located far from demand centres and must be managed dynamically within the grid.
For cooling, rack heights are increasing from 42U to 47U, and NVIDIA’s guidelines call for at least 6 metres of floor-to-ceiling height to support liquid cooling deployments. Once load concentration increases, the floor’s load-bearing capacity in terms of civil design must be recalculated. Fire safety design must also be revisited when cooling configurations change.
Similarly, cable selection is a growing concern. With AI accelerators like the GB200 and GB300, significantly higher current flows are required. Cables must be planned well in advance since they cannot easily be replaced. The cable’s insulation behaviour at higher temperatures must be assessed, as must whether a better cable specification can reduce cable tray size and, in turn, reduce structural ceiling load requirements.
Networking requirements have also escalated. AI data centres need at least three times the network speed of conventional facilities. Finally, geographical positioning has taken on new importance. The Iran conflict demonstrated that even geographically separated data centres and disaster recovery sites can be taken offline simultaneously. Ring topology across multiple sites with automatic failover is now a serious design consideration.
Purushothama Rao
The changes in physical specifications alone tell the story. Raised floor plenum depth has grown from 300-450 millimetres to 1.2 metres. Floor-to-ceiling height has risen from 3 metres to 6.5-7 metres. Cooling loads per rack have gone from 3 kW to 40 kW, and with AI workloads, a single rack can now draw up to 120 kW or even approach 1 MW. Power levels that once peaked at 7 MW per facility are now going to 200 MW. Disaster recovery, once placed within the same country, is increasingly being deployed in another country entirely.
Further, liquid cooling is now the primary response to extreme density, with several approaches in use. AI tools and predictive maintenance methods are being used to anticipate risks and manage infrastructure proactively. Bandwidth requirements, which were once modest, are now reaching 700 Gbps in AI environments.
Manoj Semwal
The shift in design over the past few years has been significant across every layer of infrastructure. On the electrical side, rack densities that once ranged from 6 kW to 10 kW per rack are now moving to 30-100 kW and beyond. This has forced a parallel transformation in cooling. Air-side units such as computer room air conditioners and air handlers (CRACs/CRAHs) have reached their thermal limits and can no longer serve these load levels alone. The industry is now moving to liquid-based cooling solutions to keep pace with AI and high-performance computing requirements.
At the same time, the design philosophy itself is changing. The industry is shifting from bespoke, project-by-project solutions to standardised, scalable and modular products. Data centres are being built with fewer floors than before, because placing air-cooled chillers on rooftops has physical constraints that become unworkable as capacity scales. Prefabricated and factory-fitted mechanical, electrical and plumbing (MEP) infrastructure is gaining ground, where entire sections, including pipework and cable trays, are assembled off-site and simply integrated on arrival at site.
On the operations side, AI-based software is enabling better prediction of how infrastructure will behave under varying loads. Digital twins, which create virtual replicas of a facility, allow operators to simulate performance and identify problems before they occur. This is a significant step forward in how data centres are managed.
Hemant Sonawane
AI is acting as a disruptor from a data centre design perspective. One significant change relates to availability standards. The 99.999 per cent uptime target may give way to 99.99 per cent, because AI workloads can tolerate brief interruptions by failing over to another availability zone. As a result, uninterruptible power supply (UPS) systems are already being removed from AI data centre designs in many cases, and generators may follow in very large campuses where the number of units required becomes unmanageable.
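The difference between those two availability targets is easy to quantify. A short calculation (standard availability arithmetic, not figures from the panel) shows what each target allows in annual downtime:

```python
# Annual downtime permitted at a given availability target.
MINUTES_PER_YEAR = 365 * 24 * 60  # 525,600

def allowed_downtime_minutes(availability: float) -> float:
    """Minutes of downtime per year permitted at this availability."""
    return (1 - availability) * MINUTES_PER_YEAR

five_nines = allowed_downtime_minutes(0.99999)  # ~5.3 minutes/year
four_nines = allowed_downtime_minutes(0.9999)   # ~52.6 minutes/year
print(f"99.999%: {five_nines:.1f} min/year; 99.99%: {four_nines:.1f} min/year")
```

Relaxing from five nines to four nines buys roughly 47 extra minutes of tolerable downtime a year, which is what makes failover to another availability zone, rather than on-site UPS, a viable design stance for AI workloads.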
A less discussed but significant change is fibre density within campuses. In a 200-300 MW or gigawatt-scale campus, the number of conduits required underground to interconnect buildings could be 20-30 times what is being deployed today. This is a major infrastructure shift that requires early planning.
What are the biggest execution challenges at scale?
Shashi Bhushan
India’s data centre ecosystem, including design consultants and general contractors, is currently operating at roughly 50-60 per cent of the quality level seen in more mature markets. The precision of design documentation and the quality of execution on-site have some distance to cover before reaching the standard expected in a global context. This will improve, but it is realistic to expect it will take another three to four years for the ecosystem to reach that level of maturity.
Kunjumon Francis
India is no longer following designs proven elsewhere. Hyperscalers are using India as a testing ground for automation and other approaches not yet implemented in other markets. This means that learning and implementation are happening simultaneously, which adds to execution complexity.
Timelines are compressing from 36 months to 24 months to 11 months to 8 months. MEP contractors must be brought into the project much earlier than before, so they can plan manpower, coordinate on-site activities and manage material procurement. Cables, conduits and cable trays must be planned well in advance, or quality suffers. When MEP teams are engaged late, work is assigned to whoever is available rather than trained personnel, and the resulting quality problems affect performance. Commissioning phases from Level 1 to Level 5 take time and must be planned so that MEP is completed before testing begins.
Purushothama Rao
From a project management perspective, execution begins to improve the moment a project management consultant (PMC) is engaged early, immediately after the request for proposal is issued by stakeholders. The PMC can then coordinate between the design team and vendors, build a realistic schedule and structure parallel workstreams to compress timelines. Ongoing quality control, value engineering and cost management must run continuously through the project, not as end-stage reviews.
Risk assessment and mitigation planning must be defined clearly before work begins. Stakeholder communication has improved considerably, with dedicated communication channels now being established at the start of projects rather than improvised midway. For large modular structures, testing at the original equipment manufacturer (OEM) level before components arrive on-site is one of the most effective ways to cut construction time and ensure a clean commissioning process.
Manoj Semwal
Skilled manpower remains the most critical gap. The quality of construction work in India and in a mature market like Singapore is markedly different, even though the workforce in Singapore is largely made up of Indian professionals working under a different quality discipline. This points to a systemic issue around standards and accountability on-site rather than a fundamental capability deficit. The industry is improving, but the gap is visible. Power availability is also becoming a constraint that will affect construction timelines, not just operations. As data centres move towards gigawatt-scale campuses, power at that magnitude is already beginning to tighten. This will be one of the most prominent execution challenges in the near term.
Sustainability governance is the third area to watch. Currently, there are no formal compliance requirements in India around sustainability for data centre construction or design. That will change, and the industry needs to begin preparing for it now rather than treating it as a future consideration.
Hemant Sonawane
Speed is the first and most pressing challenge. Delivery timelines have compressed from 24-30 months a few years ago to 12-18 months today. Customers now expect full capacity in a single handover rather than in phases. To meet this, operators must secure powered land from day one, meaning land with a credible pathway to 500 MW, 700 MW or 1 GW of power already identified. Master planning for power must happen at the land selection stage. Without it, delivery within 18 months is not achievable.
Standardised reference designs that can be replicated across modules within a campus are essential. The approach of building warm shells first and completing the MEP fit-out upon customer order is one way of compressing delivery. India also lacks the skilled workforce needed to execute and operate at this scale. From roughly 1.5 GW of capacity today, the industry is expected to reach around 10 GW by 2030, roughly a six- to seven-fold increase. Prefabrication is key to managing this. Components must be built and tested off-site, then integrated and commissioned on-site.
Are there any industry or government initiatives to address the skills gap?
Shashi Bhushan
STT Global Data Centres runs training centres in Pune and Bengaluru. The primary need for skilled personnel in data centres is in operations. Diploma engineers and ITI-qualified candidates are trained through these programmes and receive certification on completion. They are not obligated to join STT and can take their certification to any employer. This is an effort driven by a sense of social responsibility towards the wider industry.
Kunjumon Francis
Several OEMs run periodic product and operations training for their specific systems. At a broader level, proposals have been made to the Data Centre Council and to event organisers to take up with the government the possibility of embedding relevant syllabi into engineering college curriculums. One OEM has already worked with a university in north India to introduce a building management system engineer curriculum, with the company’s own experts serving as visiting faculty. These are early but meaningful steps towards creating an industry-aligned talent pipeline at scale.
Hemant Sonawane
AdaniConneX is initiating a similar programme, bringing in fresh diploma and degree holders or candidates with limited experience for six to eight months of structured training. At the end of the programme, participants are either absorbed into AdaniConneX or released to the broader industry. The initiative is designed to contribute to the talent pipeline rather than serve only internal requirements.
What are your thoughts on future trends in design and construction?
Shashi Bhushan
Campus development has become the norm rather than the exception. For large campuses, moving from 220 kV substations to 33 kV distribution within the campus and positioning high-efficiency transformers as close to the load as possible reduces both losses and reliability risks. Decoupled downstream infrastructure at the floor level is the most important design principle for future-proofing. The ability to swap fan wall units for coolant distribution units, or to modify power distribution at the floor level without touching the high-side infrastructure, is what makes a facility genuinely adaptable as technology changes.
Magnetic levitation chillers, which are 20-25 per cent more efficient than conventional screw chillers, offer a strong return on investment at the operating temperatures that AI-dense facilities require. High voltage direct current distribution at 800 V, with prototypes already built and commercial deployment expected within the next year, eliminates UPS systems for AI loads by extending utility supply directly to the equipment. Further increases to 1,300 V are expected within a few years. Each conversion eliminated improves efficiency. Infrastructure transformation is far from complete.
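The point that each eliminated conversion stage improves efficiency can be illustrated with a simple chain calculation. The per-stage efficiencies below are assumed round numbers for illustration, not measured figures from any vendor or from the panel:

```python
from math import prod

# Illustrative only: stage efficiencies are assumed round numbers.
def chain_efficiency(stages: list[float]) -> float:
    """Overall efficiency of power-conversion stages in series."""
    return prod(stages)

# Conventional AC path: transformer -> double-conversion UPS -> rack PSU
ac_path = [0.99, 0.94, 0.95]
# HVDC path: transformer/rectifier -> rack DC bus (UPS stage eliminated)
dc_path = [0.98, 0.97]

print(f"AC path:   {chain_efficiency(ac_path):.1%}")
print(f"HVDC path: {chain_efficiency(dc_path):.1%}")
```

Because losses compound multiplicatively, dropping even one 94-95 per cent stage recovers several percentage points of end-to-end efficiency, which at hundreds of megawatts is a material saving.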
Kunjumon Francis
Innovation is occurring simultaneously across power, cooling and speed. On the power side, pumped storage systems are emerging to store and deploy renewable energy, and some global operators have moved towards nuclear power as a baseload source. On cooling, fully liquid immersion cooling has not yet been widely deployed but is coming, along with significant development in coolant chemistry and technology. On the speed of construction, standard modular designs of 1.8-2 MW are enabling faster replication. Digital twins applied from the earliest stage of a project, with data accessible to all ecosystem partners, testing agencies and operations teams from the start, offer the most effective path to a faster, cleaner project handover process.
Purushothama Rao
Cooling will continue to evolve as AI pushes rack loads from 30-40 kW today towards 100 kW and beyond. Liquid cooling offers 3,000 times greater heat transfer efficiency than air, and its adoption will accelerate. Direct liquid cooling using non-water dielectric fluids can also help reduce overall water consumption. Three-dimensional simulation and digital twins used before construction can identify cooling hotspots, allowing rack layout adjustments before physical installation rather than after.
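A back-of-envelope physics check supports the order of magnitude cited above. Comparing the volumetric heat capacity of water and air, i.e. how much heat each cubic metre of coolant carries per kelvin of temperature rise, using standard textbook property values at around 25 degrees Celsius:

```python
# Compare how much heat a cubic metre of water vs air carries per kelvin.
# Density and specific-heat values are standard figures at ~25 C.

def volumetric_heat_capacity(density_kg_m3: float, cp_j_kgk: float) -> float:
    """Heat carried per cubic metre per kelvin, in J/(m^3*K)."""
    return density_kg_m3 * cp_j_kgk

water = volumetric_heat_capacity(997.0, 4186.0)  # liquid water
air = volumetric_heat_capacity(1.184, 1005.0)    # dry air

print(f"Water carries ~{water / air:.0f}x more heat per unit volume than air")
```

The ratio comes out near 3,500, broadly consistent with the thousands-fold advantage cited for liquid over air as a heat-transfer medium.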
Manoj Semwal
The direction is clearly towards higher density and fully productised, standardised solutions rather than custom designs for each project. One trend the industry has not yet fully reckoned with is quantum computing. AI disrupted the market rapidly over the past two years. Quantum computing has the potential to create a significantly larger disruption to infrastructure requirements than AI has done so far.
Hemant Sonawane
AI-driven design and construction monitoring will be central to future efficiency gains. Design cycles that currently take three months could be reduced to two weeks using AI tools. Construction drawings for general contractors will increasingly be generated by AI, with four-dimensional and five-dimensional models, incorporating time and cost alongside the three-dimensional geometry, driving the roll-out process. Creating a digital twin from day one and handing it over to operations teams at the end of the project represents the most integrated and effective approach to data centre delivery that the industry can move towards.