The Indian Institute of Technology (IIT) Madras’ Centre for Responsible AI (CeRAI) has announced a partnership with Ericsson for joint research in responsible artificial intelligence (AI). To mark the occasion, a symposium on responsible AI for networks of the future was organised, where leaders from Ericsson Research and IIT Madras discussed developments and advancements in the field.

During the event held at the IIT Madras campus, Ericsson signed a memorandum of understanding (MoU) to partner with CeRAI as a platinum consortium member for five years. Under this MoU, Ericsson Research will support and participate in all research activities at CeRAI.

CeRAI is an interdisciplinary research centre that envisions becoming a premier centre for both fundamental and applied research in responsible AI, with immediate impact on the deployment of AI systems in the Indian ecosystem. AI research is of high importance to Ericsson, as 6G networks are expected to be driven autonomously by AI algorithms.

Commenting on the partnership, Prof. Manu Santhanam, dean, Industrial Consultancy and Sponsored Research, IIT Madras, said, “Research on AI will produce the tools for operating tomorrow’s businesses. IIT Madras strongly believes in impactful translational work in collaboration with the industry, and we are very happy to collaborate with Ericsson to do cutting edge R&D in this subject.”

Meanwhile, Dr. Magnus Frodigh, global head, Ericsson Research, said, “6G and future networks aim to seamlessly blend the physical and digital worlds, enabling immersive augmented reality/virtual reality (AR/VR) experiences. While AI-controlled sensors connect humans and machines, responsible AI practices are essential to ensure trust, fairness, and privacy compliance. Our focus is on developing cutting-edge methods to enhance trust and explainability in AI algorithms for the public good. Our partnership with CeRAI at IIT Madras is aligned with the Indian government’s vision for the Bharat 6G program.”

To commemorate the partnership, a panel discussion on ‘Responsible AI for Networks of the Future’ was held during the symposium, and some of the current research activities at the Centre for Responsible AI were showcased.

Further, Prof. B. Ravindran, faculty head, CeRAI, IIT Madras, and Robert Bosch Centre for Data Science and AI (RBCDSAI), IIT Madras, said, “Networks of the future will enable easier access to high-performing AI systems. It is imperative that we embed responsible AI principles from the very beginning in such systems. Ericsson, being a leader in future networks, is an ideal partner for CeRAI to drive this research and to facilitate the adoption of responsible design of AI systems.”

Additionally, Prof. B. Ravindran said, “With the advent of 5G and 6G networks, many critical applications are likely to be deployed on devices such as mobile phones. This requires new research to ensure that AI models and their predictions are explainable and to provide performance guarantees appropriate to the applications they are deployed in.”

The speakers and panellists of the symposium included Prof. R. David Koilpillai, Qualcomm Institute chair professor, IIT Madras; Dr. Harish Guruprasad, core member, CeRAI, IIT Madras; Dr. Arun Rajkumar, core member, CeRAI; Dr. Jorgen Gustafsson, head of AI, Ericsson Research; Dr. Catrin Granbom, head of Cloud Systems and Platforms, Ericsson Research; and Kaushik Dey, research leader, AI/ML, Ericsson Research – India.

Some of the key projects presented during the symposium included:

  • The project on large language models (LLMs) in healthcare, which focuses on detecting biases shown by the models, scoring methods for the real-world applicability of a model, and reducing biases in LLMs. Custom scoring methods are being designed based on the risk management framework (RMF) put forth by the National Institute of Standards and Technology (NIST), the US federal agency for advancing measurement science and standards.
  • The project on participatory AI addresses the black-box nature of AI at various stages, including pre-development, design, development and training, deployment, post-deployment and audit. Taking inspiration from domains such as town planning and forest rights, the project studies governance mechanisms that enable stakeholders to provide constructive inputs for better customisation of AI, improve accuracy and reliability, and raise objections over potential negative impacts.
  • Generative AI models based on attention mechanisms have recently gained significant interest for their exceptional performance in tasks such as machine translation, image summarisation, text generation, and healthcare, but they are complex and difficult for users to interpret. The project on the interpretability of attention-based models explores the conditions under which these models are accurate but fail to be interpretable, algorithms that can improve the interpretability of such models, and the patterns in the data that these models tend to learn.
  • Multi-agent reinforcement learning (MARL) for trade-off and conflict resolution in intent-based networks: Intent-based management is gaining traction in telecom networks due to strict performance demands. Existing approaches often use traditional methods, treating each closed loop independently and lacking scalability. This project studies a MARL method to handle complex coordination and to encourage loops to cooperate automatically when intents conflict. Current efforts explore the generalisation abilities of the model by leveraging explainability and causality for the joint actions of agents.
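The interpretability work on attention-based models described above ultimately comes down to inspecting the weight distributions an attention layer produces. As a rough, self-contained illustration (not CeRAI’s actual method), the sketch below computes scaled dot-product attention weights in NumPy and scores each query’s distribution by its entropy, a common proxy for how focused, and hence how readable, an attention pattern is:

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention_weights(Q, K):
    # Scaled dot-product attention weights: softmax(Q K^T / sqrt(d)).
    d = Q.shape[-1]
    return softmax(Q @ K.T / np.sqrt(d))

def attention_entropy(weights):
    # Shannon entropy of each query's attention distribution.
    # Low entropy = sharply focused on few keys (easier to read as an
    # "explanation"); high entropy = diffuse, harder to interpret.
    eps = 1e-12  # guard against log(0)
    return -(weights * np.log(weights + eps)).sum(axis=-1)

# Toy example: 4 queries attending over 6 keys, dimension 8.
rng = np.random.default_rng(0)
Q = rng.standard_normal((4, 8))
K = rng.standard_normal((6, 8))

W = attention_weights(Q, K)   # shape (4, 6); each row sums to 1
H = attention_entropy(W)      # one entropy score per query
print(W.shape)
print(H)
```

Each row of `W` is a probability distribution over the keys, so its entropy is bounded by `log(6)`; queries whose entropy sits near that ceiling attend almost uniformly and offer little interpretive signal, which is exactly the kind of accurate-but-uninterpretable regime the project investigates.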