IIT-Madras's CeRAI and Ericsson Join Forces to Advance Responsible AI in 5G and 6G Era
As per the MoU, Ericsson Research will actively support and engage in all research endeavours undertaken by CeRAI. The partnership is expected to make significant contributions to the development of ethical and responsible AI practices in the evolving technological landscape in the country

IIT-Madras’s Centre for Responsible AI (CeRAI) and Ericsson have entered into a partnership aimed at advancing the field of Responsible AI. The collaboration was officially announced during a symposium on ‘Responsible AI for Networks of the Future’ held at the premier institute’s campus on Monday.

The highlight of the event was the signing of an agreement by Ericsson, designating CeRAI as a ‘Platinum Consortium Member’ for a five-year term. Under this Memorandum of Understanding (MoU), Ericsson Research will actively support and engage in all research endeavours undertaken by CeRAI.

The Centre for Responsible AI at IIT-Madras is recognised as an interdisciplinary research hub with a vision of becoming a premier institution for both fundamental and applied research in Responsible AI. Its immediate goal is to deploy AI systems within the Indian ecosystem while ensuring ethical and responsible AI practices.

This partnership between CeRAI and Ericsson is expected to make significant contributions to the development of ethical and responsible AI practices in the evolving technological landscape in the country.

What is Responsible AI?

Responsible AI is an approach to developing and deploying AI systems in a safe, trustworthy, and ethical way. It is not just about following a set of rules or guidelines, but about taking a thoughtful and intentional approach to AI development and deployment. It means considering the potential risks and benefits of AI and making sure that AI systems are used in a way that is fair, equitable, and beneficial to all.

AI research has become increasingly important in recent years, especially in the context of the forthcoming 6G networks that will be driven by AI algorithms. Dr Magnus Frodigh, Global Head of Ericsson Research, highlighted the significance of responsible AI in the development of 6G networks. He emphasised that while AI-controlled sensors will connect humans and machines, responsible AI practices are essential to ensure trust, fairness, and privacy compliance.

Addressing the symposium, Prof Manu Santhanam, Dean of Industrial Consultancy and Sponsored Research at IIT-Madras, expressed optimism about the collaboration, stating that research in AI will shape the tools for operating businesses in the future. He emphasised IIT-Madras’s commitment to impactful translational work in collaboration with industry.

Prof B Ravindran, Faculty Head of CeRAI and of the Robert Bosch Centre for Data Science and AI (RBCDSAI) at IIT-Madras, elaborated on the partnership, stating that the networks of the future will facilitate easier access to high-performing AI systems.

Prof Ravindran stressed the importance of embedding responsible AI principles from the outset in such systems. He also highlighted that with the advent of 5G and 6G networks, new research is required to ensure that AI models are explainable and can provide performance guarantees suitable for various applications.

Some of the projects showcased by the institution during the event included:

  • Large-Language Models (LLMs) in Healthcare: This project focuses on detecting biases exhibited by large language models, developing scoring methods to assess their real-world applicability, and reducing biases in LLMs. Custom scoring methods are being designed based on the Risk Management Framework (RMF) proposed by the National Institute of Standards and Technology (NIST). A rough sketch of what paired-prompt bias scoring can look like appears after this list.
  • Participatory AI: This project addresses the black-box nature of AI at various stages of its lifecycle, from pre-development to post-deployment and audit. Drawing inspiration from domains like town planning and forest rights, the project explores governance mechanisms that enable stakeholders to provide constructive inputs, thereby enhancing AI customization, accuracy, and reliability while addressing potential negative impacts.
  • Generative AI Models Based on Attention Mechanisms: Attention-based generative models have drawn interest for their strong performance across a wide range of tasks. However, these models are often complex and challenging to interpret. This project focuses on improving the interpretability of attention-based models, understanding their limitations, and identifying the patterns they tend to learn from data. A generic example of inspecting attention weights follows this list.
  • Multi-Agent Reinforcement Learning for Trade-off and Conflict Resolution in Intent-Based Networks: With the growing importance of intent-based management in telecom networks, this project explores a Multi-Agent Reinforcement Learning (MARL) approach to handle complex coordination and conflicts among network intents. It aims to leverage explainability and causality for the joint actions of network agents. A toy illustration of conflicting intents appears after this list.
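
A minimal, hypothetical sketch of the paired-prompt idea behind the LLM bias work described above: a set of prompt pairs that differ only in a demographic attribute is scored by the model, and the average gap is reported. The `query_model` callable, the prompt pair, and the scoring rule are invented placeholders, not CeRAI's methodology or the NIST RMF scoring itself.

```python
# Illustrative paired-prompt bias probe for an LLM.
# `query_model` is a hypothetical callable standing in for any model that
# returns a scalar score (e.g. probability of a positive label) for a prompt.
from typing import Callable, List, Tuple


def bias_gap(query_model: Callable[[str], float],
             prompt_pairs: List[Tuple[str, str]]) -> float:
    """Average absolute score difference across prompt pairs that differ
    only in a demographic attribute; larger values suggest more bias."""
    gaps = [abs(query_model(a) - query_model(b)) for a, b in prompt_pairs]
    return sum(gaps) / len(gaps)


if __name__ == "__main__":
    # Toy stand-in model: scores prompts starting with "He" higher.
    def toy_model(prompt: str) -> float:
        return 0.9 if prompt.startswith("He") else 0.7

    pairs = [("He is a skilled engineer.", "She is a skilled engineer.")]
    print(f"bias gap: {bias_gap(toy_model, pairs):.2f}")  # prints 0.20
```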
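
For the attention-interpretability project, a generic first step is simply to expose a model's attention weights and see which tokens each position attends to. The sketch below uses the Hugging Face transformers library with bert-base-uncased purely for brevity; it is not a model or method attributed to CeRAI, and attention weights alone give only a partial explanation.

```python
# Inspect which token each position attends to most, averaged over heads.
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased", output_attentions=True)

inputs = tokenizer("Responsible AI matters for future networks.", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# outputs.attentions: one tensor per layer, shaped (batch, heads, seq, seq).
last_layer = outputs.attentions[-1]
avg_heads = last_layer.mean(dim=1)[0]        # (seq, seq), averaged over heads

tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0])
for i, tok in enumerate(tokens):
    j = int(avg_heads[i].argmax())           # most-attended token for position i
    print(f"{tok:>12} -> {tokens[j]}")
```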
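
The MARL project description can be made concrete with a deliberately tiny example: two independent learners whose reward functions encode conflicting intents (throughput versus energy saving) and a shared penalty when their actions clash. The pay-off numbers, actions, and independent Q-learning set-up are invented for illustration and are not the project's actual formulation.

```python
# Toy conflict between two network "intents", learned with independent,
# stateless Q-learning. Agent T rewards high transmit power (throughput),
# agent E rewards low power (energy); mismatched choices incur a penalty.
import random

ACTIONS = ["low", "high"]
ALPHA, EPS, EPISODES = 0.1, 0.2, 5000


def rewards(a_t: str, a_e: str):
    r_t = 1.0 if a_t == "high" else 0.3      # throughput intent
    r_e = 1.0 if a_e == "low" else 0.3       # energy-saving intent
    if a_t != a_e:                           # unresolved conflict penalty
        r_t, r_e = r_t - 0.4, r_e - 0.4
    return r_t, r_e


def pick(q: dict) -> str:
    return random.choice(ACTIONS) if random.random() < EPS else max(q, key=q.get)


q_t = {a: 0.0 for a in ACTIONS}
q_e = {a: 0.0 for a in ACTIONS}

for _ in range(EPISODES):
    a_t, a_e = pick(q_t), pick(q_e)
    r_t, r_e = rewards(a_t, a_e)
    q_t[a_t] += ALPHA * (r_t - q_t[a_t])     # stateless Q-value update
    q_e[a_e] += ALPHA * (r_e - q_e[a_e])

print("throughput agent values:", q_t)
print("energy agent values:    ", q_e)
```

In this toy set-up, each agent settles on the action its own intent prefers and both pay the conflict penalty, which is the kind of trade-off a coordination mechanism would need to resolve.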
