Developers building large language models (LLMs) under the IndiaAI Mission have reportedly been told to treat bias mitigation as a core requirement before their systems go live. Officials at the electronics and IT ministry said that, given India's social and cultural diversity, government-supported foundational models must be designed to avoid insensitive or discriminatory responses, especially when confronted with complex prompts.
The reminder comes as foundational models often inherit biases present in the data they are trained on, an issue that global companies have also faced in the past.