In a few days, global CEOs and tech elites will join tens of thousands of attendees at the AI Impact Summit in New Delhi. Investors, tech firms and governments increasingly see India as the next AI centre: a place to test scale, to accelerate talent development and to anchor the future of AI in the Global South. But India’s leadership will not be measured solely by the business it attracts or the celebrities on its panels – it will be judged by whether it is willing to set the rules as well as the stage.
Investors want not only innovative approaches to growth but also assurances that responsibility is embedded in the governance of new technology. This is especially true in markets historically vulnerable to the costs of unchecked technology deployment.
If the Global South is to shape the future of AI, it must also lead in establishing respect and accountability for users’ rights from the outset. India’s opportunity is to show that responsible AI development is not a brake on growth but a competitive advantage, and companies doing business in India should expect a regulatory model that embraces both objectives. The Indian government’s own seven principles for responsible AI recommend a rigorous independent oversight model as the most desirable form of regulation. This matters because the way a giant like India defines its approach to the technology will influence industry and regulation far beyond its own borders.
The Oversight Board, where I am a Board Member, is the only mechanism of its kind offering a human rights check on Big Tech’s decisions. It independently addresses Meta’s most contentious content issues using a consistent framework grounded in human rights principles. It provides redress for users and promotes accountability and transparency, which benefits businesses that advertise on Meta’s platforms and investors who want to invest responsibly.
The Board has defended free expression and judged when limits on it are permissible to stop real-world harms, including violence. It has helped shape and refine Meta’s policies and processes – independent of government interference and commercial interests – for instance, making rules available in more languages and ensuring users know what standard they have broken before receiving a strike on their account.
Such independent oversight can be applied to AI governance. We have already ruled on AI-generated posts and automated content moderation. In a case concerning manipulated imagery of then US President Joe Biden, the Board made clear that AI generation or modification of imagery is not in itself grounds for removal, as that would limit free speech. But labelling content as fake can mitigate harm by informing people that what they are seeing is not real.
And when necessary to prevent harm, we have said content must be removed from Meta’s platforms: for example, in a case involving explicit AI images resembling female public figures from India and the United States, which violated the women’s rights to privacy and to protection from mental and physical harm. While much more still needs to be done, Meta credits our work here as the catalyst for its AI-labelling programme and says it has consequently labelled hundreds of millions of pieces of AI-generated or manipulated content in a single month.
We have urged Meta to identify and treat equally AI-generated posts across formats – audio, still images and video – including in situations such as elections and financial scams, where fraudulent content can have outsized impacts. Right now, we are debating how Meta should deal with the most consequential issues of the day, including what to do about AI content that inflames wars and seeks to monetise interest in them, as with the recent Israel-Iran conflict.
Social media and LLMs are not like-for-like. The former deals mainly with user posts; the latter synthesises information from multiple sources for users. But users of both need to be able to object, and to receive redress, when they encounter content they believe is hateful or harmful, or when their access to information is denied.
Social media and LLMs are also both global. They need policies informed by local political and cultural understanding while remaining consistent across borders. The number of languages in India alone will necessitate tailored approaches to LLM development and governance that are inclusive of local contexts.
Independent oversight helps ensure those local voices are represented in governance – our Board has more than a dozen nationalities on it. This has meant, for instance, that when moderators have misinterpreted words and excised them from the platforms, we have had them restored.
In the fog of AI development and rollout, independent ethical decision-making bodies can drive companies to make better, rights-respecting decisions. To date, Meta is the only firm that has submitted its platforms to such meaningful independent oversight and public scrutiny, for which it deserves credit. Other companies, which for now rely on advisory bodies, should have similar oversight, whether from our organisation or another. We have many hard-won lessons to share that can help AI companies grapple with these challenges without starting afresh.
One glaring contrast between social media and LLMs lies in the content policies that govern them. Social media content policies run to masses of webpages; LLMs’ content policies are scant. Meta AI’s published “user policy” is just over three pages. OpenAI’s guidelines barely exceed 1,000 words. Anthropic recently launched Claude’s updated “constitution”, which is supposed to establish its values and behaviour but provides no mechanism for outside enforcement.
While it might be argued that AI companies do not want moderation, Meta, for instance, has already banned its LLM from carrying out “impersonation” and “disinformation” – so moderation is already happening. Inevitably, there will be regulation of what is permissible on AI platforms, as there is on social media. AI companies can act now or will likely be legislated into change. At this pivotal moment for Indian and global AI companies’ development, independent oversight can help them grow in a way that is responsible to local users and satisfies markets around the world, whether the EU, the US or elsewhere. That should also reassure investors pushing for India to become the next AI super hub.