Amid a long-running tussle with tech platforms that has only intensified in the generative AI era, Indian news publishers are pushing back against the use of journalistic content as free raw material to train AI systems. On the opening day of the AI Impact Summit 2026 in New Delhi on Monday (February 16), a panel featuring leaders from India's media and publishing ecosystem made clear that journalistic content used to train AI models needs to be paid for. They also sought to distinguish news content from generic internet data, arguing that professionally reported content is critical to improving model accuracy and preventing hallucinations.

"Journalistic content is not free-floating content on the internet. It is intellectual property. It gets created with investment, infrastructure, and talent. That data has to be contracted. It cannot be surrendered," said LV Navaneeth, CEO of The Hindu Group.

The other speakers in the panel discussion, moderated by Ashish Pherwani of Ernst and Young (EY), were Kalli Purie, Executive Editor-in-Chief, India Today Group; Mohit Jain, COO, Bennett, Coleman & Co Ltd; Pawan Agarwal, Deputy Managing Director, Dainik Bhaskar Group; Robert Whitehead, International News Media Association (INMA) lead; and Tanmay Maheshwari, Managing Director, Amar Ujala Publications.

The call for AI companies to fairly compensate publishers comes amid growing pushback from news publishers in several jurisdictions, including the United States and India, over copyrighted material, such as news reports, being used by companies like OpenAI to train their foundation models without permission or payment. This has led to court cases, including in India, where publishers who are members of the Digital News Publishers Association (DNPA), including The Indian Express, have mounted a legal challenge against OpenAI over the "unlawful utilisation of copyrighted material".

During the panel discussion organised by the DNPA, speakers also examined the shifting value of news in the AI era, how publishers are deploying AI tools inside newsrooms, and whether the technology can help unlock new revenue streams rather than erode existing ones.

The impact of AI on publishers

Rather than diminishing the value of journalism, The Times Group's Mohit Jain argued that AI could elevate the premium on credibility and accountability.

"India is a vibrant country that is diverse and complex. And in such an environment, editorial discretion, verification, and institutional memory are not optional, they are foundational," he said.

"The press is not just something which produces information; it curates trust, provides context, and accepts the moral and the legal responsibility for what it publishes. That layer of accountability is the differentiator, and when AI begins to commoditise information, trust will become scarce, and that scarcity will create value," Jain added.

However, the INMA's Whitehead struck a more sombre note, warning that AI chatbots were already eroding referral traffic from search engines to publishers and threatening a core pillar of their business models. "How the heck are we funding journalism? AI is already destroying the value of the companies here on the stage," he said, adding that referral traffic to publishers from search engines and social media networks has seen "huge falls" in the past 12 months following the wider rollout of AI Mode and AI Overviews in Google Search.

Common use cases of AI in newsrooms

On the use of AI in newsrooms, publishers dismissed the idea that it is a substitute for journalists and pointed to a 'human moat' as a structural necessity to sustain public discourse. India Today's Kalli Purie said the news organisation has adopted an 'AI sandwich' guiding principle, "where human intent starts the AI exercise. You have AI in between to help you with something, and then you have the final decision taken by a human."

Drawing a parallel with the concomitant benefits and risks of nuclear power, Navaneeth said The Hindu uses AI to complement a human's work and help readers go deeper into an article. On using AI to increase revenues, the media executive said that AI can be used to increase engagement and retention time.

He further revealed that the news daily has developed an in-house AI model that, he said, is less likely to hallucinate because it is trained on The Hindu's own archival material. However, Amar Ujala's Tanmay Maheshwari highlighted the technical limitations of AI in multilingual news production, noting that the accuracy of most Indic-language AI models is less than 55 per cent.

Responsibility for wrongful, AI-hallucinated content

When asked where accountability should lie for wrongful content, publishers argued that AI companies, social media platforms, and even independent content creators should be held to the same legal and ethical standards as legacy news brands.

"If legacy media is responsible for the content we put out, our editor is held to very high standards. Platforms should be held to the same high standard," said Navaneeth.

Purie also called for an end to "the asymmetry of reward and punishment between legacy media and social media". "Legacy media has to follow certain guidelines. We see those same guidelines flouted on social media on an everyday basis, and we tolerate that," she added.

Outcomes from AI Summit: What publishers want

Putting forward a nine-point agenda, Purie called for transparency from companies using training data scraped from publishers' websites to build AI models. Several speakers also supported clearer structuring and labelling of content to ensure traceability, so that AI-generated content can be more reliably attributed to original sources.

They also called for the recognition of journalism as a public good, and for tech companies to improve their algorithms to reward stories that deliver social impact rather than those that merely go viral.

"Put a real value on verified content provided by proper institutions, and penalise AI hallucinations severely," Purie said.

Meanwhile, Whitehead suggested that governments should pass laws to ensure that AI models are trained on journalistic content only under paid licences. "There are billions of dollars being paid for professional content, but not to media companies. Companies in San Francisco that are buying [data] on the black market, that money needs to flow through to the media companies creating that content, and that will only happen when there's a law that requires the tech platforms to participate in a fair digital marketplace," he said, citing Norway and South Africa as two nations exploring similar regulations.