According to a recent US study by publishing tech firm ePublishing, few publishers are actively using AI to create their editorial content – though around a third reported using AI to write promotional and marketing copy, headlines, or social media content. Around a fifth said they were using AI to draft articles.
Interview transcription (32%) and image creation (27%) were popular uses.
While the US typically leads the UK in AI adoption – around 46% of US office workers use AI for their job at least once a week, compared to 29% of UK office workers, according to Asana research – many UK publishers are also weighing up just how closely they should get involved with AI.
Like their American counterparts, UK publishers’ main concerns centre on editorial integrity – and not just from internal use of AI.
Earlier this month, Press Gazette reported that many publishers are taking a combative stance against AI companies hoping to improve their language models by feeding them high-quality copyrighted material.
DMG Media, publisher of the Daily Mail, Metro, and i, said in the autumn of 2023 that it was “actively seeking advice” for legal action, particularly over its headline, bullet point and article text structure being used without permission to train AI.
Encouraged further by the New York Times’ high-profile legal case against OpenAI and Microsoft, filed in December 2023, several UK publishers have either taken a stand against AI or have encouraged others to withhold access to their material, despite lucrative cash offers from AI firms.
In March, Reach chief executive Jim Mullen said in the company’s full-year results, as reported in Press Gazette, that the company was not in any active discussions with AI developers, suggesting that fellow publishers join with Reach to approach developers as a united front.
He said: “We would prefer that we don’t get into a situation where we did with the referrers ten years ago and gave them access and we became hooked on this referral traffic and we would like it to be more structured.
“We produce content, which is really valuable, and we would like to license or agree how they use our base intelligence to actually inform the AI and the open markets.”
He added that if publishers worked together within the News Media Association (NMA), they would have a strong bargaining position.
“It only takes one publisher to break away and start doing deals, and then it sort of disintegrates,” he noted.
Reach has also steadily increased its own use of AI.
While some UK publishers are outwardly hostile to AI – Mumsnet, for instance, is currently suing OpenAI for copyright breaches – many have taken cash deals. The Financial Times was the UK’s first major paper to make a deal, joining newsbrands and publishers around the world, like Time, Der Spiegel, The Atlantic, Informa, Le Monde, Reuters and the Associated Press.
News Corp’s deal is the largest reported to date, with The Wall Street Journal putting its value at $250m (£196m) over five years.
News Media Association CEO Owen Meredith told Printweek that AI had strong potential to help journalists and newsrooms – but there were plenty of issues to fix before publishers were completely confident.
He said: “AI will change all our lives and provide us with powerful tools to assist and change the way we all work, including journalists and newsrooms. With careful human oversight, AI is a useful tool for news publishers.
“But we must address the fundamental issues around the use of news publishers' content without consent, transparency, or reward in order to create a sustainable future for journalism.
“In this age of deepfakes and fake news, a strong and robust copyright framework is essential to uphold the integrity of news media."
Sajeeda Merali, chief executive of the Professional Publishers Association (PPA), added that retaining control of copyrighted content was a priority for publishers.
She said: “The adoption and expansion of AI use is complex and has multiple implications for our sector. The primary and immediate concern is that our members are suitably compensated for their work that is being used to train these large language models (LLMs). As an industry we need to ensure that all content produced by our publishers is protected and monetised accordingly.
“We recognise that this technology is nascent and ever changing, may involve inherent bias and not always give accurate results. So, while training AI on accurate, high-quality journalism from our publishers will help that, it is key that they are compensated accordingly, and licensing is one way to address that.”