In the rapidly evolving world of artificial intelligence, generative AI (genAI) has shown remarkable capabilities, from creating realistic images to drafting complex documents. The use of AI in the news and content creation process is inevitable and has many advantages, but the sophistication of the technology and its use is still evolving. These advancements and advantages come with a significant caveat: so-called “AI hallucinations”. This phenomenon occurs when AI generates content that is plausible but inaccurate, or even entirely fabricated.
Eliminating hallucinations completely is likely to be impossible, because large language models construct sentences by predicting the next word in a sequence, based on statistics learned from their training data. In that sense, genAI might be less catchily, but more accurately, described as a data-driven response algorithm.
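To illustrate the point (and this is a deliberately simplified toy, not how commercial genAI systems are actually built, which use neural networks trained on vast datasets), the sketch below predicts each next word purely from how often words follow one another in a small sample text. The output reads fluently, but nothing in the process checks whether the sentence is true:

```python
# Purely illustrative toy example: a "next word" predictor built from corpus
# counts. Fluent output is not the same as factual output - the model only
# knows which words tend to follow which, not whether the result is true.
import random
from collections import defaultdict

corpus = (
    "the court ruled in favour of the claimant . "
    "the court ruled against the publisher . "
    "the publisher issued a correction ."
).split()

# Count which words follow each word in the sample text.
follows = defaultdict(list)
for current_word, next_word in zip(corpus, corpus[1:]):
    follows[current_word].append(next_word)

def generate(start: str, length: int = 8) -> str:
    """Pick each next word at random, weighted by how often it appeared."""
    words = [start]
    for _ in range(length):
        candidates = follows.get(words[-1])
        if not candidates:
            break
        words.append(random.choice(candidates))
    return " ".join(words)

print(generate("the"))
# Possible output: "the court ruled against the claimant ." - grammatical and
# statistically plausible given the corpus, but never checked against reality.
```

The same underlying dynamic, at vastly greater scale, is why a genAI tool can produce a confident, well-written statement that is simply wrong.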
Whatever we call it, news organisations and content creators that use genAI outputs without proper care and attention to the risk of AI hallucinations face a variety of legal risks: breach of copyright, data protection, defamation, privacy or consumer law, to name a few.
If it’s any comfort, the media industry and the legal profession are in many respects in the same boat. There have already been several cases where lawyers have attempted to use case law that was hallucinated by genAI to support their arguments in court. Media outlets and law firms that find themselves associated with headlines of this kind are likely to see their reputations damaged and trust in their content diminished.
Earlier this year, Apple itself became the news story, as it had to suspend an AI feature that produced inaccurate summaries of news headlines. The tech company received a complaint from the BBC after the AI-generated service issued a news alert branded with the corporation’s logo falsely telling some iPhone users that Luigi Mangione, who is accused of killing the UnitedHealthcare chief executive, Brian Thompson, had shot himself. Other false notices that carried the BBC logo included one claiming Luke Littler had won the PDC World Darts final before playing in it and another that the tennis player Rafael Nadal had “come out” as gay. Other news organisations were affected by the errors, with a summary of New York Times alerts wrongly claiming that Israel’s prime minister, Benjamin Netanyahu, had been arrested.
The increasing prevalence of what is often colloquially referred to as ‘AI-generated slop’ in our newsfeeds has, therefore, created a demand for fact-verification reporting – as practised by the likes of BBC Reality Check, Full Fact and NewsGuard.
Some other recent headlines generated by AI hallucinations include:
- a court ordering Air Canada to honour a discount mistakenly offered by its AI chatbot
- a journalist reporting on a crime being mistakenly identified as the perpetrator
- an Australian mayor who blew the whistle on a bribery scandal being falsely described as having been convicted of it
- a false claim that a US radio host was being sued for embezzlement and fraud
On a similar note, Press Gazette has recently reported on the prevalence of stories based on quotes from expert individuals who do not, in fact, exist. With many media outlets relying on a small number of journalists with more work to do than there is time in the day, the temptation to publish a story without fully verifying the details first must sometimes be hard to resist.
What are the risks and challenges?
Let’s consider for a moment a worst-case scenario, where a publisher is accused of defamation. This hasn't yet been tested in the UK in the context of AI-generated content but, in principle, where defamatory content is published, legal action can be brought against the “author”, “editor” and “publisher” of the statement. This could include the genAI company, but claims are most likely to be brought against the “publishers” of the content (such as journalists, or even members of the public sharing it). Our reputation management team is closely monitoring AI-related developments and is well placed to advise individuals who think they may have been defamed (by AI-generated content or otherwise), as well as organisations in the media sector seeking to minimise the risk of legal action being brought against them in relation to published content. In short, even if you're relying on AI as your source, the same fact-checking principles should be applied as with all content.
Defamatory statements also tend to include individuals’ “personal data”, ie information relating to them or from which they can be identified, which also engages data protection principles. Such information is subject to legal protections, including requirements that it be accurate and only be shared or otherwise processed in particular circumstances. Information relating to an individual’s criminal offences, health, sex life, or religious or political opinions is afforded greater protection, as it is classed as sensitive personal data. The General Data Protection Regulation and its principles may be relevant in the context of genAI publications, and can provide affected individuals with an additional cause of action. Our reputation management and data protection teams work closely together to advise clients in a comprehensive way, considering the intricacies of these different legal regimes.
Deepfakes
AI unintentionally producing inaccurate content is worrying, but of greater concern is the fake content, often referred to as ‘deepfakes’, that AI tools can produce in the hands of malicious actors.
Deepfakes are video or audio content that has been created or manipulated using AI. They often portray public figures, such as politicians, athletes or celebrities, doing or saying something which they have not actually done or said. Deepfakes are also being used in scams, and recently resulted in a finance worker paying out £25 million to fraudsters following what they thought was a video call with their CFO, which had in fact been generated with deepfake technology.
The legal framework for addressing deepfakes is evolving, as the Government seeks to balance promoting innovation (and avoiding stifling AI progress through over-regulation) with ensuring the law adequately protects the investment original creators make.
If deepfakes are used for criminal purposes, such as to trick facial recognition software in order to fraudulently access your online banking, there is clear legal recourse. Additionally, the UK government is cracking down on sexually explicit deepfakes and has indicated that creation of such content is to be made into a criminal offence with a maximum sentence of two years’ imprisonment.
In other, less obvious contexts, the legal rights in England that provide protection against deepfakes are a patchwork at best. We have already published detailed notes on the legal remedies that may be available (see links below). The legal risks of relying on, or republishing, such content should always be fully explored prior to publication.
Conclusion
The paying public may expect more in the way of entertainment from the media than it does from its lawyers, but people expect to be able to trust what they read, just as they expect to be able to trust the legal advice they receive. At Mills & Reeve, we're grappling with many of the same issues faced by media businesses, as we seek to find ways to use AI to deliver our work more efficiently, without inadvertently relying on hallucinated material.
In the next article in our series we explore what news organisations and content creators can do if their own work is being used by AI companies to train their algorithms, and look at early disputes in this complex area.
Further reading
We've published other articles on the use of AI in the content creation process and the legal risks of deepfakes. If you wish to read further, please see: