Everton Football Club’s first competitive match at its new Hill Dickinson Stadium generated the kind of coverage that accompanies moments of sporting transition. A new ground, a strong performance, and an atmosphere that supporters described as electric. However, the narrative shifted from football to measurement. Headlines and social media posts confidently repeated the claim that crowd noise had reached 126 decibels, making the stadium the loudest in the Premier League and the fifth loudest in world football. The number was precise, scientific in tone, and widely shared. It is also unverified.
There was no official decibel reading taken during the match. No data released by Everton Football Club. No verification by the Premier League or an independent acoustic authority. This does not mean the stadium was not loud, or even exceptionally loud. It means that the specific claim being circulated existed only as a statement, not as an observed or recorded fact. The distinction between those two things may appear pedantic, but it is foundational to how knowledge, legitimacy, and authority are produced.
What makes this episode analytically significant is that it was not a conventional case of misinformation. The number did not spread because of a single false report or a misunderstanding that was passively repeated. Instead, it emerged through a process in which artificial intelligence systems were actively nudged, corrected, and trained into reproducing an unverified claim.
The claim began to circulate independently of its origin. Screenshots of AI-generated responses were shared in supporter groups. Posts referencing the number appeared across platforms. Familiarity replaced verification. Journalists who encountered the claim met it not as an isolated assertion, but as a repeated and apparently corroborated fact. The presence of an AI-generated answer gave the appearance that the information had already been synthesised from reliable sources. In practice, the system had simply learned which answer users appeared to want.
Large language models do not verify facts or assess evidentiary quality. They predict plausible outputs based on patterns in data and interaction. Repetition increases confidence, not accuracy. This characteristic becomes particularly consequential when applied to emerging events or data voids, where authoritative information does not yet exist. In such contexts, AI systems do not wait for facts to solidify; they fill the gap with the most statistically plausible narrative available.
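The dynamic can be made concrete with a toy simulation. The sketch below is not a real language model or any actual training procedure; the candidate answers, weights, and update rule are invented for illustration. It shows how a system that reinforces whichever answer earns user approval will converge on the popular claim, because truth never enters the update rule.

```python
import random

# Toy illustration (not any real LLM): a responder that chooses between two
# candidate answers and updates its preferences from user feedback alone.
class FeedbackTunedResponder:
    def __init__(self):
        # Equal prior weight on a cautious answer and the dramatic claim.
        self.weights = {"no verified figure exists": 1.0, "126 decibels": 1.0}

    def answer(self, rng):
        # Sample an answer in proportion to its current weight.
        total = sum(self.weights.values())
        r = rng.random() * total
        for candidate, w in self.weights.items():
            r -= w
            if r <= 0:
                return candidate
        return candidate

    def learn(self, candidate, approved):
        # The update uses only approval signals -- verification plays no part.
        self.weights[candidate] *= 1.5 if approved else 0.7

rng = random.Random(0)
bot = FeedbackTunedResponder()
# A simulated audience that "corrects" the bot toward the dramatic number.
for _ in range(50):
    given = bot.answer(rng)
    bot.learn(given, approved=(given == "126 decibels"))

# The preferred answer is now the approved one, not the accurate one.
print(bot.weights["126 decibels"] > bot.weights["no verified figure exists"])  # True
```

Each round strictly widens the gap in favour of the approved answer, which is the feedback loop the paragraph above describes: repetition increases confidence, not accuracy.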
Criminology offers a useful framework for understanding this dynamic; I explore it in more depth in a recent podcast episode.
Classic work on moral panic and deviancy amplification, associated with scholars such as Stanley Cohen, emphasised that social reaction does not merely respond to behaviour but actively shapes it. The problem was never simply that media narratives were inaccurate. Rather, attention, repetition, and institutional response produced new social realities. What we are witnessing now is a technologically mediated version of the same process. Reaction creates reality, but AI now sits inside the feedback loop.
The implications for fraud and financial crime are profound. Fraud prevention depends not on philosophical truth, but on operational truth: what is counted, labelled, and acted upon within systems. Yet fraud data is rarely gold-standard. Confirmed fraud is scarce, prosecutions are slow, and outcomes are fragmented. Much of what practitioners work with consists of suspicion, inference, and probabilistic judgement. When AI systems collapse the distinction between intelligence and evidence, they institutionalise uncertainty while presenting it as certainty.
This is not an argument against artificial intelligence. It is an argument against uncritical trust. AI outputs should be treated as leads, not facts. Provenance must be reintroduced into workflows. Screenshots should never become evidence. Small, carefully audited gold-standard datasets are essential anchors in an environment saturated with plausible noise. In media and public communication, one principle should be non-negotiable: chatbots are not primary sources.
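One way to reintroduce provenance is to make evidentiary status an explicit field on every claim entering a workflow. The schema below is a hypothetical sketch, not taken from any named system: the class names and statuses are invented, and a real implementation would carry far richer metadata. The point is that an AI output defaults to a lead, and promotion to fact requires naming a primary source.

```python
from dataclasses import dataclass
from enum import Enum

class Status(Enum):
    LEAD = "lead"          # unverified, e.g. an AI-generated answer or screenshot
    VERIFIED = "verified"  # confirmed against a named primary source

# Hypothetical record type: every claim carries its provenance and status.
@dataclass(frozen=True)
class Claim:
    text: str
    source: str                   # where the claim was obtained
    status: Status = Status.LEAD  # AI output enters as a lead, never a fact

    def verify(self, primary_source: str) -> "Claim":
        # Promotion to VERIFIED is only possible by citing a primary source.
        return Claim(self.text, primary_source, Status.VERIFIED)

claim = Claim("Crowd noise reached 126 dB", source="chatbot screenshot")
print(claim.status.value)  # lead
```

Making the default status a lead, and making verification an explicit step that names its source, operationalises the principle that chatbots are not primary sources.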
The most dangerous capacity of AI is not fabrication itself. It is the ability to make fabricated claims feel as though they were always true.
© 2025. The Financial Crime Lab. All Rights Reserved