This post, authored by Paul Birch, is republished with permission from The Daily Sceptic.
In light of recent revelations regarding West Midlands Police’s use of artificial intelligence (AI) to fabricate information about Israeli football fans, you would think that the police would be a little hesitant about the wider use of such technology. But you would be wrong.
In a recent interview with the Telegraph, Sir Andy Marsh, the head of the College of Policing, said that police were evaluating up to 100 projects in which officers could use AI to help tackle crime. These include tools such as “predictive analytics” to target criminals before they strike, redolent of the 2002 film Minority Report. The aim, according to Home Secretary Shabana Mahmood, is to put the “eyes of the state” on criminals “at all times”. This is to be outlined further in an upcoming white paper on police reform.
The expansion of AI use in British policing is continually being sold as innovation, efficiency and protection. But in reality it marks a decisive step towards a society in which liberty is treated as a risk to be managed. Wrapped in the language of safety and reform, AI represents a quiet but profound transformation of the state’s relationship with its citizens: from upholder of the law to permanent overseer of behaviour.
Every policing area already has an intelligence unit responsible for ‘predictive analytics’. Crimes logged in police indices are scrutinised by analysts, who produce reports and briefings on crime hotspots and the like. Police resources can then be directed to a particular location at a particular time to tackle or prevent the crime. AI can never adequately replace a team of trained professionals going through that data. What it probably can do is a rough approximation at a fraction of the cost, which matters more to most senior officers than civil liberties. Not so much Minority Report as Heath-Robinson.
The core injustice is clear. Policing in a supposedly free society responds to crimes that have already occurred, or prevents them through highly visible uniformed patrols. So-called predictive policing reverses that logic by directing the power of the state at everybody, nearly all of whom will have done nothing illegal, on the basis of statistical guesses about what they might do. This is not a mere technical adjustment to policing, as some would have us believe; it is a complete change of emphasis, under which everyone is potentially guilty until proven innocent. Mass surveillance (for that is what it is) will be imposed without charge, without trial and without a verdict, because there will be no formal accusation.
Defenders of this approach pretend that there is no threat to individual liberty. That is patently false. Liberty is eroded wherever the state inserts itself permanently into a person’s life. Persistent scrutiny is a form of soft coercion: knowing that your movements, associations and behaviour are being logged and evaluated by the state changes how you act. A society in which citizens must behave as if they are always being watched is not free; it is merely orderly.
Worse still, this system destroys any real degree of accountability. Decisions that once belonged to identifiable officers will be attributed to the system or the programme. When mistakes occur, as they inevitably will, there will be no discerning human judgement to interrogate the system, as operators will almost certainly defer to the machine in the first instance. Power will diffuse upward into institutions and outward into private sector software developers, while the citizen will be left in some form of legal limbo facing an unchallengeable process. An algorithm cannot be cross-examined or shamed.
The claim that these systems are objective is also dangerous. AI will not discover truth; it will go through past policing data, solidify past errors and enforce them with mathematical certainty. Historical mistakes will become future risk indicators.
Nobody in Government is stating that the rollout of AI is an experiment. Surveillance infrastructure never retreats. Every database, camera and algorithm built for the worst offenders will inexorably become, over time, available for broader use. Today the target is violent or prolific criminals; tomorrow it could be protest organisers or those deemed by the political class to be a problem. We have already seen this with the policing of social media and the use of Non-Crime Hate Incidents. How can the police be trusted with transformational technology such as this?
Efficiency is the final lie. Reduced paperwork, better targeting and smoother processes, even if delivered, would not justify expanding state surveillance. And in any case, during my time in the police, the introduction of new technology never reduced the amount of bureaucracy – it merely transferred it from the page to the screen, and often increased it. Swift injustice is not progress.
Enshrining the use of artificial intelligence across UK law enforcement will abolish any anonymity in the public space and replace it with permanent identifiability. Every journey will become traceable, every gathering recordable, every deviation from the norm potentially suspicious. Yes, this already happens during the course of a police investigation, but that is to establish the movements and behaviours of identifiable suspects, not to generally monitor the entire populace.
This is not policing by consent, as per the original Peelian Principles; it is policing by omnipresence and, unlike watching a Hollywood movie, we won’t be able to walk away if we don’t like it.
Paul Birch is a former police officer and counter-terrorism specialist. You can read his Substack here.