Courts Worry About AI While Corrections Can't Even Count Community Sentences Properly
As the justice system frets over deepfakes, Corrections data shows 5,610 community sentences filed under 'inadequate data available' in 2024, the highest figure in 25 years. That's one in every twenty community sentences with no proper classification.
The Ministry of Justice is worrying about AI and deepfakes disrupting the courts. Meanwhile, the Department of Corrections can't even properly classify what kind of community sentences it's handing out.
In 2024, 5,610 community sentences were filed under the category 'inadequate data available'. That's not a rounding error. That's the highest number in 25 years. You'd have to go back to 1999 to find anything comparable. (Source: Department of Corrections, community-sentences)
This matters because community sentences are supposed to be an alternative to prison. Home detention. Community work. Supervision. Intensive supervision. Each type has different conditions, different costs, different success rates. But one in every twenty community sentences in 2024 has been dumped into a catch-all category that tells you nothing.
The trajectory is alarming. In 2018, just 309 sentences fell into this category. By 2019, the figure had dropped to 90, then 93 in 2020. It stayed manageable until 2022, when it suddenly jumped to 1,977. Two years later, it has nearly tripled to 5,610.
What changed? The justice system got busier, sure. Court backlogs grew. But this isn't about volume. This is about data quality collapsing at the exact moment we need it most. Politicians argue constantly about whether community sentences work, whether judges are too soft, whether alternatives to prison reduce reoffending. How are we supposed to answer any of those questions when one in twenty sentences isn't even being categorised properly?
This isn't a paperwork problem. It's a transparency problem. When Corrections can't tell you what kind of sentence someone received, you can't track outcomes. You can't measure success rates. You can't compare costs. You can't hold anyone accountable.
The government wants to be taken seriously on justice reform. It wants data-driven policy. But the data is a mess, and it's getting messier.
Courts fretting about AI fakery is a legitimate concern. But before we worry about technology making evidence unreliable, maybe we should worry about our own systems failing to record basic facts about sentences that are already being served.
5,610 sentences. No proper record of what they actually are. That's not a future problem. That's happening right now.
This story was generated by AI from publicly available government data. Verify figures from the original source before citing.