ANALYSIS

UK School Uses AI to Remove 200 Books Including Orwell’s 1984 From Library

megaone_admin · Mar 27, 2026 · 2 min read
Engine Score 7/10 — Important

This story highlights significant ethical concerns regarding AI's application in education and content moderation, prompting important discussions for various stakeholders. Its recency and potential for widespread impact make it a noteworthy development in the AI landscape.

Editorial illustration for: UK School Uses AI to Remove 200 Books Including Orwell's 1984 From Library

A school in Greater Manchester, UK, used an AI classification tool to remove nearly 200 books from its library, including George Orwell’s 1984, Stephenie Meyer’s Twilight, Dan Brown novels, works by J.K. Rowling, and Michelle Obama’s memoir. The AI system deemed the books inappropriate, triggering a purge that has drawn comparisons to the very censorship that Orwell’s novel warns against.

The removal reportedly began in November 2025 after the headteacher at Lowry Academy demanded the removal of Laura Bates’ non-fiction book Men Who Hate Women, which examines online misogyny and extremism. The school librarian, who objected to the decision, was placed under investigation and subsequently signed off work due to stress. The library was temporarily closed as a safeguarding measure.

Index on Censorship, a UK organization that campaigns against censorship worldwide, reported on the incident, raising concerns about the use of AI tools to make content moderation decisions in educational settings. Critics argue that outsourcing moral judgments about literature to an automated system allows institutions to launder accountability for censorship: "the AI made the decision" becomes a shield against scrutiny of the humans who configured and deployed it.

The irony of an AI system removing 1984 — a novel fundamentally about surveillance, censorship, and the control of information — has generated significant public commentary. The removal of books covering diverse topics from dystopian fiction to political memoir to young adult romance suggests either an aggressively configured filtering system or a tool that classifies broadly without understanding context.

Students described the library as a safe place, particularly LGBTQ+ and neurodivergent pupils who used it as a quiet refuge. The closure and book purge took away both the physical space and the diverse perspectives that made it valuable to these students. Whether the books will be restored depends on an ongoing review process that the school has not made public.

The incident joins a growing pattern of AI-assisted content moderation decisions in schools, libraries, and public institutions. When AI tools designed for one context — such as filtering explicit web content — are applied to curating library collections, the results can be crude and counterproductive. The question is not whether schools should curate their libraries, but whether delegating that curation to AI classification systems produces outcomes that anyone would defend when examined individually.



MegaOne AI Editorial Team

MegaOne AI monitors 200+ sources daily to identify and score the most important AI developments. Every story is fact-checked, linked to primary sources, and rated using our six-factor Engine Score methodology.
