
AI Startup Nota Stole 70+ Stories From Journalists It Claimed to Save

Zara Mitchell · Apr 6, 2026 · 5 min read
Engine Score 7/10 — Important

Nota, an AI startup that positioned itself as a cure for America’s news deserts, plagiarized more than 70 stories from local journalists — including from newsrooms whose parent companies were paying Nota for the service — according to a joint investigation by Poynter and Axios published in April 2026. While Nota was lifting reporters’ copy word-for-word, its platform displayed a donation button asking readers to help “strengthen local journalism.”

The exposure collapsed not just one company’s credibility but a broader industry argument: that AI startups can be trusted to support journalism infrastructure without oversight or content audits.

What Nota Promised Local News Publishers

Nota marketed itself as an AI-powered solution to the local journalism crisis — a platform capable of auto-generating coverage for underserved communities that had lost their local outlets. The pitch was credible enough to attract major media clients. Nexstar Media Group, one of the largest local television station operators in the United States with more than 200 stations, signed a deal with Nota worth approximately $600,000.

The underlying need is real. More than 3,200 U.S. newspapers have closed since 2005, according to Northwestern University’s Medill School of Journalism, leaving millions of Americans in communities with diminished or no local news coverage. Nota’s pitch — AI-generated hyperlocal news to fill that void — aligned with genuine industry anxiety.

What Nota delivered was neither AI-generated nor original. It was plagiarism with a product wrapper.

Nota AI News Plagiarism: 70+ Stories, No Attribution

Poynter’s investigation identified more than 70 stories on Nota’s platform that had been copied — in whole or in substantial part — from working journalists at local outlets. The stories were republished without attribution, without links to original sources, and under Nota’s own branding. Several reporters only discovered the theft after readers or colleagues flagged the stolen content.

The plagiarized content was not limited to small independent outlets. Some stolen stories came directly from newsrooms owned by Nexstar — meaning Nota’s paying client was simultaneously being robbed by the vendor it had paid to produce journalism. The Axios report documented additional examples beyond the initial Poynter count, confirming the pattern was systematic rather than incidental.

Nota was not making editorial errors. It was running a content extraction operation dressed as a journalism platform.

The Nexstar Deal: $600,000 to Steal From Yourself

The Nexstar situation warrants specific attention because it demolishes Nota’s entire value proposition in a single example. Nexstar paid approximately $600,000 — presumably for AI-generated local coverage to supplement its 200-station network — and received, in part, its own journalists’ work stripped of credit and republished through a third-party platform.

The arrangement is structurally equivalent to paying a contractor to furnish your office, then discovering the contractor was stealing furniture from your own stockroom. The vendor’s service was built on the client’s own inventory.

The AI industry’s rush to monetize “local news solutions” created exactly this vulnerability. Clients accepted Nota’s pitch without auditing content provenance — a failure that mirrors uncritical deal-making seen across AI’s media partnerships. For context on how AI companies are signing major content deals with limited scrutiny from affected stakeholders, see MegaOne AI’s coverage of how OpenAI’s $1 billion Disney deal blindsided content owners.

The Donation Page: Fundraising on Stolen Work

Nota’s reader donation mechanism is the clearest indicator of the company’s bad faith. While the platform actively plagiarized journalists’ reporting, it solicited contributions from readers under the explicit framing of “strengthening local journalism.” Readers who donated believed they were funding original community reporting. They were financing a content theft operation.

This is not peripheral context — it is the central moral failure of the Nota story. The company did not merely fail to deliver on its promises. It weaponized the emotional framing of local journalism advocacy to extract money from readers while systematically harming the journalists it claimed to support. That framing — “save local news” — has appeared in multiple AI startup pitches across 2025 and 2026.

The growing Humans First movement has been documenting exactly this pattern: AI platforms that position themselves as allies of human creators while extracting value from their work without compensation or credit.

How Poynter and Axios Exposed the Scheme

The Poynter Institute, which has tracked AI’s impact on journalism since at least 2023, led the primary investigation. Axios provided corroborating reporting with additional documented examples. The methodology involved cross-referencing Nota’s published content against archived, bylined work at local outlets — the same technique used by academic plagiarism detection tools.

What made detection possible was Nota’s apparent lack of effort to disguise the theft. Sophisticated plagiarism operations typically paraphrase heavily to defeat detection; Nota reproduced copy closely enough that reporters recognized their own sentences intact. The 70+ figure cited in reports is likely a floor, not a ceiling — investigators noted the pattern was ongoing at time of publication.
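The cross-referencing approach described above can be sketched in a few lines: flag a candidate article when it shares long word-for-word runs (n-gram "shingles") with an archived, bylined original. This is an illustrative simplification, not Poynter's actual tooling; the function names and the 8-word window are assumptions.

```python
def shingles(text: str, n: int = 8) -> set[tuple[str, ...]]:
    """Return the set of n-word sequences (shingles) in the text."""
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def verbatim_overlap(original: str, candidate: str, n: int = 8) -> float:
    """Fraction of the candidate's n-grams that appear verbatim in the original."""
    cand = shingles(candidate, n)
    if not cand:
        return 0.0
    return len(cand & shingles(original, n)) / len(cand)

# Hypothetical example: a lightly trimmed copy still shares most 8-word runs.
original = ("The city council voted 5-2 on Tuesday to approve "
            "the new zoning plan for the riverfront district")
copied = ("The city council voted 5-2 on Tuesday to approve "
          "the new zoning plan despite objections")
print(f"{verbatim_overlap(original, copied):.0%} verbatim")  # prints "75% verbatim"
```

Heavy paraphrasing defeats this kind of check, which is why near-verbatim copying, as reporters described in Nota's case, is so readily detectable.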

As of the investigation’s publication, Nota had not provided a substantive public response to the specific plagiarism allegations documented by Poynter and Axios.

The Legal Exposure

U.S. copyright law provides for statutory damages of up to $150,000 per work for willful infringement under 17 U.S.C. § 504(c)(2). At the documented minimum of 70 works, that represents potential liability exceeding $10.5 million — before legal fees, before additional works identified through discovery, and before any civil suits from affected outlets or their parent companies.
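The liability ceiling is straightforward arithmetic over the statutory cap; a quick back-of-envelope sketch using the figures above (70 documented works, $150,000 per-work cap for willful infringement):

```python
works = 70             # documented minimum number of plagiarized stories
max_per_work = 150_000  # statutory cap per work, 17 U.S.C. § 504(c)(2)

ceiling = works * max_per_work
print(f"${ceiling:,}")  # prints "$10,500,000"
```

Actual awards are set by courts and are often far below the cap, so this is an upper bound on statutory damages alone, before fees or civil claims.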

Publishers and wire services have pursued aggressive copyright litigation against AI companies since 2023, with the New York Times’ suit against OpenAI citing hundreds of thousands of allegedly reproduced articles. Nota’s documented conduct — verbatim copying, commercial use, and misleading attribution — meets the clearest standard for willful infringement available under current law.

What AI in Journalism Should Actually Look Like

The Nota scandal does not argue against AI’s role in journalism — it argues against one specific approach: replacing human journalism with stolen human journalism, rebranded. Legitimate uses of AI in newsrooms include transcription, data analysis, translation, public records summarization, and distribution tooling. None of these applications require lifting original reported work from other outlets.

MegaOne AI tracks 139+ AI tools across 17 categories, and content generation tools consistently show the widest gap between marketing claims and transparent content sourcing. The AI flood into weather apps demonstrated how quickly a “helpful AI” pitch can obscure questions about where underlying data actually originates. The same scrutiny applies to any AI product built on text.

Publishers evaluating AI journalism tools should require three things before signing: a transparent content provenance audit, contractual warranties against plagiarism with financial penalties, and clear reader disclosure when AI contributed to content production. The Nexstar deal — $600,000 paid to a vendor that was stealing from Nexstar’s own newsrooms — is the documented cost of skipping that due diligence. Nota promised to fill America’s journalism gaps. It filled them with other journalists’ work, with the bylines removed.
