Imagine a world where software can be rewritten in days instead of years, with AI tools generating code that’s faster, more accurate, and free from the licensing constraints of the original. That world is here, and it’s creating a legal and ethical storm in the open source community. The recent controversy over the chardet library rewrite using Claude Code has exposed fundamental questions about what happens when artificial intelligence meets software licensing.
The chardet Controversy: A Case Study in AI Licensing
Last week, developer Dan Blanchard released version 7.0 of chardet, a popular Python library for detecting character encoding. What made this release extraordinary wasn’t just the 48x performance boost or improved accuracy – it was how it was created. Blanchard used Claude Code to completely rewrite the library in just five days, moving it from an LGPL license to the more permissive MIT license.
The original creator, Mark Pilgrim, immediately objected on GitHub, arguing this amounted to an illegitimate relicensing. “Their claim that it is a ‘complete rewrite’ is irrelevant, since they had ample exposure to the originally licensed code,” Pilgrim wrote. “Adding a fancy code generator into the mix does not somehow grant them any additional rights.”
The “AI Clean Room” Defense
Blanchard’s defense rests on what he calls an “AI clean room” process. He started with an empty repository, wrote design documents and requirements, and explicitly instructed Claude not to base anything on LGPL/GPL-licensed code. JPlag similarity statistics show only 1.29% structural similarity between version 7.0 and previous versions, compared to 80% similarity between earlier human-written versions.
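To make the similarity numbers concrete: tools like JPlag compare tokenized source files and report a structural-overlap percentage. The snippet below is a loose stdlib analogue using difflib, not JPlag itself (JPlag uses greedy string tiling over language-aware token streams); the sample strings are invented for illustration.

```python
# Rough stdlib analogue of structural similarity scoring. JPlag tokenizes
# source in a language-aware way and applies greedy string tiling; here we
# approximate the idea with difflib over whitespace-split tokens.
import difflib

def similarity(a: str, b: str) -> float:
    """Return a 0.0-1.0 similarity ratio between two code snippets."""
    return difflib.SequenceMatcher(None, a.split(), b.split()).ratio()

old = "def detect(data): state = scan(data); return best_guess(state)"
new = "class UniversalDetector: def feed(self, chunk): self.update(chunk)"

print(round(similarity(old, old), 2))  # identical code scores 1.0
print(round(similarity(old, new), 2))  # a genuine rewrite scores near 0.0
```

A 1.29% score between releases, versus roughly 80% between prior releases, is the kind of gap Blanchard cites as evidence that nothing structural survived the rewrite.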
“No file in the 7.0.0 codebase structurally resembles any file from any prior release,” Blanchard writes. “This is not a case of ‘rewrote most of it but carried some files forward.’ Nothing was carried forward.”
The Broader Implications for Open Source
This isn’t just about one library. The chardet controversy reveals how AI is fundamentally changing software development economics. As open source evangelist Bruce Perens told The Register: “The entire economics of software development are dead, gone, over, kaput!”
But the reality is more nuanced. AI presents both unprecedented opportunities and significant challenges for open source. On one hand, tools like Anthropic’s Claude have helped identify high-severity bugs in Firefox more efficiently than traditional methods. Mozilla reported that AI found more critical bugs in two weeks than typically surfaced in two months.
The Dark Side: AI-Generated Noise and Burnout
On the other hand, AI is creating new problems. Daniel Stenberg, creator of cURL, describes how AI-generated security reports have overwhelmed his project. “The floodgates are open,” he says. “The rate has gone up too now; it’s more like one in 20 or one in 30 that is accurate. This rise has turned security bug report triage into ‘terror reporting.’”
cURL was forced to close its security bounty program due to the flood of low-quality AI-generated reports. Similarly, FFmpeg maintainers discovered Google’s AI had found minor security problems but wasn’t fixing them or paying for fixes – leaving volunteers to clean up the mess.
Quality and Reliability Concerns
Research suggests that developers using AI coding assistants can actually be 19% slower, because they spend time revisiting and fixing AI-generated code. Worse yet, AI-generated code tends to contain 1.7 times as many issues as human-written code. These quality concerns have real-world consequences.
Amazon recently implemented new policies requiring senior engineers to sign off on AI-assisted changes after a series of outages linked to AI coding tools. The company’s ecommerce business experienced multiple incidents characterized by “high blast radius” and “Gen-AI assisted changes,” including a 13-hour interruption to AWS’s cost calculator in December 2025.
The Legal Gray Area
The legal status of AI-generated code remains largely unsettled. Courts have ruled that AI cannot be named as the inventor on a patent or hold the copyright to a work of art, but software licensing presents unique challenges. As Free Software Foundation Executive Director Zoë Kooyman bluntly told The Register: “There is nothing ‘clean’ about a Large Language Model which has ingested the code it is being asked to reimplement.”
Yet others see this differently. Open source developer Armin Ronacher argues in a blog post that “if you throw away all code and start from scratch, even if the end result behaves the same, it’s a new ship.”
Finding the Right Balance
Linus Torvalds, creator of Linux and Git, offers a pragmatic perspective: “I’m a huge believer in AI as a tool. I’m much less interested in AI for writing code and far more excited about AI as the tool to help maintain code, including automated patch checking and code review before changes ever reach me.”
This balanced approach recognizes AI’s potential while acknowledging its limitations. As Italian coder Salvatore “antirez” Sanfilippo wrote on his blog: “There is a more fundamental truth here: the nature of software changed; the reimplementations under different licenses are just an instance of how such nature was transformed forever. Instead of combatting each manifestation of automatic programming, I believe it is better to build a new mental model, and adapt.”
The Path Forward
The chardet controversy isn’t just about one library or one license – it’s about how we adapt our legal frameworks, development practices, and community norms to a world where AI can rewrite software in days. As Stormy Peters, AWS Head of Open Source Strategy, observed: “What has actually happened is that people are submitting all of the slop that they’re generating out of AI.”
The solution lies in finding the right balance between leveraging AI’s capabilities and maintaining human oversight, quality standards, and legal clarity. As the open source community grapples with these questions, one thing is clear: the rules of software development are being rewritten, and we all need to understand what that means for the future of technology.