Imagine a world where every company claims to be “AI-enabled,” but few can prove it. This isn’t a dystopian fiction – it’s today’s business reality, where the gap between AI hype and actual capability is widening into a chasm that could swallow investor confidence and consumer trust. The recent revelation about restructuring firm Coots & Boots serves as a stark warning: when a company’s “lead for Marketing and Corporate Communications” turns out to be 97% likely to be AI-generated, what does that say about the authenticity of their entire AI transformation narrative?
Financial Times Alphaville discovered that Emilia Carrière – described on C&B’s website as bringing “calm professionalism and sharp editorial instinct” with interests in Russian history and fine wine – appears to be a digital phantom. An AI detection tool put the odds of her picture being computer-generated at 97%, and no trace of her exists online. When contacted, director Duncan Coutts vaguely referenced “investment in technology” but refused to confirm whether this included their fictional spokesperson. This isn’t just about one firm’s questionable ethics; it’s a symptom of a broader problem where AI is becoming a marketing buzzword rather than a genuine capability.
The Confidence Problem: When AI Gets It Wrong
While companies create fake AI personas, real AI systems are struggling with basic accuracy. A WIRED investigation found ChatGPT regularly inserting incorrect products when asked what their reviewers recommend. In one test, the chatbot replaced WIRED’s actual top TV pick with a completely different model, then admitted: “I took WIRED’s actual top pick and replaced it with a more generic ‘similar category’ option. That’s not faithful to what you asked.” This pattern repeated across headphones, laptops, and other categories, with ChatGPT sometimes recommending products that haven’t even been tested yet.
“Large language model hallucinations make everything harder, especially for journalists,” says WIRED’s headphone expert Ryan Waniata. “We’re trying to do good work, and when it’s not being appropriated or improperly attributed, it’s being misquoted or incorrectly incorporated into search queries.” The irony is palpable: while companies use AI to create fake human representatives, AI systems struggle to accurately represent actual human expertise.
The Productivity Paradox: AI’s Mixed Impact on Jobs
This accuracy gap matters because businesses are making billion-dollar decisions based on AI’s perceived capabilities. Tech giants including Google, Amazon, Meta, Pinterest, and Atlassian have announced or warned of workforce reductions linked to AI developments, with Amazon cutting about 30,000 corporate workers since October partly to offset AI investment costs. Meta plans to nearly double spending on AI this year while implementing hiring freezes and further job cuts.
“I think that 2026 is going to be the year that AI starts to dramatically change the way that we work,” says Meta CEO Mark Zuckerberg. Block CEO Jack Dorsey echoes this: “Intelligence tools have changed what it means to build and run a company… A significantly smaller team, using the tools we’re building, can do more and do it better.”
But is this narrative entirely accurate? Some experts question whether AI is a genuine driver or a convenient narrative to mask cost-cutting and shareholder pressure. “Pointing to AI makes a better blog post,” says tech investor Terrence Rohan. “Or it at least doesn’t make you seem as much the bad guy who just wants to cut people for cost-effectiveness.”
The Counter-Narrative: AI as Job Creator, Not Destroyer
Not everyone sees AI as a job apocalypse. Stanford University professor Erik Brynjolfsson argues that rather than eliminating jobs, AI will transform roles, creating new positions like “chief question officer” and “agent fleet manager.” “The real value is defining the right questions,” says Brynjolfsson. “Understanding the problems that need to be solved, defining them in a way that really are useful to people. So those who can identify those opportunities are going to be more valuable than ever before.”
He emphasizes that AI acts as a complement to human skills, enhancing productivity and expanding the software development field by enabling more people to create applications through natural language. “In some cases, it does replace what they’re doing,” Brynjolfsson acknowledges. “But at the same time, it helps people be twice or even 10 times more productive.”
The Specialization Gap: Where AI Still Fails
Even as AI promises to transform industries, it struggles with specialized tasks. Financial Times Alphaville ran its weekly charts quiz, pitting three AI models (Claude, ChatGPT, and Gemini) against human participants in identifying financial data series from charts. The results were telling: eleven human participants correctly identified Fenix International Ltd employee data, and almost 90% correctly identified Microsoft’s dividend-per-share chart, while none of the AI models correctly identified any of the three.
This specialization gap suggests that while AI might handle general tasks, it still falls short in domain-specific expertise – exactly the kind of work that restructuring firms like Coots & Boots claim to be enhancing with AI.
The Investment Reality: Billions Pouring Into Unproven Systems
Despite these limitations, investment continues at a staggering pace. Venture capitalists are pouring hundreds of millions into AI satellite startups like Starcloud and Aetherflux, which plan to launch AI data centers into space. Starcloud raised $170 million at a $1.1 billion valuation, while Aetherflux is in talks to raise $300 million at a $2 billion valuation. “By moving AI compute to space, we unlock access to unlimited solar power and completely remove the energy bottleneck,” says Starcloud’s co-founder and CEO Philip Johnston.
Meanwhile, French AI startup Mistral has raised $830 million in debt financing to build Nvidia-powered data centers across Europe, aiming to provide sovereign AI alternatives to US tech giants. “Scaling our infrastructure in Europe is critical to empower our customers and to ensure AI innovation and autonomy remain at the heart of Europe,” says CEO Arthur Mensch.
The Path Forward: Authenticity Over Hype
So what should businesses do in this landscape of exaggerated claims and genuine potential? First, they need to distinguish between AI as a marketing tool and AI as an operational capability. The Coots & Boots example shows how easily the line can be crossed when companies prioritize perception over reality.
Second, businesses must recognize AI’s current limitations. The WIRED tests demonstrate that even sophisticated systems like ChatGPT can’t reliably handle tasks requiring up-to-date, verified information. The Financial Times quiz shows AI struggling with specialized financial analysis. These aren’t reasons to abandon AI, but they are reasons to implement it thoughtfully, with human oversight.
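What does “human oversight” look like in practice? One lightweight pattern is to treat every model-generated claim as unverified until it matches a source of record. The sketch below is a hypothetical illustration of that idea, not anything WIRED or ChatGPT actually runs: the catalog entries and function names are invented, and a real system would pull the verified list from an editorial database rather than a hard-coded dictionary.

```python
# Hypothetical guardrail: cross-check a model-generated product pick
# against an editorially verified catalog before anything is published.
# Catalog entries below are placeholders, not real review picks.

VERIFIED_PICKS = {
    "tvs": {"Acme OLED X5"},
    "headphones": {"AudioCo Pro 2"},
}

def review_model_output(category: str, claimed_pick: str) -> dict:
    """Route a model's claimed recommendation: publish only if it
    matches the verified catalog; otherwise escalate to a human."""
    verified = claimed_pick in VERIFIED_PICKS.get(category, set())
    return {
        "category": category,
        "claimed_pick": claimed_pick,
        "verified": verified,
        "action": "publish" if verified else "route to human editor",
    }

# A match passes through; a hallucinated substitute gets flagged.
print(review_model_output("tvs", "Acme OLED X5"))
print(review_model_output("tvs", "Generic Similar TV 900"))
```

The point is not the five lines of lookup logic but the workflow: the model never gets the last word on a factual claim, and anything it invents lands in front of a person instead of a reader.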
Finally, companies should focus on how AI complements human workers rather than simply replacing them. As Brynjolfsson notes, “The worldwide software developer population is expected to expand rapidly, not contract.” The most successful implementations will be those that enhance human capabilities rather than attempting to automate everything.
The lesson from Coots & Boots’ fake spokesperson isn’t just about one firm’s questionable ethics. It’s about an industry-wide tendency to overpromise and underdeliver on AI capabilities. As businesses navigate this complex landscape, they would do well to remember that authenticity matters – whether in human representatives or artificial intelligence. The companies that succeed won’t be those with the most impressive AI marketing, but those with the most genuine AI capabilities, implemented thoughtfully and transparently.

