Anthropic is expanding its Claude AI chatbot’s ability to control computers the way humans do, rolling out the “Computer Use” feature to Cowork and Code modes for Pro and Max subscribers. The move marks a significant step in AI’s evolution from conversational tool to active digital assistant, but it arrives amid growing scrutiny of AI’s broader societal impact and mounting corporate tensions.
Beyond Chatbots: AI as Digital Operators
The enhanced Computer Use feature lets Claude open documents, browse the web, and operate developer tools by taking screenshots and interacting directly with the interface. Initially available only in the macOS desktop app, with Windows support coming soon, the functionality positions Claude alongside competitors such as OpenAI’s Operator and Microsoft’s Copilot in the race to build AI that can navigate digital environments autonomously.
Anthropic emphasizes that Claude will prioritize faster “Connector” integrations for services like Gmail and Slack before falling back to Computer Use, and says the system is designed to recognize risky tasks and resist prompt injections. Even so, the company warns users to be mindful of sensitive information visible on screen while the feature is operating.
Corporate Tensions and Government Scrutiny
This technical advancement comes as Anthropic faces significant challenges beyond the lab. According to court documents obtained by TechCrunch, the Pentagon designated Anthropic a supply-chain risk on March 3, 2026, despite Under Secretary Emil Michael emailing CEO Dario Amodei on March 4 to say the two sides were “very close” on key issues. Anthropic’s lawsuit argues that the designation violates the First Amendment, characterizing it as retaliation for the company’s ethical stances against using AI for mass surveillance or for lethal autonomous weapons that lack human intervention.
Senator Elizabeth Warren has criticized the Pentagon’s decision, calling it “retaliation” in a statement to TechCrunch. “I am particularly concerned that the DoD is trying to strong-arm American companies into providing the Department with the tools to spy on American citizens and deploy fully autonomous weapons without adequate safeguards,” Warren said. A hearing scheduled for March 24 before Judge Rita Lin will determine whether to grant Anthropic a preliminary injunction.
User Concerns: Hallucinations Top Job Loss Fears
While corporate battles unfold, user experiences reveal different priorities. A Financial Times survey of over 80,000 Claude users across 159 countries found that AI hallucinations (fabricated or inaccurate outputs) are the top concern at 27%, surpassing job-displacement worries at 22%. The study, conducted in 70 languages, showed 32% of users reported increased productivity, with researcher Saffron Huang noting, “The trend is that maybe more lower and middle-income countries are more optimistic than higher-income countries that have more AI exposure.”
However, researchers like Google DeepMind’s Divy Thakkar expressed skepticism about the study’s methodology, citing selection biases and short survey-style questioning. Economist Ilan Strauss of the AI Disclosures Project called it “an excellent piece of work” but cautioned that “its conclusions should be taken with a grain of salt.”
Youth Adoption and Regulatory Responses
AI’s reach extends beyond professional users to younger demographics. A German study by DAK-Gesundheit and UKE found that 20.8% of children aged 10-17 use AI programs like ChatGPT or Gemini several times weekly, with 6.4% using them daily. The study of 1,005 children revealed that some confide in chatbots things they wouldn’t share with friends, raising concerns about emotional dependence on AI systems.
DAK chief executive Andreas Storm called for legislative action, stating, “For a sensible age limit, we now need swift legal regulation before the summer recess.” Germany’s CDU and SPD parties have advocated a social media ban for children under 14, reflecting growing regulatory attention to youth AI use.
Hardware Partnerships and Infrastructure
Behind these software developments lies significant hardware investment. Anthropic’s Claude runs on more than 1 million Amazon Trainium2 chips, part of a $50 billion investment deal between AWS and Anthropic that includes supplying 2 gigawatts of Trainium computing capacity. Amazon’s Trainium chips, now in their third generation, offer up to 50% cost savings compared with traditional cloud servers, according to AWS Director of Engineering Mark Carroll.
Meanwhile, robotics companies are forming strategic partnerships to advance physical AI capabilities. German robotics firm Agile Robots recently partnered with Google DeepMind to implement Gemini Robotics models into industrial robots for sectors including electronics manufacturing and automotive. This follows similar partnerships between Google DeepMind and Boston Dynamics, as well as Neura Robotics’ collaboration with Qualcomm.
Balancing Innovation with Responsibility
As AI systems gain more control over digital and physical environments, companies face increasing pressure to balance innovation with ethical responsibility. Anthropic’s simultaneous technical advancement and legal battles illustrate the complex landscape where AI development intersects with government regulation, corporate ethics, and user concerns.
The expansion of Claude’s Computer Use feature represents more than just another product update: it is part of a broader transformation in which AI systems become active participants in digital workflows. How companies navigate the accompanying ethical, legal, and societal challenges will determine whether these technologies enhance human capability or create new vulnerabilities in an increasingly automated world.