Imagine an assistant that not only schedules your meetings but also creates budgets, transcribes discussions, and monitors your desktop activities to suggest follow-ups. This isn’t science fiction – it’s Salesforce’s ambitious vision for Slack, unveiled this week with 30 new AI features that promise to transform the workplace communication tool into an automated business platform. But as companies rush to integrate AI into their core operations, a growing chorus of experts and data suggests we’re entering a complex era of productivity gains, security trade-offs, and workforce uncertainty.
The Slackbot Transformation
At a San Francisco event, Salesforce CEO Marc Benioff revealed what he called an “incredible journey” since acquiring Slack five years ago, boasting of “two and a half times revenue growth” and about a million businesses using the platform. The centerpiece of this transformation is Slackbot, now equipped with what Salesforce calls “reusable AI-skills” – customizable tasks that can be applied across different scenarios. A user can simply type “create a budget” for an upcoming event, and Slackbot will pull relevant information from company channels and connected apps, then automatically set up meetings with appropriate employees.
Perhaps more significantly, Slackbot now functions as a Model Context Protocol client, connecting to outside services including Salesforce’s Agentforce platform. According to Rob Seaman, Slack’s interim CEO, this allows the agent to “route work or prompt questions to Agentforce or any agent or app in your enterprise” without human intervention. The bot can also transcribe and summarize meetings, and – most controversially – operate outside Slack to monitor desktop activities including “your deals, your conversations, your calendar, and your habits” to make actionable suggestions.
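For readers unfamiliar with the protocol Slackbot now speaks: Model Context Protocol messages are JSON-RPC 2.0 requests, and a client invokes an external tool with a `tools/call` method. A minimal sketch of that message shape (the tool name and arguments here are hypothetical, not anything Slack has published):

```python
import json
from itertools import count

# JSON-RPC requests need unique ids; a simple counter suffices for a sketch.
_ids = count(1)

def mcp_tool_call(tool_name: str, arguments: dict) -> str:
    """Build an MCP-style 'tools/call' JSON-RPC 2.0 request as a JSON string."""
    request = {
        "jsonrpc": "2.0",
        "id": next(_ids),
        "method": "tools/call",
        # 'name' identifies the tool on the server; 'arguments' are its inputs.
        "params": {"name": tool_name, "arguments": arguments},
    }
    return json.dumps(request)

# Hypothetical example: a client asking an agent to draft an event budget.
message = mcp_tool_call("create_budget", {"event": "Q3 offsite"})
```

In practice a client like Slackbot would send such requests over a transport (stdio or HTTP) managed by an MCP SDK rather than hand-building JSON, but the wire format is this simple.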
The Productivity Paradox
While Salesforce positions these features as productivity breakthroughs, recent data reveals a growing disconnect between AI adoption and trust. A Quinnipiac University poll shows that while 51% of Americans use AI for research and other tasks, only 21% trust AI-generated information most or almost all of the time. “The contradiction between use and trust of AI is striking,” says Chetan Jaiswal, a computer science professor at Quinnipiac. “Americans are clearly adopting AI, but they are doing so with deep hesitation, not deep trust.”
This skepticism extends to the workplace, where tech CEOs increasingly cite AI as justification for mass job cuts. Companies including Google, Amazon, Meta, Pinterest, and Atlassian have announced or warned of workforce reductions linked to AI developments. Meta plans to nearly double spending on AI this year while implementing hiring freezes and further job cuts, and Amazon has cut about 30,000 corporate workers since October, partly to offset AI investment costs. Tech investor Terrence Rohan suggests that “pointing to AI makes a better blog post. Or it at least doesn’t make you seem as much the bad guy who just wants to cut people for cost-effectiveness.”
The Counter-Narrative: AI as Job Creator
Not all experts agree with this dystopian view. Erik Brynjolfsson, a Stanford University professor and AI expert, argues that rather than eliminating jobs, AI will transform roles and create new positions. “The real value is defining the right questions,” Brynjolfsson says. “Understanding the problems that need to be solved, defining them in a way that really are useful to people. So those who can identify those opportunities are going to be more valuable than ever before.” He predicts that AI will expand the software profession, with potentially ten times as many people engaging in development through natural language interfaces rather than traditional coding.
This perspective finds support in the growing market for AI verification tools. Qodo, a startup that raised $70 million in Series B funding, develops AI agents for code review, testing, and governance to address the challenge of verifying AI-generated code. Founder Itamar Friedman notes that “95% of developers don’t fully trust AI-generated code,” highlighting the need for human oversight even as automation accelerates.
The Security Trade-Off
Slack’s expansion into desktop monitoring raises significant privacy concerns, even as Seaman insists that privacy protections are “built into this design” and that users can adjust permissions as needed. This tension between functionality and security is playing out across the tech landscape. Proton, the Swiss privacy-focused company, recently launched Proton Workspace as a “private alternative to Google Workspace and Microsoft 365” specifically for businesses that “don’t want their audio, video, and chat data logged or used to train AI.”
Proton CEO Andy Yen explains: “Companies are increasingly worried that their confidential business data is becoming business intelligence for Big Tech and turning to safer alternatives.” This reflects broader concerns about how major tech companies handle user data, particularly as AI systems require vast amounts of information for training and operation.
The Governance Challenge
The debate over AI control extends beyond corporate boardrooms to government agencies. Recently, the US Department of Defense designated Anthropic as a “supply chain risk” amid a contract dispute over the use of its Claude AI model in classified military contexts. The Pentagon sought to renegotiate terms, arguing that only US law should limit military use of the technology, while Anthropic refused to allow its AI to be used for lethal autonomous weapons and mass surveillance. A federal judge in California blocked the designation, calling it “arbitrary and capricious,” highlighting the complex regulatory landscape emerging around powerful AI systems.
The Hardware Frontier
Meanwhile, advances in local AI processing are creating new possibilities and challenges. Ollama, a runtime for running large language models locally, recently introduced support for Apple’s MLX framework, promising significantly faster inference on Macs with Apple Silicon. This development comes as local models gain popularity, with OpenClaw racing past 300,000 stars on GitHub. While local models still lag behind cloud-based alternatives in benchmarks, they offer privacy advantages and are becoming “good enough for some tasks users might normally pay a subscription for,” according to industry observers.
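The privacy appeal is concrete: Ollama serves a REST API on localhost, so prompts and responses never leave the machine. A minimal sketch of calling it, assuming Ollama is running on its default port with a model already pulled (the model name below is illustrative):

```python
import json
import urllib.request

# Ollama's default local generation endpoint; nothing is sent off-device.
OLLAMA_URL = "http://localhost:11434/api/generate"

def build_payload(model: str, prompt: str) -> dict:
    """Assemble the request body for Ollama's /api/generate endpoint."""
    # stream=False asks for one complete JSON response instead of chunks.
    return {"model": model, "prompt": prompt, "stream": False}

def ask_local_model(model: str, prompt: str) -> str:
    """Send a prompt to a locally running model and return its text reply."""
    req = urllib.request.Request(
        OLLAMA_URL,
        data=json.dumps(build_payload(model, prompt)).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

# Usage (requires a running Ollama instance and a pulled model):
#   summary = ask_local_model("llama3", "Summarize today's standup notes.")
```

For a business weighing Proton-style data concerns against Slack-style convenience, this is the trade on offer: weaker benchmark performance in exchange for data that never touches a third-party server.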
The Path Forward
As Slack’s AI makeover demonstrates, we’re entering a period where AI integration is becoming unavoidable for businesses seeking competitive advantage. The question isn’t whether to adopt these technologies, but how to do so responsibly. Companies must balance productivity gains against privacy concerns, workforce impacts, and security requirements. They need to consider not just what AI can do, but what it should do – and who gets to decide.
The coming years will test whether platforms like Slack can transform from communication tools into intelligent business partners without sacrificing user trust or creating new vulnerabilities. As Tamilla Triantoro, a professor at Quinnipiac, observes: “Americans are not rejecting AI outright, but they are sending a warning. Too much uncertainty, too little trust, too little regulation, and too much fear about jobs.” How companies like Salesforce respond to this warning may determine not just their success, but the shape of the workplace for years to come.

