In a White House ceremony last month, President Donald Trump singled out a man few Americans would recognize. “[People] ask me who the hell he is,” Trump said of Sriram Krishnan, his AI adviser. “And yet without him things, certainly on AI, would not function well.” This quiet acknowledgment reveals a fundamental shift in how artificial intelligence policy is being crafted in Washington – not through public debate or congressional hearings, but through the subtle influence of Silicon Valley insiders who’ve mastered the art of political persuasion.
The Connective Tissue Between Worlds
Before his appointment in December 2024, Krishnan was largely unknown outside tech circles – an engineer and venture capitalist who had worked at Microsoft, Facebook, Twitter, and Snap. Today, he serves as what Brad Gerstner, founder of investment firm Altimeter, calls the “connective tissue between Silicon Valley and Washington.” Krishnan’s influence extends across multiple fronts: he authored the administration’s “Woke AI” executive order, helped draft further orders intended to frustrate state-level regulation, and worked on chip export policies toward China.
His approach differs markedly from the more confrontational style of David Sacks, Trump’s AI and crypto “tsar.” As Martin Casado, a senior investor at Andreessen Horowitz, explains: “Sacks is a polemicist, 100 percent… It was a very smart and very shrewd appointment. It allows Sacks to be Sacks because Sriram is so even-handed.” This diplomatic approach has proven effective in navigating a complex political landscape, but it raises questions about whose interests are truly being served.
The Regulatory Battlefield
The Trump administration’s light-touch approach to AI regulation faces significant pushback from multiple directions. In early 2026, California’s SB-53 and New York’s RAISE Act took effect, requiring AI developers to publish risk mitigation plans and report safety incidents, with fines of up to $1 million and $3 million respectively for non-compliance. These state laws target companies with over $500 million in annual revenue, creating what the administration calls a “patchwork” of regulations that could stifle innovation.
Gideon Futerman, special projects associate at the Center for AI Safety, offers perspective: “SB-53’s level of regulation is nothing compared to the dangers, but it’s a worthy first step on transparency and the first enforcement around catastrophic risk in the US. This is where we should have been years ago.” The tension between state and federal approaches reflects deeper divisions about how to balance innovation with safety in an industry racing toward capabilities that could reshape society.
The China Question and National Security
One of the most contentious issues involves chip exports to China. Krishnan and his colleagues have argued for softening export controls, believing that allowing companies like Nvidia to sell older generations of chips will actually hamper China’s efforts to reduce reliance on U.S. technology. But this position faces sharp criticism from within the tech industry itself.
At the World Economic Forum in Davos, Anthropic CEO Dario Amodei – whose company counts Nvidia as a $10 billion investor – stunned attendees by criticizing the administration’s decision. “I think this is crazy,” Amodei said. “It’s a bit like selling nuclear weapons to North Korea and [bragging that] Boeing made the casings.” He warned that exporting high-performance AI chips to China poses significant national security risks, noting that “We are many years ahead of China in terms of our ability to make chips. So I think it would be a big mistake to ship these chips.”
The Economic Reality Check
Beyond the political maneuvering lies a stark economic reality. Workers now take home only 53.8% of America’s economic output – the lowest share since records began in the 1940s, down from around 65% in the 1950s. AI adoption threatens to accelerate that decline, much as the spread of software squeezed labor’s share in the 1990s, raising fundamental questions about who benefits from technological advancement.
Tim O’Reilly, founder of O’Reilly Media, offers a crucial perspective often missing from Silicon Valley’s optimistic narratives: “The narrative from the AI labs is that when they build artificial general intelligence (AGI), it will unlock astonishing productivity and GDP will surge. It sounds compelling, especially if you’re the one building or investing in AI. But an economy isn’t just production. It is production matched to demand, and demand requires broadly distributed purchasing power.”
The Existential Questions
Amodei’s warnings extend beyond economic concerns to existential risks. In a nearly 20,000-word essay, he predicted that powerful AI systems “much more capable than any Nobel Prize winner” could emerge within a few years, bringing risks including bioterrorism, job losses, authoritarian empowerment, and AI overpowering humanity. “Humanity is about to be handed almost unimaginable power,” he wrote, “and it is deeply unclear whether our social, political and technological systems possess the maturity to wield it.”
These concerns contrast sharply with the administration’s focus on deregulation and competition with China. The question becomes: Are we moving too fast in our race for AI supremacy, or not fast enough to maintain our competitive edge?
The Path Forward
Krishnan’s role as a diplomatic bridge between Silicon Valley and Washington represents a new model of tech governance – one that operates through relationships and quiet persuasion rather than public debate. His ability to “peace make,” as Casado describes it, has proven effective in advancing the administration’s agenda. But as AI capabilities advance at breakneck speed, the stakes continue to rise.
The fundamental tension remains: How do we harness AI’s potential while mitigating its risks? How do we maintain American technological leadership without compromising safety? And who gets to decide these questions – elected officials, tech executives, or the quiet diplomats who move between both worlds? As one entrepreneur who has worked with Krishnan observed: “In the tech world, we’re so keen to valorise independent thinking, but in diplomacy no one expects diplomats’ personal views to be well known.” In the high-stakes world of AI policy, this diplomatic approach may prove both its greatest strength and its most significant vulnerability.