Trump’s Big, Beautiful L.L.M.
The president’s grandly displayed A.I. Action Plan is long on frameworks but notably short on policy—a Rorschach test for fans and critics alike, revealing what researchers, scientists, and Big Tech companies want this young trillion-dollar industry to become (or avoid) as it matures at warp speed.
Among the initial flurry of executive actions that President Trump issued in late January, upon his return to the Oval Office, was an order rescinding Joe Biden’s 2023 executive order to ensure the “safe, secure” development of artificial intelligence. In its place, Trump called for the removal of “barriers to American leadership” in A.I. and promised his own set of rules. Now, after six months of waiting—and the publication of over 10,000 public comments—Trump’s so-called “A.I. Action Plan” has arrived. On Wednesday, the president delivered his big A.I. speech and signed a trio of executive orders at an event co-hosted by the Hill and Valley Forum and the All-In guys (David Sacks is the president’s A.I. czar, after all), accompanied by the publication of a 28-page memo, titled “Winning the Race.”
The plan, perhaps unsurprisingly, is loaded with Silicon Valley talking points and closely mirrors OpenAI’s own proposals: less regulation (specifically, a preemption of state regulation); fewer export controls for allies (and more restrictions for adversaries); a streamlined permitting process for data centers; and an “ambitious” government adoption strategy, among other things. On the copyright front, Trump sent jitters through the entertainment industry by declaring in his speech that A.I. companies shouldn’t “have to make deals with every content provider,” a position notably absent from the plan itself.
It’s a big win for the so-called “accelerationists” in the industry. Among its key pillars, the plan calls for the U.S. to establish a “dynamic, ‘try-first’ culture for A.I. across American industry,” especially in “critical” sectors, such as healthcare, and for A.I. to be adopted across all federal agencies, particularly the Department of Defense, which recently signed separate $200 million defense contracts with xAI, OpenAI, Anthropic, and Google. To support all that growth, the plan calls for a stronger electric grid, more U.S.-made semiconductor chips, and higher security standards for data centers. The streamlining of the permitting process, alongside the allocation of federal lands for data centers, was cemented in one of the three executive orders Trump signed last night. (Another order relates to the export of A.I. technology to allies.)
Dr. Sarah Myers West, the co-executive director of the independent research institute AI Now, told me that the plan “reads like the wish list for Silicon Valley’s big A.I. firms.” OpenAI didn’t return a request for comment on the plan, but the company seems suitably pleased with the result—at least according to a LinkedIn post from chief policy officer Chris Lehane, the former Democratic consultant turned tech regulatory rainmaker, declaring that Trump’s A.I. plan will “fuel growth, opportunity, and innovation for everyone, everywhere.”
The Road to…
In fact, the A.I. Action Plan—which contains no clear mandates or deadlines—is really more of a policy recommendation “road map.” It vaguely states that A.I. companies “must be unencumbered by bureaucratic red tape” at both the state and federal levels, and that the federal government should “limit” A.I.-related funding to states with “burdensome” A.I. regulations. The definition of “burdensome,” of course, is unclear, especially since the plan also indicates that the federal government shouldn’t interfere with states’ rights to pass “prudent” A.I. legislation.
Naturally, criticism of the plan rolled in from a number of fronts. Dr. Myers West expressed particular concern about the plan’s potential to revive a backdoor version of federal preemption of state A.I. laws, a concern echoed by Jim Steyer’s Common Sense Media and a coalition of dozens of other organizations. She added that the push for rapid A.I. adoption in the military “is a really dangerous stance, because there are inherent vulnerabilities with these systems that ultimately can undermine the security of the very same national security infrastructure.”
The Climate Justice Alliance, meanwhile, said in a statement that “we need an A.I. plan that protects people and the planet, not a plan that accelerates extraction.” That’s in line with a recent open letter from the AI Now Institute—signed by dozens of other organizations, including the Climate Justice Alliance—that called for the development of a so-called People’s A.I. Action Plan. Others raised concerns about the plan’s efforts to prevent so-called “woke” A.I. models from operating within the federal government. One section of the plan, focused on free speech and the advancement of “American values” within A.I. models, called for the National Institute of Standards and Technology’s A.I. Risk Management Framework to “eliminate references to misinformation; diversity, equity, and inclusion; and climate change.” (Naturally…) An element of this was solidified in Trump’s third executive order of the evening, which called for “ideological neutrality,” though it’s hard to know what the impact of that would be, or how it would be quantified.
Other researchers were cautiously optimistic. Charlie Bullock, a senior research fellow at the Institute for Law & AI, is particularly excited about policy recommendations that call for federal evaluations of national security risks inherent in frontier models, advancements in the science of A.I. interpretability, and the general construction of an A.I. evaluation ecosystem. He also pointed to investments in biosecurity, as well as in the physical security and cybersecurity of A.I. hardware and software. Amazon, IBM, Meta, Hugging Face, and Google all welcomed the plan, as did the Chamber of Commerce and the Business Software Alliance.
Dr. Hamid Ekbia, the director of Syracuse University’s Autonomous Systems Policy Institute, told me he was both impressed and surprised by the breadth of the Action Plan’s contents. “Given the circumstances, this exceeded my expectations,” he said. In particular, he pointed to recommendations to create training programs that help workers integrate A.I., and to advance A.I.-enabled scientific innovation.
But he also raised plenty of flags—surrounding environmental impacts and the possible preemption of state A.I. laws, of course, but also the fact that recommending more than 100 policy actions would place an extraordinary burden on government agencies that might not be equipped to implement them. N.I.S.T., for instance, was assigned 10 policy directives by the plan; the National Science Foundation was assigned 13; and the Center for A.I. Standards and Innovation was assigned 16. Meanwhile, the Trump administration has proposed a $325 million cut to N.I.S.T.’s discretionary budget, and a 55 percent, multibillion-dollar reduction to the N.S.F.’s budget. Perhaps the left hand had not spoken to the right.
Appetite for Regulation
The leitmotif of Trump’s plan and speech—descended from Silicon Valley and often echoed by the tech dudes vying for the president’s attention—is the belief that the only way to win the A.I. arms race is through the so-called freedom to innovate. But not only is it unclear what winning really means—discovering new physics without praising Hitler?—it’s also worth noting that this approach is markedly different from the strategy of our perceived adversary in this contest.
In China, strong, clear regulation has become a key component of its plans for A.I. leadership. “I don’t buy that the only way for us to compete is to do so in an unguarded, freewheeling sort of way,” said NYU’s Nick Reese, who served as the first director of emerging technology policy at the Department of Homeland Security. “Nothing in the world is as simple as that argument: If you just flip one switch—no regulation—then we win.” (He’s got a point.)
However, there’s at least some appetite in the federal government for increased regulation, as highlighted by a recent bipartisan legislative proposal that would certify creators’ rights to their copyrighted work. The big question, Reese told me, is “whether enough congressional members feel as though they can vote for something that is very clearly in conflict with a White House priority.”
That remains an open question for state regulators, as well. Amina Fazlullah, the head of tech advocacy policy for Common Sense Media, finds it “very possible that states will be worried that existing federal funding will be deemed as being related to supporting A.I. innovation,” which “could have a chilling effect on states” as far as regulation is concerned.
In the end, though, the vast majority of Americans want to see A.I. regulated. “We’re not giving up, that’s for sure,” Myers West told me. “We don’t have to accept this trajectory for A.I. as being inevitable, and there’s a lot we can do to reshape it in the broader public interest. The path forward is for all of us to work together to articulate this alternative vision for A.I. that’s going to benefit us all.”