The White House AI Action Plan
Summary
Mark Brakel discusses the White House AI Action Plan and critiques major AI companies' lobbying tactics in DC, recommending consistency between companies' public safety rhetoric and their lobbying activities.
Session Transcript
Hi everyone, my name is Mark Brakel, and I'm the policy director at the Future of Life Institute. With that, turning to the US AI Action Plan, which is one of the things FLI works on. A quick introduction: by the 22nd of July, President Trump is expected to announce an AI Action Plan.
And this won't be the be-all and end-all of US AI policy, but it will set the general tone and direction. In advance of this, the White House has run a public consultation to which over 8,000 people and entities have submitted comments.
This is my personal, biased perspective, but I feel a serious action plan would contain some or all of the following things: Nominating a clear government entity that's responsible for oversight—this is actually the one thing that almost all of those 8,000 submissions agreed upon, that NIST, the National Institute of Standards and Technology, could be a pretty good entity for oversight. Mandatory testing for chemical, biological, radiological, and nuclear risk. Monitoring of impacts on the labor market—Anthropic's contribution has some really great stuff on that.
Some measures on chip governance and compute governance. Tackling the issue of superhuman manipulation. A moratorium on self-replication and self-improvement. A shutdown mechanism for all models, and AI incident reporting. This is quite the wish list, obviously. But given I only have three minutes remaining, that's not what I wanted to focus on today.
What I wanted to briefly highlight to this community and this group is what the corporate lobbying teams of DeepMind, Anthropic, and Microsoft are focusing on. Because even if you work at one of those companies on the technical side, you might not be fully aware of what the government relations teams are up to on a day-to-day basis.
Arguably, they're deploying three main strategies in DC. One of them is to distract. For example, OpenAI has made a lot of noise around a tax credit for training new data center heating, ventilation, and air conditioning engineers, which clearly is an important issue in AI policy, but maybe not the most critical one the world is facing right now.
And there's a lot of emphasis by the companies on the uptake of AI by the US government itself—do civil servants use ChatGPT or not? Also an important issue, but maybe not the most critical one. So, distraction is one key tactic the companies are deploying.
The second main tactic is exaggeration. The narrative a lot of the companies deploy in DC is basically that any government action by the United States amounts to surrender to China. You see that explicitly in hearings, but you also see it in essays like Dario Amodei's "Machines of Loving Grace," which has this undertone of: if we do anything, then China will win.
And I think the final and maybe most important tactic that the companies deploy is that of undermining any possible state action. They all talk about a fragmented regulatory environment. So, if you do a search across these 8,000 entries and you look at all the corporate entries, a lot of them will reference the risk of a fragmented regulatory environment.
And that is code for "states might take action on their own," like we saw with SB-1047 in California or other state bills that take meaningful action. The companies are trying to make sure there's federal preemption, which would undermine the ability of states to act independently.
I'm not arguing that any of this is surprising. The companies have little other choice, given that they act in the interests of their shareholders, so everyone is behaving as expected. But as an AI community, we should be very clear-eyed and demand some level of seriousness from the companies: if they talk in DC about the need to train more ventilation engineers, and in podcasts talk about the massive risks this technology poses, it would be good to bring those two narratives a little closer together. And as scientists and academics, we should learn from past failures around tobacco regulation and climate change, and maybe be a little quicker in calling out hypocrisy on the part of the companies. Thank you.