Normal Policy Tools for a Normal Technology
Summary
Assad Ramzan Ali urges policymakers to treat AI as a "normal" technology, breaking it into components and applying proven policy solutions, such as addressing cloud market concentration, where Amazon, Microsoft, and Google control two-thirds of the market and entrench their position through vertical integration and exclusivity agreements.
Session Transcript
I am Assad Ramzan Ali. I am at the Vanderbilt Policy Accelerator, which is at Vanderbilt University. I think everyone here is on board with the idea that AI is powerful. As Arvind Narayanan and Sayash Kapoor have said, AI is also normal. Whether or not you agree with that view, I do, and I think it's a useful starting point for thinking about policy questions and what the implications are if you go down that path a little bit. So let me start with this: We've heard a lot about our geopolitical competition with China and what it means to win. A lot of the assumed answers look like bigger data centers for our current model types. In my view, we should look at how we've won geopolitical competitions in the past and where our technological advantage comes from. To me, that's a few things I've listed up here: R&D, immigration, startups, and access to technology. And what I'll talk a little more about is the American tradition of regulated capitalism.
Let me say a word on R&D. Last year, non-defense federal funding of R&D dropped 10% across all agencies. The same week President Biden signed that appropriations bill into law, the PRC announced a 10% increase. So we're decreasing R&D now, and this is across the board, not just AI, even though that's where a lot of our groundbreaking work comes from. Think for a minute about what that means for our long-term competition in tech and what we can do.
Many of you have seen some version of an AI tech stack. There are a million versions out there. This is one I've put together that looks roughly like the others: chips at the bottom, then cloud, then models, then applications, with complications in one way or another on the sides. But roughly, these are the companies we're talking about. I'll come back to this, but the reason I wanted to put it up now is that it's a useful way to start with a big problem and break it down. In AI policy, I think we sometimes get stuck on one thing or another, when the answer is that we actually have to look at a lot of things, and that's okay. When you look layer by layer, you see a bunch of different problems. A lot of what we've been focused on are the top and the bottom: we've talked about chips, we've talked about deepfakes, and that's great. But there's stuff in the middle, and that's where a layer-by-layer analysis can help. On the right, I've listed a bunch of known policy types, and this is where the normal framing can help. What has worked in the past with powerful technologies? What types of solutions are out there, regulatory or otherwise? This is just a bit of a brainstorm, and a lot of it is based on work re-energizing the study of regulating networks, platforms, and utilities, which many of my colleagues are working on.
To go back to the stack, one thing I think is being ignored is the cloud layer, the second from the bottom there. One thing that sticks out once you start to look at it is that Amazon, Microsoft, and Google have two-thirds of market share, and they've started to vertically integrate. They design their own chips, build their own models, have their own applications, and control access to applications, but they also invest in companies up and down that stack. They have revenue-sharing partnerships and exclusivity agreements that further that vertical integration. This isn't theoretical. We actually see harm where cloud companies prioritize resources for companies they've invested in over startups. That's the kind of economic harm that we, as a policy community, should want to avoid. To me, there are real-world solutions for this kind of problem, and we've seen them before. I'm working on a paper right now. People have talked about what policy solutions at the cloud layer look like, what regulations look like. What I'm working on is, tactically, how you would apply those and what the trade-offs are among ideas like interoperability, non-discrimination, and structural separation.
The other thing I want to put out here is that the tech stack idea is useful applied backwards as well. During the Second Industrial Revolution, the railroad stack looked a little bit like this: coal at the bottom, which looks a little like chips; railroads, which look a little like cloud computing; industrial plants like steel mills that use the coal, which look a little like AI models; and then the mass manufacturing of goods that starts at the beginning of the 20th century. This is a useful way to think about how a powerful technology, one that fundamentally changed American geography, commerce, communication, and much of society, can still be normal, and to think about the policy tools that were applied. At the dawn of the 20th century, the railroad companies had 90% of the market in coal. That's the vertical integration you see, and that's why states started to pass laws on pricing non-discrimination, and then the federal government did. So the thing I want to leave you all with is that the way to think about AI is to break it down into its component parts, then look at the policy problems that exist today and apply policy solutions that we've seen work in the past. There's a lot else we can and should be doing, but this is a framework for applying policy to a normal technology, in my view. Thank you.