🤖 A Quick Q&A with ... Mark Jamison on a pro-market vision for AI/tech policy
'The real AI productivity boost will come when AI leads to the reorganization of companies and economies.'
The Biden Administration may have been heavy-handed in its attempts to initiate AI regulation, but Republicans don’t seem to have a better strategy, as my AEI colleague Mark Jamison points out in his recent blog post, “A Vision for Tech Policy is Missing from GOP Economic Plans.” To find out how the US should walk the fine line between overregulation and recklessness in the face of this powerful new technology, I asked Jamison a few quick questions.
Jamison is a nonresident senior fellow at the American Enterprise Institute, where he focuses on telecommunications and Federal Communications Commission issues, as well as the effect of technology on the economy. He is concurrently the director and Gunter Professor of the Public Utility Research Center at the University of Florida’s Warrington College of Business.
1/ What do you see as the strongest case that AI can meaningfully accelerate economic growth?
AI can accelerate economic growth by saving labor, enabling better products, or both — that is, by enabling people to produce more value for every hour they work. For example, Bloomberg’s BloombergGPT (launched in 2023) and JP Morgan’s IndexGPT (introduced in 2024) can save labor by providing more client services with fewer employees, but they can also provide better service by quickly drawing upon a broader range of resources and ideas than any single person could. Likewise, AI can speed up the processing of medical images. Interestingly, AI tends to make its biggest positive impacts on the work of people with lower-than-average abilities but can decrease the effectiveness of the most highly skilled. For example, AI improves the quality and volume of output for marginally capable attorneys, bringing them up to the level of an average attorney. But for the very best attorneys, AI can get in their way because it predicts what the mass of attorneys would do.
As we saw with the introduction of computers decades ago, the real AI productivity boost will come when AI leads to the reorganization of companies and economies. Organizations flattened with the use of computers because hierarchies became less efficient means of managing information. Computers increased economies of scale for some companies, leading them to grow larger, but decreased scale economies for others, leading to outsourcing and gig economies. AI changes the economics of decision-making by training on data from trillions of decisions around the world. As people learn to work well with these decision-making machines, work will become more decentralized, and coordination will become more complex. The economies that embrace this adaptive challenge will be the ones that roar ahead.
2/ How should we think about regulating this fast-evolving technology?
It is wrong to think about regulating AI. That isn’t to say that the use of AI won’t create new regulatory challenges, but the challenges are created by the uses, not the AI. Today’s AI does little more than predict what will happen when a particular action is taken. The AI in a self-driving Tesla, for example, predicts what will happen if the car moves in a particular direction at a certain speed in a specific context. Tesla and the driver jointly decide what the AI is allowed to do with this prediction. It is that use decision that causes great things or terrible things to happen.
It should be no surprise, then, that I have disagreements with both the EU’s regulatory approach and the Biden administration’s AI policy. The EU intends to control where AI is used based on beliefs about risk and beliefs about fundamental human rights. Special restrictions are placed on medical applications, for example, because they are considered high-risk. But there will be many instances where AI lowers the cost of medical care, making healthcare more widely available, or improves quality by providing higher-quality predictions than humans can. Any regulatory emphasis in this space should be on the quality of medical decisions, regardless of how they are made.
The Biden administration’s tortuous 117-page executive order on AI directs the development of AI guidelines, evaluation methods, and the like, spread across numerous agencies, for the purposes of ensuring that AI is safe, just, secure, and responsible, and advances equity and civil rights. This quagmire ensures that large, well-financed businesses that work well with politicians and government agencies have significant advantages over entrepreneurs. The approach also ensures that innovation moves slowly because government institutions, by their nature, prioritize avoiding negative headlines over delivering better products and processes. The Biden approach is not the stuff that dynamic economies are made of.