💡 5 Quick Questions for … cultural critic Christine Rosen on AI and American society
'We should make every effort possible to be clear-eyed about human behavior as we design AI tools to supplement, support, and in some cases replace human decision-making.'
Some self-promotion: I have a book coming out on October 3. The Conservative Futurist: How To Create the Sci-Fi World We Were Promised is currently available for pre-order pretty much everywhere, including Amazon. I’m very excited about it!
America was once the world’s dream factory. We turned imagination into reality, from curing polio to landing on the Moon to creating the internet. And we were confident that more wonders lay just over the horizon: clean and infinite energy, a cure for cancer, computers and robots as humanity’s great helpers, and space colonies. (Also, of course, flying cars.) Science fiction, from The Jetsons to Star Trek, would become fact.
But as we moved into the late 20th century, we grew cautious, even cynical, about what the future held and our ability to shape it. Too many of us saw only the threats from rapid change. The year 2023 marks the 50th anniversary of the start of the Great Downshift in technological progress and economic growth, followed by decades of economic stagnation, downsized dreams, and a popular culture fixated on catastrophe: AI that will take all our jobs if it doesn’t kill us first, nuclear war, climate chaos, plague, and the zombie apocalypse. We are now at risk of another half-century of making the same mistakes and pushing a pro-progress future into the realm of impossibility.
But American Enterprise Institute (AEI) economic policy expert and long-time CNBC contributor James Pethokoukis argues that there’s still hope. We can absolutely turn things around—if we the people choose to dream and act. How dare we delay or fail to deliver for ourselves and our children.
With groundbreaking ideas and sharp analysis, Pethokoukis provides a detailed roadmap, both optimistic and realistic, to a fantastic future filled with incredible progress and prosperity. Through an exploration of culture, economics, and history, The Conservative Futurist tells the fascinating story of what went wrong in the past and what we need to do today to finally get it right. Using the latest economic research and policy analysis, as well as insights from top economists, historians, and technologists, Pethokoukis reveals that the failed futuristic visions of the past were totally possible. And they still are. If America is to fully recover from the COVID-19 pandemic, take full advantage of emerging tech from generative AI to CRISPR to reusable rockets, and launch itself into a shining tomorrow, it must again become a fully risk-taking, future-oriented society. It’s time for America to embrace the future confidently, act boldly, and take that giant leap forward.
Quote of the Issue
“I classify all economies as low income because we are all poor by the standards of the future.” - Eli Dourado
5QQ
Christine Rosen is a senior fellow at the American Enterprise Institute, where she focuses on American history, society and culture, technology and culture, and feminism. While artificial intelligence advancements have no shortage of skeptics and naysayers, Christine offers the kind of thoughtful concern about AI that optimists and enthusiasts should grapple with.
1/ AI might enable automated therapists or teachers without the flaws of impatience and lack of compassion that human therapists and teachers might have. Isn't this a good thing?
When used in moderation and for specific things, such as the administrative work of initially screening patients or making appointments, AI could streamline many of the mundane tasks performed by therapists, which would largely be a good thing. However, much of the hype surrounding the use of mental health chatbots such as Woebot and Wysa encourages the idea that such technologies are legitimate replacements for in-person therapy, which they are not. Humans already suffer from automation bias, which can make us over-reliant on automated decision-making systems even when those systems yield poor solutions. Chatbots might exacerbate that bias, creating new challenges for patients in need of mental health support from a human therapist.
A human therapist sees patients in their full context – their social lives, their physical health, their place in their communities – in a way a chatbot cannot. In-person therapy encourages empathy and the building of trust between patient and caregiver, as well as an understanding of what it means to be human. Replacing that unquantifiable connection with a chatbot would degrade rather than improve the patient/therapist experience.
2/ What would it look like for an AI regulatory framework to take virtue and values into account?
First, such a framework would need to begin by focusing on what ends we are pursuing in our use of AI, rather than merely addressing the means of enabling it. We are at a critical moment where we can craft reasonable guidelines for safety and risk that are not merely reactionary, but also not so draconian that they stifle innovation. Where is the use of AI appropriate, and in what areas of society might we decide it doesn’t belong? As we have seen with sophisticated algorithms that now determine everything from a person’s eligibility for a loan to their eligibility for parole, an uncritical embrace of new technological power can produce unintended consequences that are sometimes at odds with the values of a free society. For example, we should be asking what values our use of AI in fields such as K-12 education promotes before AI is in widespread use in classrooms, and asking whether it is ethical to employ AI in warfare or domestic and international surveillance.
Second, we should question whether our use of AI will potentially harm the most vulnerable members of society: children, the elderly, people with special needs, and any other group that might be targeted for unnecessary surveillance. What biases might be present in AI design that could limit the freedom of individuals and groups, and are the designers of AI systems able to identify and fix them? One of the challenges of AI in its early forms is that its designers are often unable to explain how or why much of it works – which suggests that it will be that much more difficult to reverse-engineer AI-enabled systems that produce harmful effects. We should also be asking who will be held responsible if AI does harm people.
Finally, and most importantly, what risk does AI pose to the healthy functioning of democracy? In a political culture already awash with misinformation and polarization, how will AI-enabled platforms worsen or improve our ability to foster deliberation and thoughtful policymaking? How might AI-enabled tools undermine democratic norms, rules, and institutions?
These questions might seem too broad or existential as companies race to bring AI products to market, but failing to ask them now merely delays having to deal with the fallout later, perhaps when we have already experienced serious unintended consequences from our use of AI.