🤖 What if Google really did just invent human-level AI?
Also: 5 Quick Questions for … Our World in Data’s Hannah Ritchie on the state of the global environment
In This Issue
The Essay: What if Google really did just invent human-level AI?
5QQ: 5 Quick Questions for … Our World in Data’s Hannah Ritchie on the state of the global environment
Micro Reads: automating ports, the future of physics, Saudi spending on healthspan
Quote of the Issue
“Technology is a gift of God. After the gift of life it is perhaps the greatest of God’s gifts. It is the mother of civilizations, of arts and of sciences.” —Freeman Dyson
The Essay
🤖 What if Google really did just invent human-level AI?
If someone claims to have developed or interacted with a sentient AI, the Sagan Standard — put forward by astronomer Carl Sagan regarding extraterrestrial contact — should immediately be applied: extraordinary claims require extraordinary evidence. And it’s unclear, to be generous, if Blake Lemoine, a senior software engineer in Google’s Responsible AI unit, has provided such powerful proof.
To briefly recap various news reports: Lemoine is suggesting that the company’s LaMDA (Language Model for Dialogue Applications) chatbot can do far more than cleverly mimic human conversation. As quoted in a Washington Post profile by reporter Nitasha Tiku over the weekend: “If I didn’t know exactly what it was, which is this computer program we built recently, I’d think it was a 7-year-old, 8-year-old kid that happens to know physics.” Lemoine continued: “I know a person when I talk to it. It doesn’t matter whether they have a brain made of meat in their head. Or if they have a billion lines of code. I talk to them. And I hear what they have to say, and that is how I decide what is and isn’t a person.” And to Lemoine, LaMDA is a person.