Faster, Please!

🎇📰 The New York Times vs AGI
The newspaper's top tech reporter offers a lengthy timeline to human-level artificial 'general' intelligence — and a lesson in how media framing skews technological debates

James Pethokoukis
May 20, 2025
My fellow pro-growth/progress/abundance Up Wingers,

Imagine: A top New York Times technology reporter writes an upbeat piece — maybe something like “Why We’re Likely to Get Artificial General Intelligence Sometime Soon” — that gives a contrarian take on widespread pessimism about reaching human-level AI. Such a piece might have first presented the dominant, negative narrative. Maybe something like this:

Critics of artificial intelligence's rapid advance have a compelling case: Current AI systems fundamentally differ from human cognition. They predict patterns in data rather than truly understand the world. Over 75% of respected AI researchers believe today's methods cannot lead to AGI. While machines excel at specific tasks like calculations or pattern recognition, they lack essential human abilities: handling novel situations, physical world understanding, recognizing subtleties like irony, and generating truly original ideas. Many experts argue we need at least one revolutionary breakthrough beyond neural networks to bridge this gap.

“The technology we’re building today is not sufficient to get there,” said Nick Frosst, a founder of the A.I. start-up Cohere who previously worked as a researcher at Google and studied under the most revered A.I. researcher of the last 50 years. “What we are building now are things that take in words and predict the next most likely word, or they take in pixels and predict the next most likely pixel. That’s very different from what you and I do.”

Turning to the bright side

From there, the NYT piece shifts gears to outline the techno-optimist case. Among the key points:

  • AI CEO optimism. Sam Altman (OpenAI), Dario Amodei (Anthropic), and Elon Musk (Tesla and xAI) all predict AGI within 1-4 years.

  • Scaling laws continue. Jared Kaplan, Anthropic chief science officer, notes, “There are all these trends where all of the limitations are going away” as AI follows predictable improvement patterns with increased data and computation.

  • Surprise performance leaps. Recall that AlphaGo stunned experts by winning a decade earlier than predicted, suggesting we consistently underestimate AI capabilities.

  • Proto-AGI already exists. Today's “jagged frontier” systems already surpass humans in complex domains like high-level math and coding, according to at least some tests.

  • Imminent breakthrough possible. The next innovation needed for AGI “could arrive tomorrow” given unprecedented research investment. Lots of cash is chasing AGI.

Back to reality

Now, as you may already suspect, what actually happened was that a top NYT tech reporter, Cade Metz, recently wrote a downbeat piece, “Why We’re Unlikely to Get Artificial General Intelligence Anytime Soon,” that offered a contrarian take on Silicon Valley optimism about reaching AGI sometime soon, after first presenting the positive case. Indeed, all the facts and commentary in my alternate NYT piece came from the real one.

A few thoughts:
