13 Comments
Debbie Peters

This feels like a bet on everything going right at the same time.

But I think the more interesting part is why that’s even necessary.

AI isn’t just a software story anymore. It’s running into physical limits. Chips, energy, infrastructure.

When you start hearing about rebuilding the entire system instead of improving it, it usually means the current one can’t support what’s coming next, and that’s the part worth paying attention to.

Brad Gorman

I love these insights but they read like they are all written by AI. For example “aren’t becoming obsolete; they’re becoming force multipliers.”

Do you label content written with AI and is there any way for us to tell what is Peter’s voice and what is just Peter prompted?

Don Frerichs

Wow! Quite an expansive future you paint! However, the piece exemplifies what Ken Wilber would call the disaster of flatland applied to transformative technology: exterior metrics (compute, speed, valuation) are treated as the whole story, while interiority, meaning, ethics, and culture are invisible. And it exemplifies what eco-economists would call the growth fallacy: the assumption that exponential expansion of throughput is unambiguously good, with biophysical limits treated as engineering problems to overcome rather than as real constraints.

The most important question the article never asks is also the most important question of the decade: What is all this for, and at what cost to what?

Rich Carr

Vertical works. The recursive loop is real...AI designing chips, chips powering better AI, better AI designing better chips. The machine is building itself. Agreed. Exciting time for machines making machines and the tera-compute right in front of us.

The loop nobody's funding at the same pace: the human one. That's a scary f***** reality. A 12-hour chip design cycle means the engineer's value shifts entirely from execution to evaluation...which the machine is also doing, in its own machine way. People are no longer designing the chip; they're deciding whether the chip the machine designed is the right chip to build. That's a different cognitive skill, and it doesn't compress with compute. It develops through practice, experience, and the kind of judgment that takes years to build.

Your advice to parents is the closest you get to naming it: "teach your kids to think in terms of leveraging superintelligence." That's right. But leveraging requires knowing what to ask for, evaluating what comes back, and having the judgment to know when the machine is optimizing for the wrong variable. Those are trainable skills. They're also the skills most education systems still aren't teaching and, on a public level, it isn't even on the radar.

The machine is building itself. The question is who's building the people who decide what it should build next.

Pawel Jozefiak

The CPU in 12 hours vs. 90 days is the number that stays with me. Not because it's impressive (it is), but because the bottleneck shift is happening faster than most people are adjusting for.

I've been building products with an AI agent running autonomously. The velocity is real. The new constraint is curation: knowing which outputs are worth polishing versus which are noise. Execution speed isn't the bottleneck anymore.

Proprietary data and taste are the durable advantages. Companies still optimizing for execution speed already lost the relevant race.

SPARK

Dear Peter, Today I was watching a YouTube video released by DamiLee called "Can Humans REALLY Leave Earth? [Interstellar Spaceship]". I think you would want to watch it. I let Alex know too. DamiLee is an architect, but the channel covers interesting sci-fi topics. I couldn't help but think of you, Moonshots, and the Future Vision X-Prize when I watched it, and I wrote a comment there letting them know about the Future Vision X-Prize. Others here who are into sci-fi futures would love this episode too; it brings up extremely fascinating topics that apply to future planning.

Oeste

Elon wants to go where Google already is 😉

Peter Michael Echols

Could someone please expand on the information in this 13f.info link? Is there a list of the companies? "Go to 13f.info, look up Leopold Aschenbrenner's Situational Awareness Fund, and study his holdings." What are his holdings?

sugar2cell

For systems that can receive, abundance feels like a promise.

For systems that cannot, it behaves like a threat.

Tom O’Connor

I like that Microsoft bought an OAI datacenter. I bet Microsoft got it for pennies on the dollar. Remember when Microsoft was given grief for not spending on datacenters?

Tom O’Connor

You do understand machines have been making machines since Henry Ford, right? (Or even earlier.)

Mylo

How do you expect the Iran War to affect A.I. progress?

TROY R PETERSON

And the world shrugged... This is NHI, and it will produce its own version of UAP, because humans won't have the capacity to figure out what/why/how AI is doing it. And yet folks will reply, "Yeah, I know, but I've got a lot going on right now." It's the same response as the (soft) disclosure of UAP/NHI since 2017. Sigh... Oh look! There's a bunch of fish that got stranded in the harbor when the water disappeared!