On today’s episode of Decoder, we’re going to try and figure out “digital god.” I figured we’ve been doing this long enough, let’s just get after it. Can we build an artificial intelligence so powerful that it changes the world and answers all of our questions? The AI industry has decided the answer is yes.
In September, OpenAI’s Sam Altman published a blog post claiming we’ll have superintelligent AI in “a few thousand days.” And earlier this month, Dario Amodei, the CEO of OpenAI rival Anthropic, published a 14,000-word post laying out what exactly he thinks such a system will be capable of when it arrives, which he says could be as soon as 2026.
What’s fascinating is that the visions laid out in both posts are so similar — they both promise dramatic superintelligent AI that will bring massive improvements to work, to science and healthcare, and even to democracy and prosperity. Digital god, baby.
But while the visions are similar, the companies are, in many ways, openly opposed: Anthropic is the original OpenAI defection story. Dario and a cohort of fellow researchers left OpenAI in 2021 after becoming concerned with its increasingly commercial direction and approach to safety, and they created Anthropic to be a safer, slower AI company. And the emphasis really was on safety until recently; just last year, a major New York Times profile of the company called it the “white-hot center of A.I. doomerism.”
But the launch of ChatGPT, and the generative AI boom that followed, kicked off a colossal tech arms race, and now, Anthropic is as much in the game as anyone. It’s taken in billions in funding, mostly from Amazon, and built Claude, a chatbot and language model to rival OpenAI’s GPT-4. Now, Dario is writing long blog posts about spreading democracy with AI.
So what’s going on here? Why is the head of Anthropic suddenly talking so optimistically about AI, when he was previously known for being the safer, slower alternative to the progress-at-all-costs OpenAI? Is this just more AI hype to court investors? And if AGI is really around the corner, how are we even measuring what it means for it to be safe?
To break it all down, I brought on Verge senior AI reporter Kylie Robison to discuss what it means, what’s going on in the industry, and whether we can trust these AI leaders to tell us what they really think.
If you’d like to read more about any of the news and topics we discussed in this episode, check out the links below:
- Machines of Loving Grace | Dario Amodei
- The Intelligence Age | Sam Altman
- Anthropic’s CEO thinks AI will lead to a utopia | The Verge
- AI manifestos flood the tech sector | Axios
- OpenAI just raised $6.6 billion to build ever-larger AI models | The Verge
- OpenAI was a research lab — now it’s just another tech company | The Verge
- Anthropic’s latest AI update can use a computer on its own | The Verge
- Agents are the future AI companies promise — and desperately need | The Verge
- California governor vetoes major AI safety bill | The Verge
- Inside the white-hot center of AI doomerism | NYT
- Microsoft and OpenAI’s close partnership shows signs of fraying | NYT
- The $14 billion question dividing OpenAI and Microsoft | WSJ
- Anthropic has floated $40 billion valuation in funding talks | The Information
Decoder with Nilay Patel /
A podcast from The Verge about big ideas and other problems.