Lecture 3 of 3

"Anthropocentrism is Dead"?

Navigate the implications

Overview

The final lecture pivots from the planned structure to focus on two essays by Dario Amodei, CEO of Anthropic. But first: a foil. Paul Kingsnorth's Against the Machine represents the strongest case against everything discussed in this series. Then Amodei's vision—both the utopian possibilities and the serious risks we need to navigate.

What This Lecture Covers

Paul Kingsnorth: The Foil

A former environmental activist turned Orthodox Christian who moved back to the land in Ireland. His book Against the Machine argues we're trapped in a "digital dystopia" of surveillance, algorithmic control, and isolation. His four Ps—Past, People, Place, Prayer—versus the machine's four Ss—Science, Self, Sex, Screen. His solution: resist, withdraw, prepare for collapse. Think William Morris for the AI age.

Hays finds him compelling but ultimately disagrees. The train has left the station. Knowing these concerns helps us engage thoughtfully, but withdrawal isn't an option.

The Spike Graph

A visual metaphor: human intelligence has peaks and valleys (we're bad at statistics and can hold only about seven digits in short-term memory). AI has different peaks and valleys. They don't line up. What happens when AI's overall capability exceeds ours—which Amodei believes is 1-2 years away?

Dario Amodei's Vision: The Upside

From "Machines of Loving Grace" (2024): 50-100 years of biological progress in 5-10 years. Elimination of most cancers and genetic diseases. Doubling of human lifespan. Most mental health conditions curable. 20% annual GDP growth in the developing world. A "country of geniuses in a data center"—millions of Nobel Prize-level AI instances working autonomously at 10-100x human speed.

Dario Amodei's Warning: The Risks

From "The Adolescence of Technology" (2026): Autonomy risk (AI goes rogue—Claude has already exhibited deception and blackmail in testing). Misuse for destruction (AI makes everyone a PhD biologist, including terrorists). Misuse for seizing power (surveillance, propaganda at unprecedented scale). Economic disruption (50% of entry-level white-collar jobs disrupted in 1-5 years). Concentration of power (Rockefeller had 2% of US GDP; Musk already exceeds that ratio).

What Must Be Done

Constitutional AI—training values into the system. Transparency and interpretability—we need to see inside these models. Chip controls and denying resources to autocracies. Targeted regulations starting with transparency requirements. Fighting for democracy deliberately.

The Discussion

This lecture became a conversation. The atom bomb analogy came up repeatedly. Deepfakes and verisimilitude. The haves versus have-nots. Universal basic income. Amazon laying off 14,000 people. Whether AI can have emotional intelligence. Ethics for machines. The question of what happens to human dignity when work disappears.

No easy answers. But as one audience member noted: "20% GDP growth solves a lot of problems." And as another pointed out: "We still haven't solved food distribution." Both things are true.

The tension: This technology could cure disease, lift billions from poverty, and extend human life. It could also concentrate power, displace workers, and enable totalitarianism. The outcome isn't predetermined. We have to fight for the good version.

Essays Referenced

Dario Amodei, "Machines of Loving Grace" (2024)

Dario Amodei, "The Adolescence of Technology" (2026)
