Highlights: #200 – Ezra Karger on what superforecasters and experts think about existential risks
This is a selection of highlights from episode #200 of The 80,000 Hours Podcast. These aren't necessarily the most important, or even most entertaining parts of the interview — and if you enjoy this, we strongly recommend checking out the full episode:
Ezra Karger on what superforecasters and experts think about existential risks
And if you're finding these highlights episodes valuable, please let us know by emailing [email protected].
Highlights:
- Luisa’s intro (00:00:00)
- Why we need forecasts about existential risks (00:00:26)
- Headline estimates of existential and catastrophic risks (00:02:43)
- What explains disagreements about AI risks? (00:06:18)
- Learning more doesn't resolve disagreements about AI risks (00:08:59)
- A lot of disagreement about AI risks is about when AI will pose risks (00:11:31)
- Cruxes about AI risks (00:15:17)
- Is forecasting actually useful in the real world? (00:18:24)
Highlights put together by Simon Monsour, Milo McGuire, and Dominic Armstrong