
Content provided by Ellie Anderson, Ph.D. and David Peña-Guzmán, Ph.D. All podcast content, including episodes, graphics, and episode descriptions, is uploaded and provided directly by Ellie Anderson, Ph.D. and David Peña-Guzmán, Ph.D. or their podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process outlined here: https://ms.player.fm/legal.

AI Safety with Shazeda Ahmed

57:06
 
Manage episode 411571458 series 2828065

Welcome your robot overlords! In episode 101 of Overthink, Ellie and David speak with Dr. Shazeda Ahmed, a specialist in AI safety, to dive into the philosophy guiding artificial intelligence. With the rise of LLMs like ChatGPT, the lofty utilitarian principles of Effective Altruism have taken the tech world by storm. Many who work on AI safety and ethics worry about the dangers of AI, from how automation might put entire categories of workers out of a job to how future forms of AI might pose a catastrophic “existential risk” for humanity as a whole. And yet, optimistic CEOs portray AI as the beginning of an easy, technology-assisted utopia. Who is right about AI: the doomers or the utopians? Whose voices are part of the conversation in the first place? Is AI risk talk spearheaded by well-meaning experts or by investor billionaires? And can philosophy guide discussions about AI toward the right thing to do?

Check out the episode's extended cut here!


Nick Bostrom, Superintelligence
Adrian Daub, What Tech Calls Thinking
Virginia Eubanks, Automating Inequality
Mollie Gleiberman, “Effective Altruism and the strategic ambiguity of ‘doing good’”
Matthew Jones and Chris Wiggins, How Data Happened
William MacAskill, What We Owe the Future
Toby Ord, The Precipice
Inioluwa Deborah Raji et al., “The Fallacy of AI Functionality”
Inioluwa Deborah Raji and Roel Dobbe, “Concrete Problems in AI Safety, Revisited”
Peter Singer, Animal Liberation
Amia Srinivasan, “Stop The Robot Apocalypse”

Support the show

Patreon | patreon.com/overthinkpodcast
Website | overthinkpodcast.com
Instagram & Twitter | @overthink_pod
Email | dearoverthink@gmail.com
YouTube | Overthink podcast


112 episodes

