Content provided by Foresight Institute. All podcast content, including episodes, graphics, and podcast descriptions, is uploaded and provided directly by Foresight Institute or their podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process outlined here: https://ms.player.fm/legal.
Jan Leike | Superintelligent Alignment

9:57
Manage episode 382006351 series 2943147

Jan Leike is a leading voice in AI alignment, formerly a Research Scientist at Google DeepMind, with affiliations at the Future of Humanity Institute and the Machine Intelligence Research Institute. At OpenAI, he co-leads the Superalignment Team and has contributed to AI advances such as InstructGPT and ChatGPT. He holds a PhD from the Australian National University, and his work focuses on ensuring AI alignment.


Key Highlights

  • The launch of OpenAI's Superalignment team, targeting the alignment of superintelligence within four years.
  • The aim to automate alignment research, currently backed by 20% of OpenAI's computational power.
  • How traditional reinforcement learning from human feedback may fall short in scaling language model alignment.
  • Why there is a focus on scalable oversight, generalization, automated interpretability, and adversarial testing to ensure alignment reliability.
  • Experimentation with intentionally misaligned models to evaluate alignment strategies.

Dive deeper into the session: Full Summary


About Foresight Institute

Foresight Institute is a research organization and non-profit that supports the beneficial development of high-impact technologies. Since our founding in 1987 on a vision of guiding powerful technologies, we have continued to evolve into a many-armed organization that focuses on several fields of science and technology that are too ambitious for legacy institutions to support.


Allison Duettmann

The President and CEO of Foresight Institute, Allison Duettmann directs the Intelligent Cooperation, Molecular Machines, Biotech & Health Extension, Neurotech, and Space Programs, alongside Fellowships, Prizes, and Tech Trees. She has also been pivotal in co-initiating the Longevity Prize, pioneering initiatives like Existentialhope.com, and contributing to notable works like "Superintelligence: Coordination & Strategy" and "Gaming the Future".


Get Involved with Foresight:

Follow Us: Twitter | Facebook | LinkedIn


Note: Explore every word spoken on this podcast through Fathom.fm, an innovative podcast search engine.



Hosted on Acast. See acast.com/privacy for more information.


137 episodes

