
Content provided by Daniel Filan. All podcast content, including episodes, graphics, and podcast descriptions, is uploaded and provided directly by Daniel Filan or their podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process outlined here: https://ms.player.fm/legal.

39 - Evan Hubinger on Model Organisms of Misalignment

Duration: 1:45:47
 

The 'model organisms of misalignment' line of research creates AI models that exhibit various types of misalignment, and studies them to try to understand how the misalignment occurs and whether it can be somehow removed. In this episode, Evan Hubinger talks about two papers he's worked on at Anthropic under this agenda: "Sleeper Agents" and "Sycophancy to Subterfuge".

Patreon: https://www.patreon.com/axrpodcast

Ko-fi: https://ko-fi.com/axrpodcast

The transcript: https://axrp.net/episode/2024/12/01/episode-39-evan-hubinger-model-organisms-misalignment.html

Topics we discuss, and timestamps:

0:00:36 - Model organisms and stress-testing

0:07:38 - Sleeper Agents

0:22:32 - Do 'sleeper agents' properly model deceptive alignment?

0:38:32 - Surprising results in "Sleeper Agents"

0:57:25 - Sycophancy to Subterfuge

1:09:21 - How models generalize from sycophancy to subterfuge

1:16:37 - Is the reward editing task valid?

1:21:46 - Training away sycophancy and subterfuge

1:29:22 - Model organisms, AI control, and evaluations

1:33:45 - Other model organisms research

1:35:27 - Alignment stress-testing at Anthropic

1:43:32 - Following Evan's work

Main papers:

Sleeper Agents: Training Deceptive LLMs that Persist Through Safety Training: https://arxiv.org/abs/2401.05566

Sycophancy to Subterfuge: Investigating Reward-Tampering in Large Language Models: https://arxiv.org/abs/2406.10162

Anthropic links:

Anthropic's newsroom: https://www.anthropic.com/news

Careers at Anthropic: https://www.anthropic.com/careers

Other links:

Model Organisms of Misalignment: The Case for a New Pillar of Alignment Research: https://www.alignmentforum.org/posts/ChDH335ckdvpxXaXX/model-organisms-of-misalignment-the-case-for-a-new-pillar-of-1

Simple probes can catch sleeper agents: https://www.anthropic.com/research/probes-catch-sleeper-agents

Studying Large Language Model Generalization with Influence Functions: https://arxiv.org/abs/2308.03296

Stress-Testing Capability Elicitation With Password-Locked Models [aka model organisms of sandbagging]: https://arxiv.org/abs/2405.19550

Episode art by Hamish Doodles: hamishdoodles.com
