Content provided by The Nonlinear Fund. All podcast content, including episodes, graphics, and podcast descriptions, is uploaded and provided directly by The Nonlinear Fund or their podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process outlined here: https://ms.player.fm/legal.

LW - OpenAI #8: The Right to Warn by Zvi

52:45
 
Link to original article
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: OpenAI #8: The Right to Warn, published by Zvi on June 17, 2024 on LessWrong.

The fun at OpenAI continues. We finally have the details of how Leopold Aschenbrenner was fired, at least according to Leopold. We have a letter calling for a way for employees to do something if frontier AI labs are endangering safety. And we have continued details and fallout from the issues with non-disparagement agreements and NDAs. Hopefully we can stop meeting like this for a while.

Due to jury duty and it being largely distinct, this post does not cover the appointment of General Paul Nakasone to the board of directors. I'll cover that later, probably in the weekly update.

The Firing of Leopold Aschenbrenner

What happened that caused Leopold to leave OpenAI? Given the nature of this topic, I encourage getting the story from Leopold by following along on the transcript of that section of his appearance on the Dwarkesh Patel Podcast or watching the section yourself. This is especially true on the question of the firing (control-F for 'Why don't I'). I will summarize, but it is much better to use the primary source for claims like this. I would quote, but I'd want to quote entire pages of text, so go read or listen to the whole thing.

Remember that this is only Leopold's side of the story. We do not know what is missing from his story, or what parts might be inaccurate. It has, however, been over a week, and there has been no response from OpenAI. If Leopold's statements are true and complete? Well, it doesn't look good. The short answer is:

1. Leopold refused to sign the OpenAI letter demanding the board resign.
2. Leopold wrote a memo about what he saw as OpenAI's terrible cybersecurity.
3. OpenAI did not respond.
4. There was a major cybersecurity incident.
5. Leopold shared the memo with the board.
6. OpenAI admonished him for sharing the memo with the board.
7. OpenAI went on a fishing expedition to find a reason to fire him.
8. OpenAI fired him, citing 'leaking information' that did not contain any non-public information, and that was well within OpenAI communication norms.
9. Leopold was explicitly told that without the memo, he wouldn't have been fired.

You can call it 'going outside the chain of command.' You can also call it 'fired for whistleblowing under false pretenses,' treating the board as an enemy who should not be informed about potential problems with cybersecurity, and retaliation for not being sufficiently loyal to Altman. Your call.

For comprehension I am moving statements around, but here is the story I believe Leopold is telling, with time stamps.

1. (2:29:10) Leopold joined superalignment. The goal of superalignment was to find the successor to RLHF, because it probably won't scale to superhuman systems; humans can't evaluate superhuman outputs. He liked Ilya and the team and the ambitious agenda on an important problem.
   1. Not probably won't scale. It won't scale. I love that Leike was clear on this.
2. (2:31:24) What happened to superalignment? OpenAI 'decided to take things in a somewhat different direction.' After November there were personnel changes, some amount of 'reprioritization.' The 20% compute commitment, a key part of recruiting many people, was broken.
   1. If you turn against your safety team because of corporate political fights and thus decide to 'go in a different direction,' and that different direction is to not do the safety work? And your safety team quits with no sign you are going to replace them? That seems quite bad.
   2. If you recruit a bunch of people based on a very loud public commitment of resources, then you do not commit those resources? That seems quite bad.
3. (2:32:25) Why did Leopold leave? They said you were fired; what happened? I encourage reading Leopold's exact answer and not taking my word for this, but the short version i...

