38.6 - Joel Lehman on Positive Visions of AI
Typically this podcast talks about how to avert destruction from AI. But what would it take to ensure AI promotes human flourishing as well as it can? Is alignment to individuals enough, and if not, where do we go from here? In this episode, I talk with Joel Lehman about these questions.
Patreon: https://www.patreon.com/axrpodcast
Ko-fi: https://ko-fi.com/axrpodcast
Transcript: https://axrp.net/episode/2025/01/24/episode-38_6-joel-lehman-positive-visions-of-ai.html
FAR.AI: https://far.ai/
FAR.AI on X (aka Twitter): https://x.com/farairesearch
FAR.AI on YouTube: https://www.youtube.com/@FARAIResearch
The Alignment Workshop: https://www.alignment-workshop.com/
Topics we discuss, and timestamps:
01:12 - Why aligned AI might not be enough
04:05 - Positive visions of AI
08:27 - Improving recommendation systems
Links:
Why Greatness Cannot Be Planned: https://www.amazon.com/Why-Greatness-Cannot-Planned-Objective/dp/3319155237
We Need Positive Visions of AI Grounded in Wellbeing: https://thegradientpub.substack.com/p/beneficial-ai-wellbeing-lehman-ngo
Machine Love: https://arxiv.org/abs/2302.09248
AI Alignment with Changing and Influenceable Reward Functions: https://arxiv.org/abs/2405.17713
Episode art by Hamish Doodles: hamishdoodles.com