Artwork

Kandungan disediakan oleh IVANCAST PODCAST. Semua kandungan podcast termasuk episod, grafik dan perihalan podcast dimuat naik dan disediakan terus oleh IVANCAST PODCAST atau rakan kongsi platform podcast mereka. Jika anda percaya seseorang menggunakan karya berhak cipta anda tanpa kebenaran anda, anda boleh mengikuti proses yang digariskan di sini https://ms.player.fm/legal.
Player FM - Aplikasi Podcast
Pergi ke luar talian dengan aplikasi Player FM !

AI Value Systems: Are Large Language Models Developing Their Own Goals?

10:00
 
Kongsi
 

Manage episode 467181412 series 3351512
Kandungan disediakan oleh IVANCAST PODCAST. Semua kandungan podcast termasuk episod, grafik dan perihalan podcast dimuat naik dan disediakan terus oleh IVANCAST PODCAST atau rakan kongsi platform podcast mereka. Jika anda percaya seseorang menggunakan karya berhak cipta anda tanpa kebenaran anda, anda boleh mengikuti proses yang digariskan di sini https://ms.player.fm/legal.

In this episode, we dive deep into “Utility Engineering: Analyzing and Controlling Emergent Value Systems in AIs”, a research paper from the Center for AI Safety, University of Pennsylvania, and University of California, Berkeley. As AI models become more agentic, their values and goals might not align with human priorities. Researchers found that Large Language Models (LLMs) exhibit coherent, structured preferences that evolve as models scale. Some models even value themselves over humans! 😳

Can we truly control AI’s internal values? This paper proposes Utility Engineering, a method to analyze and reshape AI decision-making to align with ethical and social norms. We explore how these emerging AI value systems impact education, policy, and the future of human-AI collaboration.

📢 This episode is part of our ongoing season, where SHIFTERLABS leverages Google LM to demystify cutting-edge research, translating complex insights into actionable knowledge. Join us as we explore the future of education in an AI-integrated world.

We are:

✅ Microsoft Global Training Partner, MCTs & AI Thought Leaders from Ecuador 🇪🇨

✅ Democratizing AI for educators, students, and institutions

✅ Merging EdTech & AI for next-generation learning experiences

🎯 What We Offer:

🔹 Comprehensive frameworks and digital transformation programs for schools and universities through our partnership with Microsoft

🔹 Cutting-edge research explained clearly for educators and leaders

🔹 Innovative learning strategies with AI and technology

💡 Explore more free resources:

🔸 Research articles and essays on Substack

🔸 Podcasts created with Google LM in this new season 🎙

🔸 AI-powered TikTok posts that encourage reading

🔸 Music for cognitive learning and focus 🎼

📢 Follow @ShifterLabsEC for exclusive AI & EdTech content, and don’t miss the latest edition of our successful bootcamp, “The Rise of Generative AI in Education.

ShifterLabs is Ecuador’s premier EdTech innovator and Microsoft Global Training Partner. Visit us at shifterlabs.com.

  continue reading

100 episod

Artwork
iconKongsi
 
Manage episode 467181412 series 3351512
Kandungan disediakan oleh IVANCAST PODCAST. Semua kandungan podcast termasuk episod, grafik dan perihalan podcast dimuat naik dan disediakan terus oleh IVANCAST PODCAST atau rakan kongsi platform podcast mereka. Jika anda percaya seseorang menggunakan karya berhak cipta anda tanpa kebenaran anda, anda boleh mengikuti proses yang digariskan di sini https://ms.player.fm/legal.

In this episode, we dive deep into “Utility Engineering: Analyzing and Controlling Emergent Value Systems in AIs”, a research paper from the Center for AI Safety, University of Pennsylvania, and University of California, Berkeley. As AI models become more agentic, their values and goals might not align with human priorities. Researchers found that Large Language Models (LLMs) exhibit coherent, structured preferences that evolve as models scale. Some models even value themselves over humans! 😳

Can we truly control AI’s internal values? This paper proposes Utility Engineering, a method to analyze and reshape AI decision-making to align with ethical and social norms. We explore how these emerging AI value systems impact education, policy, and the future of human-AI collaboration.

📢 This episode is part of our ongoing season, where SHIFTERLABS leverages Google LM to demystify cutting-edge research, translating complex insights into actionable knowledge. Join us as we explore the future of education in an AI-integrated world.

We are:

✅ Microsoft Global Training Partner, MCTs & AI Thought Leaders from Ecuador 🇪🇨

✅ Democratizing AI for educators, students, and institutions

✅ Merging EdTech & AI for next-generation learning experiences

🎯 What We Offer:

🔹 Comprehensive frameworks and digital transformation programs for schools and universities through our partnership with Microsoft

🔹 Cutting-edge research explained clearly for educators and leaders

🔹 Innovative learning strategies with AI and technology

💡 Explore more free resources:

🔸 Research articles and essays on Substack

🔸 Podcasts created with Google LM in this new season 🎙

🔸 AI-powered TikTok posts that encourage reading

🔸 Music for cognitive learning and focus 🎼

📢 Follow @ShifterLabsEC for exclusive AI & EdTech content, and don’t miss the latest edition of our successful bootcamp, “The Rise of Generative AI in Education.

ShifterLabs is Ecuador’s premier EdTech innovator and Microsoft Global Training Partner. Visit us at shifterlabs.com.

  continue reading

100 episod

Semua episod

×
 
Loading …

Selamat datang ke Player FM

Player FM mengimbas laman-laman web bagi podcast berkualiti tinggi untuk anda nikmati sekarang. Ia merupakan aplikasi podcast terbaik dan berfungsi untuk Android, iPhone, dan web. Daftar untuk melaraskan langganan merentasi peranti.

 

Panduan Rujukan Pantas

Podcast Teratas
Dengar rancangan ini semasa anda meneroka
Main