Content provided by information labs and Information labs. All podcast content, including episodes, graphics, and podcast descriptions, is uploaded and provided directly by information labs and Information labs or their podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process outlined here: https://ms.player.fm/legal.

AI lab TL;DR | Joan Barata - Transparency Obligations for All AI Systems

17:05
Manage episode 523574650 series 3480798
🔍 In this TL;DR episode, Joan explains how Article 50 of the EU AI Act sets out high-level transparency obligations for AI developers and deployers—requiring users to be informed when they interact with AI or access AI-generated content—while noting that excessive labeling can itself be misleading. He highlights why the forthcoming Code of Practice must focus on clear principles rather than fixed technical solutions, ensuring transparency helps prevent deception without creating confusion in a rapidly evolving technological environment.
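The episode distinguishes two disclosure layers: a human-readable notice when someone might mistake an AI system for a person, and a machine-readable mark attached to AI-generated media. A minimal sketch of that split is below; the function and field names are purely illustrative assumptions, not drawn from the AI Act, the Code of Practice, or any marking standard.

```python
import json

def chatbot_notice() -> str:
    """Human-readable disclosure, shown only where confusion with a human is plausible."""
    return "You are interacting with an AI system."

def mark_generated_media(payload: bytes, generator: str) -> dict:
    """Wrap generated media with a machine-readable provenance record (illustrative schema)."""
    return {
        "media": payload.hex(),
        "provenance": {
            "ai_generated": True,   # the machine-readable flag downstream tools could check
            "generator": generator, # which system produced the content
        },
    }

record = mark_generated_media(b"\x89PNG...", generator="example-model")
print(json.dumps(record["provenance"]))
```

The point of the split, as discussed in the episode, is that each layer targets a different audience: the notice targets end users directly, while the provenance record targets platforms and tools that decide when (and whether) to surface a label.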

📌 TL;DR Highlights

⏲️[00:00] Intro

⏲️[00:33] Q1-What’s the core purpose of Article 50, and why is this 10-month drafting window so critical for the industry?

⏲️[02:31] Q2-What’s the difference between disclosing a chatbot and technically marking AI-generated media?

⏲️[06:27] Q3-What is the inherent danger of "too much transparency" or over-labeling content? How do we prevent the "liar's dividend" and "label fatigue" while still fighting deception?

⏲️[10:00] Q4-If drafters should avoid one rigid technical fix, what’s your top advice for building flexibility into the Code of Practice?

⏲️[13:11] Q5-Did you consult other stakeholders when developing your whitepaper analysis?

⏲️[16:45] Wrap-up & Outro

💭 Q1 - What’s the core purpose of Article 50, and why is this 10-month drafting window so critical for the industry?

🗣️ “Article 50 sets only broad transparency rules—so a strong Code of Practice is essential.”

💭 Q2 - What’s the difference between disclosing a chatbot and technically marking AI-generated media?

🗣️ “If there’s a risk of confusion, users must be clearly told they’re interacting with AI.”

💭 Q3 - What is the inherent danger of "too much transparency" or over-labeling content? How do we prevent the "liar's dividend" and "label fatigue" while still fighting deception?

🗣️ “Too much transparency can mislead just as much as too little.”

💭 Q4 - If drafters should avoid one rigid technical fix, what’s your top advice for building flexibility into the Code of Practice?

🗣️ “We should focus on principles, not chase technical solutions that will be outdated in months.”

💭 Q5 - What is the one core idea you want policymakers to take away from your research?

🗣️ “Transparency raises legal, technical, psychological, and even philosophical questions—information alone doesn’t guarantee real agency."

📌 About Our Guest

🎙️ Joan Barata | Faculdade de Direito - Católica no Porto

🌐 linkedin.com/in/joan-barata-a649876

Joan Barata works on freedom of expression, media regulation, and intermediary liability issues. He is a Visiting Professor at Faculdade de Direito - Católica no Porto and a Senior Legal Fellow at The Future of Free Speech project at Vanderbilt University. He is also a Fellow of the Program on Platform Regulation at the Stanford Cyber Policy Center.

#AI #artificialintelligence #generativeAI

37 episodes
