
Content provided by John Willis. All podcast content, including episodes, graphics, and podcast descriptions, is uploaded and provided directly by John Willis or their podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process outlined here: https://ms.player.fm/legal.

S4 E21 - Erik J. Larson - The Myth of AI and Unravelling The Hype

1:04:25
 

Manage episode 440556139 series 3568163

In this episode of the Profound Podcast, I speak with Erik J. Larson, author of The Myth of Artificial Intelligence, about the speculative nature and real limitations of AI, particularly in relation to achieving Artificial General Intelligence (AGI). Larson delves into the philosophical and scientific misunderstandings surrounding AI, challenging the dominant narrative that AGI is just around the corner. Drawing from his expertise and experience in the field, Larson explains why much of the AI hype lacks empirical foundation. He emphasizes the limits of current AI models, particularly their reliance on inductive reasoning, which, though powerful, is insufficient for achieving human-like intelligence.

Larson discusses how the field of AI has historically blended speculative futurism with genuine technological advancements, often fueled by financial incentives rather than scientific rigor. He highlights how this approach has led to misconceptions about AI’s capabilities, especially in the context of AGI. Drawing connections to philosophical theories of inference, Larson introduces deductive, inductive, and abductive reasoning, explaining how current AI systems fall short in their over-reliance on inductive methods. The conversation touches on the challenges of abduction (the "broken" form of reasoning humans often use) and the difficulty of replicating this in AI systems.

Throughout the discussion, we explore the social and ethical implications of AI, including concerns about data limitations, the dangers of synthetic data, and the looming “data wall” that could hinder future AI progress. We also touch on broader societal impacts, such as how AI’s potential misuse and over-reliance might affect innovation and human intelligence.


71 episodes

