Content provided by LessWrong. All podcast content, including episodes, graphics, and podcast descriptions, is uploaded and provided directly by LessWrong or their podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process outlined here: https://ms.player.fm/legal.

“Tracing the Thoughts of a Large Language Model” by Adam Jermyn

22:18
 
[This is our blog post on the papers, which can be found at https://transformer-circuits.pub/2025/attribution-graphs/biology.html and https://transformer-circuits.pub/2025/attribution-graphs/methods.html.]
Language models like Claude aren't programmed directly by humans—instead, they're trained on large amounts of data. During that training process, they learn their own strategies to solve problems. These strategies are encoded in the billions of computations a model performs for every word it writes. They arrive inscrutable to us, the model's developers. This means that we don't understand how models do most of the things they do.
Knowing how models like Claude think would allow us to have a better understanding of their abilities, as well as help us ensure that they’re doing what we intend them to. For example:
  • Claude can speak dozens of languages. What language, if any, is it using "in its head"?
  • Claude writes text one word at a time. Is it only focusing on predicting the [...]
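The "one word at a time" framing refers to autoregressive generation: at each step the model produces a probability distribution over its vocabulary, one token is chosen, appended to the context, and the whole network runs again. As a rough illustration only (a minimal sketch using the small open GPT-2 model via Hugging Face transformers as a stand-in, since Claude's weights are not public), greedy token-by-token decoding looks like this:

```python
# Minimal sketch of autoregressive (token-by-token) generation.
# GPT-2 is used here purely as an open stand-in for a model like Claude.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

# Encode a prompt; generation extends it one token at a time.
input_ids = tokenizer("The opposite of small is", return_tensors="pt").input_ids

for _ in range(8):
    with torch.no_grad():
        logits = model(input_ids).logits          # shape: (1, seq_len, vocab_size)
    next_id = logits[0, -1].argmax()              # greedy: take the most likely next token
    input_ids = torch.cat([input_ids, next_id.view(1, 1)], dim=1)

print(tokenizer.decode(input_ids[0]))
```

Each loop iteration is one full forward pass over the growing context; the "billions of computations" the post describes happen inside that single pass, and the interpretability methods in the linked papers try to trace what those computations are doing.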
---
Outline:
(06:02) How is Claude multilingual?
(07:43) Does Claude plan its rhymes?
(09:58) Mental Math
(12:04) Are Claude's explanations always faithful?
(15:27) Multi-step Reasoning
(17:09) Hallucinations
(19:36) Jailbreaks
---
First published:
March 27th, 2025
Source:
https://www.lesswrong.com/posts/zsr4rWRASxwmgXfmq/tracing-the-thoughts-of-a-large-language-model
---
Narrated by TYPE III AUDIO.
---
Images from the article:
Simplified diagram showing multilingual concept mapping between
Semantic parsing diagram showing relationships between words in a sentence.
Conversation showing explanation of basic addition using standard algorithm.
Diagram showing three variations of a rhyming couplet about a carrot.
Flowchart showing step-by-step square root calculation with mathematical reasoning.
Two flowchart diagrams comparing responses for Michael Jordan and Michael Batkin queries.


