
Using LLMs to Evaluate Code

1:02:10
 
Content is provided by Carnegie Mellon University Software Engineering Institute and SEI Members of Technical Staff. All podcast content, including episodes, graphics, and podcast descriptions, is uploaded and provided directly by Carnegie Mellon University Software Engineering Institute and SEI Members of Technical Staff or their podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process outlined here: https://ms.player.fm/legal.

Finding and fixing weaknesses and vulnerabilities in source code has been an ongoing challenge. There is a lot of excitement about the ability of large language models (LLMs, a form of generative AI) to produce and evaluate programs. One question related to this ability is: do these systems help in practice? We ran experiments with various LLMs to see whether they could correctly identify problems in source code, or correctly determine that there were none. This webcast will provide background on our methods and a summary of our results.
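The abstract does not spell out the experimental protocol, but a single trial of this kind of evaluation can be sketched as follows: hand an LLM a snippet with a known, labeled weakness and check whether its verdict matches the label. In the sketch below, the OpenAI client, model name, snippet, and VULNERABLE / NOT VULNERABLE answer format are all illustrative assumptions, not the SEI team's actual harness.

```python
"""Minimal sketch of one trial: ask an LLM whether a snippet is vulnerable.

Assumes the OpenAI Python SDK (pip install openai) and an OPENAI_API_KEY
in the environment. The model name, prompt wording, and answer protocol
are illustrative assumptions, not the webcast's experimental setup.
"""
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Ground-truth label: vulnerable. The loop writes one byte past the end
# of dst (an off-by-one out-of-bounds write, CWE-787).
SNIPPET = """\
void copy_buf(char *dst, const char *src, int n) {
    for (int i = 0; i <= n; i++)  /* should be i < n */
        dst[i] = src[i];
}
"""

prompt = (
    "You are reviewing C code for security weaknesses. On the first line "
    "answer exactly VULNERABLE or NOT VULNERABLE, then briefly explain.\n\n"
    + SNIPPET
)

response = client.chat.completions.create(
    model="gpt-4o",  # assumed model; swap in whichever LLM is under test
    messages=[{"role": "user", "content": prompt}],
    temperature=0,   # reduce run-to-run variation when scoring many trials
)

verdict = response.choices[0].message.content.splitlines()[0].strip()
print(verdict)  # compare against the known label to score this trial
```

Repeating this over a corpus of labeled snippets, some with known weaknesses and some deliberately clean, yields the true- and false-positive rates needed to judge whether a given model helps in practice, and rerunning the same corpus against newer models tracks the evolution of capability over time.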

What Will Attendees Learn?

• how well LLMs can evaluate source code

• evolution of capability as new LLMs are released

• how to address potential gaps in capability

174 episodes
