
Content provided by Carnegie Mellon University Software Engineering Institute and Members of Technical Staff at the Software Engineering Institute. All podcast content, including episodes, graphics, and podcast descriptions, is uploaded and provided directly by Carnegie Mellon University Software Engineering Institute and Members of Technical Staff at the Software Engineering Institute or their podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process outlined here: https://ms.player.fm/legal.

From Data to Performance: Understanding and Improving Your AI Model

26:42

Modern data analytic methods and tools—including artificial intelligence (AI) and machine learning (ML) classifiers—are revolutionizing prediction capabilities and automation through their capacity to analyze and classify data. To produce such results, these methods depend on correlations. However, an overreliance on correlations can lead to prediction bias and reduced confidence in AI outputs.

Drift in data and concept, evolving edge cases, and emerging phenomena can undermine the correlations that AI classifiers rely on. As the U.S. government increases its use of AI classifiers and predictors, these issues multiply, and users may grow to distrust results. To address inaccurate correlations and predictions, we need new methods for ongoing testing and evaluation of AI and ML accuracy. In this podcast from the Carnegie Mellon University Software Engineering Institute (SEI), Nicholas Testa, a senior data scientist in the SEI's Software Solutions Division (SSD), and Crisanne Nolan, an Agile transformation engineer, also in SSD, sit down with Linda Parker Gates, principal investigator for this research and initiative lead for Software Acquisition Pathways at the SEI, to discuss the AI Robustness (AIR) tool, which allows users to gauge AI and ML classifier performance with data-based confidence.
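
The episode does not walk through AIR's internals, but the kind of ongoing check the description calls for is easy to illustrate. The sketch below, which is not the AIR tool, retests a trained classifier's accuracy on a later batch of labeled data and flags features whose distribution has shifted away from the training data, here with a two-sample Kolmogorov-Smirnov test. The check_batch helper, the synthetic data, and the thresholds are all illustrative assumptions.

```python
import numpy as np
from scipy.stats import ks_2samp
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

# One synthetic dataset: the first 2000 rows stand in for the data the
# model was trained on, the last 500 for a batch that arrives later.
X, y = make_classification(n_samples=2500, n_features=5, random_state=0)
X_train, y_train = X[:2000], y[:2000]
X_batch, y_batch = X[2000:].copy(), y[2000:]

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

def check_batch(X_new, y_new, drift_alpha=0.01):
    """Return batch accuracy and the indices of features whose
    distribution differs from the training data (two-sample KS test)."""
    acc = accuracy_score(y_new, model.predict(X_new))
    drifted = [j for j in range(X_new.shape[1])
               if ks_2samp(X_train[:, j], X_new[:, j]).pvalue < drift_alpha]
    return acc, drifted

print("before drift:", check_batch(X_batch, y_batch))
X_batch[:, 0] += 2.0  # simulate data drift in a single feature
print("after drift: ", check_batch(X_batch, y_batch))
```

Accuracy alone can look fine until drift becomes severe, which is why a monitoring loop like this pairs it with a distribution test: the KS check flags the shifted feature even when predictions have not yet visibly degraded.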


427 episodes

