Unpacking AI Bias: Impact, Detection, Prevention, and Policy; With Guest: Dr. Cari Miller, MBA, FHCA
Manage episode 360638060 series 3461851
What is AI bias and how does it impact both organizations and individual members of society? How does one detect if they’ve been impacted by AI bias? What can be done to prevent or mitigate it? Can AI/ML systems be audited for bias and, if so, how?
The MLSecOps Podcast explores these questions and more with guest Cari Miller, Founder of the Center for Inclusive Change and member of the For Humanity Board of Directors.
This week’s episode delves into the controversial topics of Trusted and Ethical AI within the realm of MLSecOps, offering insightful discussion and thoughtful perspectives. It also highlights the importance of continuing the conversation around AI bias and working toward creating more ethical and fair AI/ML systems.
Thanks for checking out the MLSecOps Podcast! Get involved with the MLSecOps Community and find more resources at https://community.mlsecops.com.
Additional tools and resources to check out:
Protect AI Guardian: Zero Trust for ML Models
Recon: Automated Red Teaming for GenAI
Protect AI’s ML Security-Focused Open Source Tools
LLM Guard Open Source Security Toolkit for LLM Interactions
Huntr - The World's First AI/Machine Learning Bug Bounty Platform
41 episodes
All episodes
AI Security: Vulnerability Detection and Hidden Model File Risks 38:19
AI Governance Essentials: Empowering Procurement Teams to Navigate AI Risk 37:41
Crossroads: AI, Cybersecurity, and How to Prepare for What's Next 33:15
AI Beyond the Hype: Lessons from Cloud on Risk and Security 41:06
Generative AI Prompt Hacking and Its Impact on AI Security & Safety 31:59
The MLSecOps Podcast Season 2 Finale 40:54
Exploring Generative AI Risk Assessment and Regulatory Compliance 37:37
MLSecOps Culture: Considerations for AI Development and Security Teams 38:44
Practical Offensive and Adversarial ML for Red Teams 35:24
Expert Talk from RSA Conference: Securing Generative AI 25:42
Practical Foundations for Securing AI 38:10
Evaluating RAG and the Future of LLM Security: Insights with LlamaIndex 31:04
AI Threat Research: Spotlight on the Huntr Community 31:48
Securing AI: The Role of People, Processes & Tools in MLSecOps 37:16
ReDoS Vulnerability Reports: Security Relevance vs. Noisy Nuisance 35:30