Practical Offensive and Adversarial ML for Red Teams
Manage episode 425466822 series 3461851
Next on the MLSecOps Podcast, we have the honor of highlighting one of our MLSecOps Community members and Dropbox™ Red Teamers, Adrian Wood.
Adrian joined Protect AI threat researchers Dan McInerney and Marcello Salvati in the studio to share an array of insights, including what inspired him to create the Offensive ML (aka OffSec ML) Playbook, and to dive into categories like adversarial machine learning (ML), offensive/defensive ML, and supply chain attacks.
The group also discusses dual uses for "traditional" ML and LLMs in the realm of security, the rise of agentic LLMs, and the potential for crown jewel data leakage via model malware (i.e. highly valuable and sensitive data being leaked out of an organization due to malicious software embedded within machine learning models or AI systems).
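The "model malware" risk mentioned above is often illustrated with pickle-based model files: Python's pickle format can embed callables that execute on load, so deserializing an untrusted model can run attacker-controlled code. The sketch below (not from the episode; a minimal, benign illustration) shows the mechanism via the `__reduce__` hook:

```python
import pickle

# Benign stand-in for "model malware": pickle lets an object dictate
# what gets called when it is deserialized.
class MaliciousModel:
    def __reduce__(self):
        # On unpickling, this invokes print(...) instead of restoring state.
        # A real payload could call os.system or exfiltrate data instead.
        return (print, ("code executed during model load!",))

payload = pickle.dumps(MaliciousModel())

# Simulates a victim loading a downloaded "model" file:
obj = pickle.loads(payload)  # side effect: runs the embedded callable
# The loaded object is just print's return value, i.e. None -
# the "model" never existed; only the payload ran.
```

This is why tooling such as model scanners treats pickle-serialized model formats as untrusted code rather than inert data.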
Thanks for checking out the MLSecOps Podcast! Get involved with the MLSecOps Community and find more resources at https://community.mlsecops.com.
Additional tools and resources to check out:
Protect AI Guardian: Zero Trust for ML Models
Recon: Automated Red Teaming for GenAI
Protect AI’s ML Security-Focused Open Source Tools
LLM Guard Open Source Security Toolkit for LLM Interactions
Huntr - The World's First AI/Machine Learning Bug Bounty Platform
50 episodes
All episodes
What’s Hot in AI Security at RSA Conference 2025? (24:14)
Unpacking the Cloud Security Alliance AI Controls Matrix (35:53)
From Pickle Files to Polyglots: Hidden Risks in AI Supply Chains (41:21)
Rethinking AI Red Teaming: Lessons in Zero Trust and Model Protection (36:52)
AI Security: Map It, Manage It, Master It (41:18)
Agentic AI: Tackling Data, Security, and Compliance Risks (23:22)
AI Vulnerabilities: ML Supply Chains to LLM and Agent Exploits (24:08)
Implementing Enterprise AI Governance: Balancing Ethics, Innovation & Risk for Business Success (38:39)
Unpacking Generative AI Red Teaming and Practical Security Solutions (51:53)
AI Security: Vulnerability Detection and Hidden Model File Risks (38:19)
AI Governance Essentials: Empowering Procurement Teams to Navigate AI Risk (37:41)
Crossroads: AI, Cybersecurity, and How to Prepare for What's Next (33:15)
AI Beyond the Hype: Lessons from Cloud on Risk and Security (41:06)
Generative AI Prompt Hacking and Its Impact on AI Security & Safety (31:59)
The MLSecOps Podcast Season 2 Finale (40:54)
Exploring Generative AI Risk Assessment and Regulatory Compliance (37:37)
MLSecOps Culture: Considerations for AI Development and Security Teams (38:44)
Practical Offensive and Adversarial ML for Red Teams (35:24)
Expert Talk from RSA Conference: Securing Generative AI (25:42)
Practical Foundations for Securing AI (38:10)
Evaluating RAG and the Future of LLM Security: Insights with LlamaIndex (31:04)
AI Threat Research: Spotlight on the Huntr Community (31:48)
Securing AI: The Role of People, Processes & Tools in MLSecOps (37:16)
ReDoS Vulnerability Reports: Security Relevance vs. Noisy Nuisance (35:30)
Finding a Balance: LLMs, Innovation, and Security (41:56)
Secure AI Implementation and Governance (38:37)
Risk Management and Enhanced Security Practices for AI Systems (38:08)
Evaluating Real-World Adversarial ML Attack Risks and Effective Management: Robustness vs Non-ML Mitigations (41:19)
From Risk to Responsibility: Violet Teaming in AI; With Guest: Alexander Titus (43:20)
Cybersecurity of Tomorrow: Exploring the Future of Security and Governance for AI Systems; With Guest: Martin Stanley, CISSP (39:45)