How We Audit ML Systems for Risk, Drift, and Misuse
About This Session
As machine learning systems become deeply embedded in products, it's not just accuracy that matters; it's accountability. This talk covers our internal approach to proactively identifying risks in ML workflows, from unintentional bias to model drift and even potential misuse. I'll walk through how we adapted standard DevSecOps patterns (such as monitoring, alerting, and versioning) to the ML stack, and how we created a lightweight review system for ethical red flags.
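To make the idea concrete ahead of the session, here is a minimal, illustrative sketch of the kind of monitoring-and-alerting pattern the abstract describes, applied to model drift. It is not the speaker's actual system; the function name `check_feature_drift` and the alert threshold are assumptions, and the drift test shown (a two-sample Kolmogorov-Smirnov comparison against a training baseline) is one common choice among many.

```python
# Illustrative sketch only: adapting a monitor/alert pattern to an ML feature.
# Assumes the training-time baseline distribution for a feature is available
# and compares it against recent live values.
import numpy as np
from scipy import stats

DRIFT_P_VALUE = 0.01  # assumed alerting threshold; tune per feature in practice


def check_feature_drift(baseline: np.ndarray, live: np.ndarray) -> bool:
    """Return True (and emit an alert) if live data has drifted from baseline."""
    statistic, p_value = stats.ks_2samp(baseline, live)
    drifted = p_value < DRIFT_P_VALUE
    if drifted:
        # In production this would route to an alerting channel, not stdout.
        print(f"DRIFT ALERT: KS statistic={statistic:.3f}, p={p_value:.4f}")
    return drifted


if __name__ == "__main__":
    rng = np.random.default_rng(seed=0)
    baseline = rng.normal(loc=0.0, scale=1.0, size=5_000)  # training-time sample
    shifted = rng.normal(loc=0.5, scale=1.0, size=5_000)   # simulated drifted live data
    check_feature_drift(baseline, shifted)
```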
Speaker

Saloni Garg
Senior Software Engineer - Wayfair
International Red Hat Women in Open Source Awardee | Mozilla Open Leader 2019 | Open source diversity advocate | Google Venkat Scholarship winner | Speaker