Talk
As machine learning models become more complex, understanding why they make certain predictions is becoming just as important as the predictions themselves. Whether you're dealing with business stakeholders, regulators, or just debugging unexpected results, the ability to explain your model is no longer optional; it's essential. In this talk, we'll walk through practical tools in the Python ecosystem that help bring transparency to your models, including SHAP, LIME, and Captum. Through hands-on examples, you'll learn how to apply these libraries to real-world models, from decision trees to deep neural networks, and make sense of what's happening under the hood. If you've ever struggled to explain your model's output or justify its decisions, this session will give you a toolkit for building more trustworthy, interpretable systems without sacrificing performance.
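To give a flavour of the hands-on portion, here is a minimal sketch of SHAP applied to a tree-based model. It assumes the shap and scikit-learn packages are installed; the diabetes dataset and random forest are illustrative stand-ins for whatever model you are actually explaining.

```python
# Minimal SHAP sketch: explain a random forest regressor, globally and locally.
import shap
from sklearn.datasets import load_diabetes
from sklearn.ensemble import RandomForestRegressor

# Illustrative data and model; substitute your own trained model here.
X, y = load_diabetes(return_X_y=True, as_frame=True)
model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)

# TreeExplainer computes SHAP values efficiently for tree ensembles.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)

# Global explanation: which features matter most across the whole dataset.
shap.summary_plot(shap_values, X)

# Local explanation: which features pushed one prediction up or down.
shap.force_plot(explainer.expected_value, shap_values[0], X.iloc[0], matplotlib=True)
```

These two views, a dataset-wide summary and a per-prediction breakdown, are the global and local explanations the talk walks through.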
The prerequisites for this talk are:
1. Basic familiarity with machine learning concepts
2. Working knowledge of Python
3. Some experience training or using ML models
We've all been there: your machine learning model performs well in testing, but when it comes time to explain why it made a specific prediction, things get murky. In many real-world applications, especially in domains like healthcare, finance, or operations, being able to explain your model isn't just helpful; it's critical.

This talk is a practical walkthrough of explainable AI (XAI) tools in Python, aimed at data scientists and engineers who want to make their models more transparent and trustworthy. We'll cover libraries like SHAP, LIME, and Captum, and show how to use them to generate both local and global explanations for models ranging from random forests to deep neural nets.

You'll see hands-on examples, common pitfalls to avoid, and ideas for integrating interpretability into your workflow, whether you're trying to debug your model or justify its predictions to a non-technical stakeholder. If you've ever wanted to better understand your own models, or help others trust them, this session is for you.
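For the deep learning side, a sketch along these lines shows how Captum attributes a single prediction back to input features. It assumes PyTorch and Captum are installed; the tiny network and random input are placeholders for a real trained model and data point.

```python
# Minimal Captum sketch: attribute one prediction to its input features
# using Integrated Gradients.
import torch
import torch.nn as nn
from captum.attr import IntegratedGradients

# Placeholder network standing in for a real trained model.
model = nn.Sequential(nn.Linear(10, 16), nn.ReLU(), nn.Linear(16, 1))
model.eval()

inputs = torch.randn(1, 10)    # the example we want to explain
baseline = torch.zeros(1, 10)  # reference input to compare against

ig = IntegratedGradients(model)
attributions, delta = ig.attribute(inputs, baseline, return_convergence_delta=True)

# Each value is that feature's contribution, relative to the baseline,
# to this particular prediction (a local explanation).
print(attributions)
print("convergence delta:", delta.item())
```

This is the deep-neural-net counterpart of the local explanations SHAP and LIME produce for tabular models.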
Data Engineer
Yashasvi Misra is a Data Engineer at Pure Storage and Chair of the NumFOCUS Code of Conduct Working Group, where she helps foster inclusive practices across the open-source ecosystem. She has contributed to foundational projects like NumPy and has been an active part of the Python community since her college days. Yashasvi is also a passionate advocate for diversity and inclusion in tech. She introduced a period leave policy at a previous organisation and continues to work toward building more equitable workplaces. She has shared her work and insights at conferences around the world, including PyCon India, PyCon Europe, PyLadiesCon, and PyData Global.