How to show biases in ML models?

The What-If Tool: Code-Free Probing of Machine Learning Models

Given pointers to a TensorFlow model and a dataset, the tool lets users analyze the model without writing code.

  • Allows users to visualize possible bias in machine learning models

How would changes to a datapoint affect my model's prediction? Does it perform differently for various groups, for example historically marginalized people? How diverse is the dataset I am testing my model on?
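The "does it perform differently for various groups" question above can also be checked by hand with a few lines of code. A minimal sketch of that kind of per-group comparison, which the What-If Tool surfaces interactively; the function name `accuracy_by_group` and the toy labels, predictions, and group names are illustrative, not part of the tool:

```python
def accuracy_by_group(labels, preds, groups):
    """Return {group: accuracy} for parallel lists of binary
    labels, model predictions, and group memberships."""
    totals, correct = {}, {}
    for y, p, g in zip(labels, preds, groups):
        totals[g] = totals.get(g, 0) + 1
        correct[g] = correct.get(g, 0) + (y == p)
    return {g: correct[g] / totals[g] for g in totals}

# Made-up toy data: two groups of four examples each.
labels = [1, 0, 1, 1, 0, 1, 0, 0]
preds  = [1, 0, 0, 1, 0, 1, 1, 1]
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]

print(accuracy_by_group(labels, preds, groups))
# Group "a" gets 3/4 right, group "b" only 2/4: a gap like this
# is a signal to investigate the model or the data for bias.
```

A gap in such per-group metrics does not prove bias on its own, but it points at slices of the data worth inspecting more closely, which is exactly the workflow the tool supports without code.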

The What-If Tool, showing a set of 250 face pictures and their results from a model that detects smiles.
