Use the ObzAI library locally to detect outliers in lung nodule images and to explain a ViT classifier of cancer status.
The classifier attaches a linear head (`torch.nn.Linear`) onto a DINO backbone.
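As a sketch of that setup, the snippet below attaches a trainable `torch.nn.Linear` head to a frozen feature extractor. The `DummyBackbone` is a hypothetical stand-in for DINO so the example stays self-contained; only the general pattern (frozen backbone, trainable linear probe) reflects the text above.

```python
import torch
import torch.nn as nn

class DummyBackbone(nn.Module):
    """Hypothetical stand-in for a DINO backbone: maps an image batch to features."""
    def __init__(self, feat_dim: int = 384):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.proj = nn.Linear(3, feat_dim)

    def forward(self, x):
        x = self.pool(x).flatten(1)   # (B, 3): global average per channel
        return self.proj(x)           # (B, feat_dim)

class LinearProbe(nn.Module):
    """Frozen backbone + trainable torch.nn.Linear head (binary cancer status)."""
    def __init__(self, backbone: nn.Module, feat_dim: int = 384, n_classes: int = 2):
        super().__init__()
        self.backbone = backbone
        for p in self.backbone.parameters():
            p.requires_grad = False   # freeze the backbone; train only the head
        self.head = nn.Linear(feat_dim, n_classes)

    def forward(self, x):
        with torch.no_grad():
            feats = self.backbone(x)
        return self.head(feats)

model = LinearProbe(DummyBackbone())
logits = model(torch.randn(4, 3, 64, 64))
assert logits.shape == (4, 2)
```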
`FirstOrderExtractor` is a straightforward and fast tool for extracting first-order statistical features from images. These features summarize general properties of the pixel intensity values, such as the mean, variance, and skewness. For example, they are useful for identifying images that are overly bright or dark, or excessively variable in their intensities, compared to the reference dataset.
Note: First-order statistical features are invariant to the arrangement of pixels; in other words, they do not capture spatial relationships within the image.
At any point, you can view the list of features that `FirstOrderExtractor` computes by accessing its `.feature_names` attribute. This helps you understand exactly which statistics are being extracted from your images.
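To make the idea concrete, here is a minimal NumPy sketch of such first-order statistics (illustrative only, not ObzAI's implementation). It also demonstrates the invariance to pixel arrangement noted above: shuffling the pixels leaves every feature unchanged.

```python
import numpy as np

def first_order_features(img: np.ndarray) -> dict:
    """Illustrative first-order statistics (not ObzAI's implementation)."""
    x = img.ravel().astype(float)
    mean = x.mean()
    std = x.std()
    # Skewness: the third standardized moment (0 for symmetric distributions).
    skew = ((x - mean) ** 3).mean() / (std ** 3 + 1e-12)
    return {"mean": mean, "variance": x.var(), "skewness": skew}

img = np.arange(16, dtype=float).reshape(4, 4)
rng = np.random.default_rng(0)
shuffled = rng.permutation(img.ravel()).reshape(4, 4)

# The same pixel values in a different arrangement yield the same features:
f1, f2 = first_order_features(img), first_order_features(shuffled)
assert all(np.isclose(f1[k], f2[k]) for k in f1)
```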
`GMMDetector` is an outlier detection method that utilizes a Gaussian Mixture Model (GMM). To configure and use `GMMDetector`, follow the steps below. First, instantiate the detector with the following parameters:
- `extractors` - a sequence of Extractor objects that process your data. Currently, only `FirstOrderExtractor` is accepted.
- `n_components` - the number of Gaussian components in the mixture model. This controls the complexity of the model and how finely it can separate data clusters.
- `outlier_quantile` - the quantile threshold that determines what is considered an outlier. Data points whose likelihood falls below this quantile are classified as outliers.
- `show_progress` - if set to `True`, a progress bar is displayed during feature extraction to visualize operation progress.

Next, fit the detector by calling its `.fit` method on reference data. Ensure that the data you want to model comes in the form of a PyTorch `DataLoader` object.
Finally, detect outliers in a batch by calling the `.detect()` method. This method returns a named tuple with:

- `img_features` - the extracted features for each image in the batch.
- `outliers` - a boolean vector indicating whether each sample in the batch is an outlier.

In this example, we apply the detector to the NoduleMNIST dataset.
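The fit/detect workflow above can be sketched conceptually with scikit-learn's `GaussianMixture`. This is not ObzAI's API, and the variable names are illustrative; it only shows the underlying idea: model the reference feature distribution, set a likelihood threshold at a low quantile, and flag batch samples below it.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(42)

# Reference features (e.g., first-order stats of reference images): two clusters.
ref = np.vstack([
    rng.normal(0.0, 1.0, size=(200, 3)),
    rng.normal(8.0, 1.0, size=(200, 3)),
])

# Fit: model the reference distribution, then place the outlier threshold
# at a low quantile of the reference log-likelihoods (outlier_quantile=0.01).
gmm = GaussianMixture(n_components=2, random_state=0).fit(ref)
threshold = np.quantile(gmm.score_samples(ref), 0.01)

# Detect: samples whose log-likelihood falls below the threshold are outliers.
batch = np.vstack([
    rng.normal(0.0, 1.0, size=(5, 3)),   # in-distribution samples
    np.full((2, 3), 50.0),               # far out-of-distribution samples
])
outliers = gmm.score_samples(batch) < threshold
assert outliers[-2:].all()               # the two extreme samples are flagged
```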
`XAITool` provides several explainability methods:

- `cdam_tool` - an excellent explainability method, highly discriminative with regard to the target class.
- `smooth_grad_tool` - a classical and simple XAI method.
- `attention_tool` - a classical way to inspect ViT-like models.

`XAIEval` provides metrics for evaluating explanations.
`fidelity_tool` measures how accurately a given XAI method reflects the model's true decision process. It does this by systematically perturbing input features based on their importance scores and observing the resulting change in model performance.
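A minimal perturbation-based fidelity check might look like the following. This is an illustrative sketch, not the `fidelity_tool` implementation: a toy linear model with gradient×input attributions, where deleting features in order of importance should degrade the output faster than deleting them in random order.

```python
import numpy as np

rng = np.random.default_rng(0)
w = rng.uniform(0.5, 2.0, size=20)   # toy linear "model": f(x) = w @ x
x = rng.uniform(0.5, 2.0, size=20)   # all contributions positive, so deleting
                                     # a feature can only lower the output

def mean_deletion_drop(order):
    """Zero out features in the given order; average the output drop over the curve."""
    x_pert = x.copy()
    base = w @ x
    drops = []
    for i in order:
        x_pert[i] = 0.0                       # perturb (delete) the feature
        drops.append(base - w @ x_pert)       # how much the output fell so far
    return float(np.mean(drops))

importance = w * x                            # gradient*input (exact for a linear model)
faithful = np.argsort(importance)[::-1]       # most important features first
random_order = rng.permutation(20)

# A faithful ranking degrades the model output faster than a random one.
assert mean_deletion_drop(faithful) >= mean_deletion_drop(random_order)
```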
`compactness_tool` evaluates how sparse and concentrated the importance scores are. A more compact set of importance scores is often easier for humans to interpret, as it highlights the most relevant features in a concise manner.
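One simple way to quantify compactness is the share of total importance mass carried by the top-k features; a hypothetical sketch (not the `compactness_tool` implementation):

```python
import numpy as np

def topk_mass(importance: np.ndarray, k: int = 5) -> float:
    """Share of total importance carried by the k largest scores (1.0 = maximally compact)."""
    a = np.sort(np.abs(importance))[::-1]
    return float(a[:k].sum() / a.sum())

sparse = np.zeros(100)
sparse[:3] = [5.0, 3.0, 1.0]        # a few dominant features
diffuse = np.ones(100)              # importance spread evenly

assert topk_mass(sparse) == 1.0     # all mass sits in the top-5 features
assert topk_mass(diffuse) == 0.05   # only 5/100 of the mass is in the top-5
```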
By using these tools, you can better understand and compare the effectiveness and interpretability of different XAI approaches.
First, instantiate both evaluation methods: