# ProtoTrees: Neural Prototype Trees for Interpretable Fine-grained Image Recognition

This repository presents the PyTorch code for Neural Prototype Trees (ProtoTrees), published at CVPR 2021: ["Neural Prototype Trees for Interpretable Fine-grained Image Recognition"](https://openaccess.thecvf.com/content/CVPR2021/html/Nauta_Neural_Prototype_Trees_for_Interpretable_Fine-Grained_Image_Recognition_CVPR_2021_paper.html). Check out our [video](https://videos.mysimpleshow.com/qyZYnaTBHv) for a short introduction!
A ProtoTree is an intrinsically interpretable deep learning method for fine-grained image recognition. It includes prototypes in an interpretable decision tree to faithfully visualize the entire model. Each node in our binary tree contains a trainable prototypical part. The presence or absence of this prototype in an image determines the routing through a node. Decision making is therefore similar to human reasoning: Does the bird have a red throat? And an elongated beak? Then it's a hummingbird!

![Example of a ProtoTree.](images/prototree_example.png "ProtoTree")

The figure shows an example of a ProtoTree. A ProtoTree is a globally interpretable model that faithfully explains its entire behaviour (left, partially shown). Additionally, the reasoning process for a single prediction can be followed (right): the presence of a red chest and a black wing, and the absence of a black stripe near the eye, identify the bird as a Scarlet Tanager.
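
To make the routing concrete, here is a minimal PyTorch sketch of a single internal node, assuming the paper's similarity measure (the exponential of the negative squared distance between the prototype and the best-matching patch of the convolutional feature map). The class name, shapes and initialisation are illustrative and not the repository's actual implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class PrototypeNode(nn.Module):
    """Illustrative sketch of one internal ProtoTree node."""

    def __init__(self, channels: int, proto_size: int = 1):
        super().__init__()
        # One trainable prototypical part of shape (1, channels, H_p, W_p).
        self.prototype = nn.Parameter(torch.randn(1, channels, proto_size, proto_size))

    def forward(self, features: torch.Tensor) -> torch.Tensor:
        # features: (batch, channels, H, W) from the CNN backbone.
        # Squared L2 distance to every patch via ||f - p||^2 = ||f||^2 - 2*f.p + ||p||^2.
        ones = torch.ones_like(self.prototype)
        f_sq = F.conv2d(features ** 2, ones)         # per-patch sum of squared features
        cross = F.conv2d(features, self.prototype)   # per-patch inner product with the prototype
        p_sq = (self.prototype ** 2).sum()
        distances = F.relu(f_sq - 2 * cross + p_sq)  # (batch, 1, H', W')
        # "Presence" of the prototype = similarity of the best-matching patch.
        min_dist = distances.flatten(1).min(dim=1).values
        p_right = torch.exp(-min_dist)  # in (0, 1]: soft probability of routing right
        return p_right                  # 1 - p_right is routed to the left child
```

A full ProtoTree chains such nodes in a binary tree and weights the class distributions in its leaves by the resulting path probabilities; see the paper for training and pruning details.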

## Training a ProtoTree

1. Create a folder `./runs`.

A ProtoTree can be trained by running `main_tree.py` with arguments. An example for CUB: `main_tree.py --epochs 100 --log_dir ./runs/protoree_cub --dataset CUB-200-2011 --lr 0.001 --lr_block 0.001 --lr_net 1e-5 --num_features 256 --depth 9 --net resnet50_inat --freeze_epochs 30 --milestones 60,70,80,90,100`. To speed up training, increase the number of workers of the [DataLoaders](https://github.com/M-Nauta/ProtoTree/blob/main/util/data.py#L39) by setting `num_workers` to a positive integer (a suitable value depends on your available memory).
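
As a sketch of where `num_workers` applies (using a dummy dataset here; the repository's actual loaders are built in `util/data.py`):

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

# Dummy stand-in for a training set; in the repository the datasets are built in util/data.py.
trainset = TensorDataset(torch.randn(32, 3, 224, 224), torch.randint(0, 200, (32,)))

trainloader = DataLoader(
    trainset,
    batch_size=8,
    shuffle=True,
    pin_memory=torch.cuda.is_available(),
    num_workers=4,  # raise this if you have spare CPU cores and memory
)
```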

Check your `--log_dir` to keep track of the training progress. This directory contains `log_epoch_overview.csv`, which reports the test accuracy, mean training accuracy and mean loss per epoch. `log_train_epochs_losses.csv` reports the loss value and training accuracy per batch iteration, and `log.txt` logs additional info.
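
For a quick look at the per-epoch log, the CSV can be inspected with e.g. pandas. The path below reuses the example `--log_dir` from above, and the exact column names may differ, so this is illustrative only:

```python
import pandas as pd

# Illustrative: adjust the path to your own --log_dir.
overview = pd.read_csv("./runs/protoree_cub/log_epoch_overview.csv")

print(overview.columns.tolist())  # inspect the actual column names first
print(overview.tail())            # accuracy and loss for the most recent epochs
```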

The resulting visualized ProtoTree (i.e. the *global explanation*) is saved as a PDF in your `--log_dir` under `pruned_and_projected/treevis.pdf`. NOTE: this PDF can get large, which is not supported by Adobe Acrobat Reader. Open it with e.g. Google Chrome or Apple Preview.

To train and evaluate an ensemble of ProtoTrees, run `main_ensemble.py` with the same arguments as for `main_tree.py`, but include `--nr_trees_ensemble` to indicate the number of trees in the ensemble.
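
For example (illustrative only, reusing the CUB arguments from above; the ensemble size and log directory are arbitrary): `main_ensemble.py --epochs 100 --log_dir ./runs/protoree_cub_ensemble --dataset CUB-200-2011 --lr 0.001 --lr_block 0.001 --lr_net 1e-5 --num_features 256 --depth 9 --net resnet50_inat --freeze_epochs 30 --milestones 60,70,80,90,100 --nr_trees_ensemble 5`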

### Local explanations

A trained ProtoTree is intrinsically interpretable and globally explainable. It can also *locally* explain a single prediction. For example, run the following command to explain a single test image: