GraphMamba: Whole Slide Image Classification Meets Graph-driven Selective State Space Model

Tingting Zheng, Hongxun Yao, Sicheng Zhao, Kui Jiang, Yi Xiao

Abstract: Multi-instance learning (MIL) has demonstrated promising performance in whole slide image (WSI) analysis. However, existing transformer-based methods struggle to balance global representation capability against quadratic complexity, particularly when handling millions of instances. Recently, the selective state space model (Mamba) has emerged as a promising alternative for modeling long-range dependencies with linear complexity. Nonetheless, WSIs remain challenging for Mamba due to its inability to capture the complex local tissue and structural patterns that are crucial for accurate tumor region recognition. To this end, we approach WSI classification from a graph-based perspective and present GraphMamba, a novel method that constructs multi-level graphs across instances. GraphMamba involves two key components: an intra-group graph mamba (IGM) to grasp instance-level dependencies, and a cross-group graph mamba (CGM) for exploring group-level relationships. In particular, before aggregating group features into a comprehensive bag representation, CGM utilizes a cross-group feature sampling scheme to extract the most informative features across groups, enabling compact and discriminative representations. Extensive experiments on four datasets demonstrate that GraphMamba outperforms the state-of-the-art ACMIL method by 0.5%, 3.1%, 2.6%, and 3.0% in accuracy on the TCGA BRCA, TCGA Lung, TCGA ESCA, and BRACS datasets, respectively.
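To make the two-stage flow described in the abstract concrete, below is a minimal sketch: instances are split into groups, an intra-group module models instance-level dependencies, the most informative features are sampled from each group, and a cross-group module aggregates them into a bag prediction. Everything here is an assumption for illustration only: the grouping rule, the top-k sampling, the module names, and especially the use of nn.GRU as a generic stand-in for the graph-driven Mamba blocks. It is not this repository's implementation.

import torch
import torch.nn as nn

class GroupMixer(nn.Module):
    """Stand-in for a graph-driven Mamba block: maps a sequence of
    instance features to contextualized per-instance features."""
    def __init__(self, dim):
        super().__init__()
        self.rnn = nn.GRU(dim, dim, batch_first=True)

    def forward(self, x):                                # x: (1, n, dim)
        out, _ = self.rnn(x)
        return out

class GraphMambaSketch(nn.Module):
    def __init__(self, dim=512, n_groups=8, k=32, n_classes=2):
        super().__init__()
        self.n_groups, self.k = n_groups, k
        self.igm = GroupMixer(dim)                       # intra-group modeling (IGM)
        self.cgm = GroupMixer(dim)                       # cross-group modeling (CGM)
        self.score = nn.Linear(dim, 1)                   # informativeness score for sampling
        self.head = nn.Linear(dim, n_classes)

    def forward(self, feats):                            # feats: (n_instances, dim)
        groups = feats.chunk(self.n_groups, dim=0)       # split the bag into groups
        sampled = []
        for g in groups:
            h = self.igm(g.unsqueeze(0)).squeeze(0)      # instance-level dependencies
            k = min(self.k, h.size(0))
            idx = self.score(h).squeeze(-1).topk(k).indices
            sampled.append(h[idx])                       # keep the top-k features per group
        z = torch.cat(sampled, dim=0).unsqueeze(0)       # informative features across groups
        z = self.cgm(z).squeeze(0)                       # group-level relationships
        return self.head(z.mean(dim=0))                  # aggregate into a bag prediction

logits = GraphMambaSketch()(torch.randn(1000, 512))      # one WSI bag of 1000 instances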

Update

  • [2025/03/10] Uploaded the instance grouping and graph construction code

Pre-requisites:

  • Linux (Tested on Ubuntu 18.04)
  • NVIDIA GPU (Tested on 3090)

Dependencies:

torch
torchvision
numpy
h5py
scipy
scikit-learn
pandas
nystrom_attention
admin_torch
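
These are available from PyPI (note that the package is named scikit-learn there, and that pip treats underscores and hyphens in package names interchangeably), so a plausible one-line install is:

pip install torch torchvision numpy h5py scipy scikit-learn pandas nystrom-attention admin-torch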

The data used for training, validation and testing are expected to be organized as follows:

DATA_ROOT_DIR/
    ├── DATASET_1_DATA_DIR/
    │   ├── pt_files
    │   │   ├── slide_1.pt
    │   │   ├── slide_2.pt
    │   │   └── ...
    │   └── h5_files
    │       ├── slide_1.h5
    │       ├── slide_2.h5
    │       └── ...
    ├── DATASET_2_DATA_DIR/
    │   ├── pt_files
    │   │   ├── slide_a.pt
    │   │   ├── slide_b.pt
    │   │   └── ...
    │   └── h5_files
    │       ├── slide_i.h5
    │       ├── slide_ii.h5
    │       └── ...
    └── ...
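
For reference, here is a minimal sketch of reading one slide under this layout. The directory names follow the tree above; the .h5 dataset key ('coords') follows the common CLAM-style convention for pre-extracted WSI features and is an assumption, not something this repository confirms:

import os
import torch
import h5py

root = "DATA_ROOT_DIR/DATASET_1_DATA_DIR"        # hypothetical path
slide = "slide_1"

# .pt files: pre-extracted patch features, typically a (num_patches, feat_dim) tensor
features = torch.load(os.path.join(root, "pt_files", slide + ".pt"))

# .h5 files: features plus patch coordinates; the 'coords' dataset name assumes
# the CLAM-style convention
with h5py.File(os.path.join(root, "h5_files", slide + ".h5"), "r") as f:
    coords = f["coords"][:]

print(tuple(features.shape), coords.shape)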
