InstSeg34405-GYN

This repository supplements the paper "Instrument Recognition in Laparoscopy for Technical Skill Assessment". The implementation generates segmentation masks for each instrument visible in gynecologic laparoscopy videos.

Provided artifacts: installation instructions, training and validation scripts, a video demo script, download links for the dataset and the trained model (in progress), and experimental results.

Installation

  • Set up a Python 3 environment
  • Install PyTorch 1.1.0 and TorchVision 0.3.0
  • Clone this repository together with its submodules
  • Install the remaining dependencies:
pip install -r ./requirements.txt
# Hints: Cloning, Pulling and Pushing with submodules
# clone this repo with submodules
git clone --recursive [URL to Git repo]
# pull all changes to this repo and to submodules
git pull --recurse-submodules
# pull all changes to submodules only
git submodule update --remote
# push all changes
git submodule foreach git push origin master
# add submodule afterwards
cd ./
git submodule add -b master --name torchvision_mrcnn https://github.com/skletz/torchvision-mrcnn.git model/torchvision_mrcnn
git submodule init
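
After installation, it is worth verifying that the pinned versions are actually in place. A minimal sanity-check sketch (note that local version strings may carry build suffixes such as "+cu100"):

# Sanity check for the pinned environment.
import torch
import torchvision

print("PyTorch:", torch.__version__)            # expected: 1.1.0
print("TorchVision:", torchvision.__version__)  # expected: 0.3.0
print("CUDA available:", torch.cuda.is_available())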

Training and Validation

  • Set up env variables
cd experiment/.envs
nano .train
# set env for EXP_ROOT_DIR
EXP_ROOT_DIR=/path/to
  • Set the path to the dataset's input directory
DATA=${EXP_ROOT_DIR}/datasets
  • Set the path to the model's output directory
MODEL=${EXP_ROOT_DIR}/experiments
  • Start the training script
python train.py --model_dir=${MODEL} --data_dir=${DATA}
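
For orientation, here is a minimal sketch of a single training step along these lines, based on the Mask R-CNN implementation that TorchVision 0.3.0 ships (the class count and the random stand-in batch are illustrative assumptions, not the repository's actual pipeline):

# Minimal Mask R-CNN training sketch; not the repo's actual train.py.
import argparse
import torch
import torchvision

def stand_in_batch(device):
    # Stand-in for one real batch: a random 3x480x640 frame with one
    # hypothetical box, label, and mask.
    image = torch.rand(3, 480, 640, device=device)
    target = {
        "boxes": torch.tensor([[100.0, 100.0, 200.0, 200.0]], device=device),
        "labels": torch.ones(1, dtype=torch.int64, device=device),
        "masks": torch.zeros(1, 480, 640, dtype=torch.uint8, device=device),
    }
    return [image], [target]

def main():
    parser = argparse.ArgumentParser()
    parser.add_argument("--model_dir", required=True)
    parser.add_argument("--data_dir", required=True)
    args = parser.parse_args()

    device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
    # num_classes is a guess (instrument classes + background).
    model = torchvision.models.detection.maskrcnn_resnet50_fpn(
        pretrained=False, num_classes=4)
    model.to(device)
    model.train()
    optimizer = torch.optim.SGD(model.parameters(), lr=0.005,
                                momentum=0.9, weight_decay=0.0005)

    # One optimization step; a real script would loop over a DataLoader
    # built from --data_dir for 50 epochs.
    images, targets = stand_in_batch(device)
    loss_dict = model(images, targets)  # dict of per-head losses
    loss = sum(loss_dict.values())
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    torch.save({"state_dict": model.state_dict()},
               args.model_dir + "/checkpoint.pth.tar")

if __name__ == "__main__":
    main()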

Download the Dataset

# Download link
# In progress ...

Usage

  • Set up env variables
cd demo/.envs
nano .run_video
# set env for CUR_ROOT_DIR
CUR_ROOT_DIR=/path/to
INPUT_VDO=${CUR_ROOT_DIR}/video.mp4
OUTPUT_DIR=${CUR_ROOT_DIR}/output/
MODEL=${CUR_ROOT_DIR}/path/to/model/instseg34405-gyn.pth.tar
  • Start the script
python run_video.py --input=${INPUT_VDO} --output=${OUTPUT_DIR} --model=${MODEL}
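
For reference, a minimal sketch of such a per-frame inference loop, using OpenCV for video I/O (the checkpoint key "state_dict", the class count, and the 0.5 thresholds are assumptions, not the repository's actual run_video.py):

# Minimal per-frame inference sketch; not the repo's actual run_video.py.
import argparse
import cv2
import numpy as np
import torch
import torchvision

def main():
    parser = argparse.ArgumentParser()
    parser.add_argument("--input", required=True)
    parser.add_argument("--output", required=True)
    parser.add_argument("--model", required=True)
    args = parser.parse_args()

    device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
    # Architecture and num_classes must match training (num_classes is a guess).
    model = torchvision.models.detection.maskrcnn_resnet50_fpn(
        pretrained=False, num_classes=4)
    checkpoint = torch.load(args.model, map_location=device)
    model.load_state_dict(checkpoint["state_dict"])
    model.to(device)
    model.eval()

    capture = cv2.VideoCapture(args.input)
    frame_idx = 0
    while True:
        ok, frame = capture.read()
        if not ok:
            break
        # BGR uint8 -> RGB float tensor in [0, 1], as the model expects.
        rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
        tensor = torch.from_numpy(rgb).permute(2, 0, 1).float() / 255.0
        with torch.no_grad():
            prediction = model([tensor.to(device)])[0]
        # Overlay every mask with score > 0.5 in red at 50% opacity.
        for mask in prediction["masks"][prediction["scores"] > 0.5]:
            binary = (mask[0] > 0.5).cpu().numpy()
            frame[binary] = (0.5 * frame[binary]
                             + 0.5 * np.array([0, 0, 255])).astype(np.uint8)
        cv2.imwrite("%s/frame_%06d.png" % (args.output, frame_idx), frame)
        frame_idx += 1
    capture.release()

if __name__ == "__main__":
    main()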

Download the Model

The resulting model is the checkpoint at epoch 48, i.e., after 5,856 iterations (122 batches per epoch, 2 images per batch):

# Download link
# In progress ...
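
Once the link is available, the archive can be inspected before use; a minimal sketch, assuming a standard torch.save dictionary (the key layout is unknown, so list it first):

# Inspect the downloaded checkpoint; key names vary by repository.
import torch

checkpoint = torch.load("instseg34405-gyn.pth.tar", map_location="cpu")
if isinstance(checkpoint, dict):
    print("Top-level keys:", list(checkpoint.keys()))
else:
    print("Checkpoint type:", type(checkpoint))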

Experimental Evaluation

Quantitative Results

Figures: average precision and loss during training for 50 epochs (bbox = bounding box, segm = segmentation mask, t = training, tv = validation); individual training errors; individual validation errors.

Qualitative Results

Figures: a grasping example and a coagulation example.

Citation

If you use this code base or the model trained on instseg34405-gyn in your work, please cite:

@inproceedings{kletz2020instseg34405,
  author="Kletz, Sabrina and Schoeffmann, Klaus and Leibetseder, Andreas
  and Benois-Pineau, Jenny and Husslein, Heinrich",
  title="Instrument Recognition in Laparoscopy for Technical Skill Assessment",
  year="2020",
  pages="589--600",
  isbn="978-3-030-37734-2",
  doi="10.1007/978-3-030-37734-2_48",
  url="https://link.springer.com/chapter/10.1007/978-3-030-37734-2_48",
  booktitle="MultiMedia Modeling",
  editor="Ro, Yong Man and Cheng, Wen-Huang and Kim, Junmo and Chu, Wei-Ta and Cui, Peng and Choi, Jung-Woo and Hu, Min-Chun and De Neve, Wesley",
  publisher="Springer International Publishing",
  address="Cham"
}

Contact

For questions about our paper or code, please contact Sabrina Kletz.
