EchoNet-Labs is an end-to-end deep learning model for predicting 14 different biomarkers and lab values from echocardiogram videos.
For more details, see the accompanying paper:

Deep learning evaluation of biomarkers from echocardiogram videos. Hughes JW, Yuan N, He B, Ouyang J, Ebinger J, Botting P, Lee J, Theurer J, Tooley JE, Nieman K, Lungren MP, Liang DH, Schnittger I, Chen JH, Ashley EA, Cheng S, Ouyang D, Zou JY. EBioMedicine, October 14, 2021. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8524103/
EchoNet-Labs performs well at predicting a range of lab values, both on data from the medical system where it was trained and on data from other medical centers.
First, clone this repository and enter the directory by running:
```
git clone https://github.com/echonet/labs.git
cd labs
```
EchoNet-Labs is implemented for Python 3, and depends on the following packages:
- NumPy
- PyTorch
- Torchvision
- OpenCV
- skimage
- sklearn
- tqdm
EchoNet-Labs and its dependencies can be installed by navigating to the cloned directory and running:

```
pip install --user .
```
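You can confirm the installation by importing the package:

```
python3 -c "import echonet"
```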
The input of EchoNet-Labs is an apical-4-chamber-view echocardiogram video of any length. The easiest way to run our code is to use videos from our dataset, but we also provide a notebook, `ConvertDICOMToAVI.ipynb`, in EchoNet-Dynamic to convert DICOM files to the AVI files used as input to EchoNet-Dynamic and EchoNet-Labs. The notebook deidentifies the video by cropping out information outside of the ultrasound sector, resizes the input video, and saves the video in AVI format.
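If you would rather script the conversion, the following is a rough sketch of the same idea (not the official notebook): it assumes `pydicom` plus OpenCV, a multi-frame ultrasound DICOM, and a 112x112 output at 50 fps, and it omits the deidentification cropping that the notebook performs.

```
# Rough DICOM-to-AVI sketch; size/fps are assumptions, and the
# deidentification cropping from ConvertDICOMToAVI.ipynb is omitted.
import cv2
import pydicom

ds = pydicom.dcmread("echo.dcm")
frames = ds.pixel_array  # (num_frames, height, width) or (..., 3)

out = cv2.VideoWriter("echo.avi", cv2.VideoWriter_fourcc(*"MJPG"), 50, (112, 112))
for frame in frames:
    if frame.ndim == 2:  # grayscale -> 3 channels for the writer
        frame = cv2.cvtColor(frame, cv2.COLOR_GRAY2BGR)
    out.write(cv2.resize(frame, (112, 112)))
out.release()
```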
By default, EchoNet-Labs assumes that a copy of the data is saved in a folder named `a4c-video-dir/` in this directory. This path can be changed by creating a configuration file named `echonet.cfg` (an example configuration file is `example.cfg`). The path can also be overridden as an argument to `echonet.utils.video.run`.
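For example, a one-line `echonet.cfg` pointing at a custom data directory could look like the following (the `DATA_DIR` key is an assumption based on the format of `example.cfg`):

```
DATA_DIR = /path/to/a4c-video-dir/
```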
EchoNet-Labs trains models to predict lab values from both full video data and ablated input data, to better understand which features are necessary to make predictions. For example, the following trains a model to predict log-transformed BNP (`logBNP`) from full videos:
cmd="import echonet; echonet.utils.video.run(modelname=\"r2plus1d_18\",
tasks=\"logBNP\",
frames=32,
period=2,
pretrained=True,
batch_size=8)"
python3 -c "${cmd}"
This creates a directory in `output/video`, which will contain:

- `log.csv`: training and validation losses
- `best.pt`: checkpoint of weights for the model with the lowest validation loss
- `valid_predictions.csv`: estimates of logBNP on the validation set

Running again with `test=True` will produce `test_predictions.csv`, the corresponding estimates on the test set.
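For example, rerunning the command above with `test=True`:

```
cmd="import echonet; echonet.utils.video.run(modelname=\"r2plus1d_18\",
    tasks=\"logBNP\",
    frames=32,
    period=2,
    pretrained=True,
    batch_size=8,
    test=True)"
python3 -c "${cmd}"
```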
Setting `segmentation_mode="only"` trains and validates a model solely on segmentations produced by EchoNet-Dynamic (the segmentations need to be pre-generated). Setting `segmentation_mode="both"` trains and validates a model with only the left ventricle visible. Setting `single_repeated=True` trains the video model on a single repeated frame of input. An example ablation run is sketched below.
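For instance, to train the segmentation-only ablation, the following should work, assuming `segmentation_mode` is passed to `echonet.utils.video.run` like the other keyword arguments:

```
cmd="import echonet; echonet.utils.video.run(modelname=\"r2plus1d_18\",
    tasks=\"logBNP\",
    frames=32,
    period=2,
    pretrained=True,
    batch_size=8,
    segmentation_mode=\"only\")"
python3 -c "${cmd}"
```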
Deep learning evaluation of biomarkers from echocardiogram videos. J. Weston Hughes, Neal Yuan, Bryan He, Jiahong Ouyang, Joseph Ebinger, Patrick Botting, Jasper Lee, James E. Tooley, Koen Nieman, Matthew P. Lungren, David Liang, Ingela Schnittger, Robert A. Harrington, Jonathan H. Chen, Euan Ashley, Susan Cheng, David Ouyang, James Zou. EBioMedicine, October 14, 2021.

Video-based AI for beat-to-beat assessment of cardiac function. David Ouyang, Bryan He, Amirata Ghorbani, Neal Yuan, Joseph Ebinger, Curt P. Langlotz, Paul A. Heidenreich, Robert A. Harrington, David H. Liang, Euan A. Ashley, and James Y. Zou. Nature, March 25, 2020.