Step 1: Install all dependencies
git clone https://github.com/olopade-lab/sv-pipeline
cd sv-pipeline
bash ./dependencies.sh
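To confirm that the installation succeeded, you can check that Parsl is importable from the active Python environment (a minimal sketch, assuming dependencies.sh installs Parsl for python3):
python3 -c "import parsl; print(parsl.__version__)"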
Note: If you are using Lumpy, don't forget to add its bin directory to your PATH, as shown below:
export PATH=$PATH:~/lumpy-sv/bin
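You can then verify that the Lumpy executables are visible on your PATH (assuming a standard lumpy-sv build that places lumpy and lumpyexpress in ~/lumpy-sv/bin):
command -v lumpy lumpyexpress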
Step 2: Build your own inputs.json file, or run the get_files.sh script to download test files. If you build your own inputs.json file, replace the one provided in this repository. Specify two inputs in the JSON file for every set of samples, e.g. normal-bam-2 and tumor-bam-2. The script determines the number of samples on its own and builds the samples.tsv file automatically. A sketch of a possible layout is shown after the download command below.
bash ./get_files.sh
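If you build your own inputs.json, the following is a minimal sketch of one possible layout, written here as a shell heredoc. The normal-bam-N / tumor-bam-N key pattern follows the example above, but the paths are placeholders, and any other required fields (such as the reference genome) should be copied from the inputs.json shipped in this repository:
cat > inputs.json <<'EOF'
{
    "normal-bam-1": "/path/to/normal_sample1.bam",
    "tumor-bam-1": "/path/to/tumor_sample1.bam",
    "normal-bam-2": "/path/to/normal_sample2.bam",
    "tumor-bam-2": "/path/to/tumor_sample2.bam"
}
EOF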
Step 3: Run run_caller.py with the inputs.json file as the first argument, the Parsl configuration as the second argument (select the right configuration from the configs folder), and the type of caller as the third argument.
Callers currently available in this script: delly, lumpy
python3 run_caller.py test_inputs.json igsb_jupyter delly
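For example, to run the same test inputs through Lumpy instead, only the caller argument changes (pick whichever Parsl configuration in the configs folder matches your cluster):
python3 run_caller.py test_inputs.json igsb_jupyter lumpy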
Step 1: cd into a folder that contains all the .bam files, the index files, and the genome reference file. If you want to use the test inputs, you may run the following command to download them:
bash ./get_files.sh
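Before moving on, it can help to check that the expected files are present in the current directory (a sketch, assuming the test download uses the genome.fa and HCC1143_ds layout referenced in Step 4 below):
ls genome.fa* HCC1143_ds/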
Step 2: Pull the Docker image.
docker pull dellytools/delly
Step 3: Initialise and run the Docker image. (Note: the command below mounts the present working directory of the user to /root inside the container.)
docker run -it -v $(pwd):/root dellytools/delly
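Inside the container, you can confirm that the mount worked and the input files are visible under /root:
ls /root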
Step 4: Run Delly inside the container:
delly call -o /root/sv.bcf -g /root/genome.fa /root/HCC1143_ds/HCC1143.bam /root/HCC1143_ds/HCC1143_BL.bam
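If you want to inspect the calls in VCF format, you can convert the BCF with bcftools (assuming bcftools is available inside the container; if it is not, run this step on the host instead):
bcftools view /root/sv.bcf > /root/sv.vcf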
Step 5: Exit the container:
exit