
dataloader's workers are out of shared memory #15

Open
bjzxnsjj opened this issue Aug 2, 2024 · 1 comment


bjzxnsjj commented Aug 2, 2024

Hello,

I am very interested in trying out your placenta slice inference pipeline, but I lack a computer science background. Currently, I am attempting to run cell_inference.py on my personal PC using the sample_wsi demo data. I seem to be encountering an issue with insufficient memory, as indicated by the following error message:

RuntimeError: DataLoader worker (pid 80447) is killed by signal: Bus error. It is possible that dataloader's workers are out of shared memory. Please try to raise your shared memory limit.
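(Note, in case it helps others hitting the same message: PyTorch DataLoader worker processes pass loaded batches back to the main process through shared memory, which on Linux is the /dev/shm mount, so this error usually points at that mount being small, e.g. the 64 MB default inside a Docker container, rather than at system RAM being exhausted. A minimal sketch for checking its size, assuming a Linux host:)

```python
# Minimal sketch: report how much POSIX shared memory is available.
# Assumes a Linux host where shared memory is mounted at /dev/shm;
# inside Docker the default is often only 64 MB (raised with --shm-size).
import shutil

total, used, free = shutil.disk_usage("/dev/shm")
print(f"/dev/shm total: {total / 1e9:.2f} GB, free: {free / 1e9:.2f} GB")
```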

I then modified the command to --nuc-num-workers 0 --cell-num-workers 0, and the output was as follows:

generating tile coordinates
rows: 11, columns: 12
100%|██████████| 11/11 [00:00<00:00, 86399.52it/s]
loading datasets
datasets loaded
creating dataloader
dataloader ready
0%|

The main issue is that the process seems to be stuck, with the progress bar not advancing even after a long wait. Therefore, I would like to know the memory requirements for running the inference and whether GPU support is necessary.
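(For reference, whether PyTorch can see a GPU at all, and how much memory it reports, can be checked with a couple of standard calls; a minimal sketch is below. Whether cell_inference.py falls back to CPU when no GPU is found is an assumption here, not something verified against the repository.)

```python
# Minimal sketch: check what hardware PyTorch detects before running
# cell_inference.py. Whether the script can run on CPU only is an
# assumption, not verified against the repository.
import torch

if torch.cuda.is_available():
    props = torch.cuda.get_device_properties(0)
    print(f"GPU: {props.name}, {props.total_memory / 1e9:.1f} GB memory")
else:
    print("No CUDA-capable GPU detected; PyTorch would run on CPU")
```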

Thank you!

@ChristofferNellaker

Sorry for the delayed response. I am not sure what it does when the workers are set to 0, so I would suggest trying with a minimum of 1. These settings are basically how many CPU threads try to help load data onto the GPU, so it is not immediately obvious to me how they run out of shared memory unless they all slam the remaining RAM you have. Try with one, as that should be just one thread loading an image and passing it to the GPU.
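For context, here is a minimal sketch of the knob those --nuc-num-workers / --cell-num-workers flags presumably control, assuming the script wraps a standard torch.utils.data.DataLoader (the dummy dataset below is a stand-in, not the repository's tile dataset). Each worker is a separate process rather than a thread, and workers hand finished batches back to the main process through shared memory, which is why a small /dev/shm can trigger the bus error above even when plenty of RAM is free.

```python
# Minimal sketch of DataLoader workers, assuming the inference script wraps
# a standard torch.utils.data.DataLoader (the dataset is a dummy stand-in,
# not the repository's WSI tile dataset).
import torch
from torch.utils.data import DataLoader, TensorDataset

dataset = TensorDataset(torch.randn(64, 3, 256, 256))  # dummy "tiles"

loader = DataLoader(
    dataset,
    batch_size=8,
    num_workers=1,    # one background worker process loading tiles
    pin_memory=True,  # page-locked buffers speed up copies to the GPU
)

device = "cuda" if torch.cuda.is_available() else "cpu"
for (batch,) in loader:
    batch = batch.to(device)  # main process ships each batch to the GPU
    # ... model(batch) would run here ...
```

With num_workers set to 0, loading happens in the main process and the shared-memory path is bypassed entirely, which is consistent with the original bus error disappearing once both flags were set to 0.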
