Currently, the memory-mapped numpy arrays used in decoding sample ~20k random voxels from the whole brain. NeuroVault resamples images to isotropic 4 mm voxels, which gives ~30k voxels. Using the same scheme in Neurosynth would probably speed up processing a bit, and the results might be slightly more representative with resampled voxels rather than randomly sampled ones.
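A minimal sketch of the two voxel-selection schemes, just to make the proposal concrete. This is not the actual Neurosynth decoding code; `mask_img` is a hypothetical brain-mask image, and it assumes numpy and nilearn are available:

```python
import numpy as np
from nilearn.image import resample_img


def random_voxel_mask(mask_img, n_voxels=20000, seed=0):
    """Current scheme: keep ~20k randomly chosen in-brain voxels."""
    mask = mask_img.get_fdata().astype(bool)
    in_brain = np.flatnonzero(mask)
    rng = np.random.default_rng(seed)
    keep = rng.choice(in_brain, size=min(n_voxels, in_brain.size), replace=False)
    flat = np.zeros(mask.size, dtype=bool)
    flat[keep] = True
    return flat.reshape(mask.shape)


def resampled_voxel_mask(mask_img, voxel_size=4.0):
    """Proposed scheme: resample the mask to isotropic 4 mm voxels
    (NeuroVault's grid, ~30k in-brain voxels) and keep all of them."""
    target_affine = np.diag([voxel_size] * 3)
    resampled = resample_img(mask_img, target_affine=target_affine,
                             interpolation="nearest")
    return resampled.get_fdata().astype(bool)
```

With the 4 mm grid, the memory-mapped arrays would hold every in-mask voxel instead of a random subset, which is where the gain in representativeness would come from.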
Well, realistically, this might take us from ~0.5s per decoding to ~0.3s, so I'm not sure it's worth upping the priority. But next time I'm mucking around with that part of the code, I'll add this. :)