Rounding scheme for neuroglancer fetching needs revision #642
Conversation
Codecov Report

```
@@            Coverage Diff             @@
##             main     #642      +/-   ##
==========================================
- Coverage   45.34%   45.32%   -0.03%
==========================================
  Files          75       75
  Lines        7229     7228       -1
==========================================
- Hits         3278     3276       -2
- Misses       3951     3952       +1
```
This is by design, no? We precisely do not want the user to deal in voxel space. So when a bbox is provided, we try to satisfy the user's request based on:
This will produce some weird edge cases, which are significantly magnified when the user provides a bbox with a dimension similar to or smaller than the voxel size. In your case, if you definitely want a single-voxel thickness, I would argue you should be fetching from (36.58,-0.09,8.33)mm to (38.64,-0.09,11.71)mm.
The two guidelines above aimed to do just that (though this is perhaps poorly documented).
This is already effectively not possible, since the 1um slices cannot contain a perfect cube larger than 1 x 1 x 1.
This will create weird edge cases where we now need to make a decision if the bbox sits exactly in the middle of two voxels. We will have to create at least one more rule to disambiguate this edge case. Edit: I can't comment on the comment at the moment. I can take a look on Monday.
(36.58,-0.10,8.33)mm to (38.64,-0.08,11.71)mm corresponds to a voxel bbox from (3330.127590551181, 3495.9708333333338, 5164.0913700787405) to (3170.4425511811023, 3494.9708333333338, 5066.7685354330715). While the difference along the y axis is exactly one voxel length, the question is how exactly siibra should handle a bbox that is one voxel long but intersects two voxels. Currently, siibra returns both voxels by taking the floor of the lower bound and the ceil of the higher bound. One could use round for both bounds to return exactly one voxel, but this standard is not yet defined in siibra. This PR will be morphed to determine this standard.
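For illustration, here is a minimal sketch (plain NumPy, not siibra's actual fetching code) of how the two rounding schemes treat the y-axis values quoted above:

```python
import numpy as np

# y-axis voxel coordinates of the bbox above: exactly one voxel length apart
y_min, y_max = 3494.9708333333338, 3495.9708333333338

# current behaviour: floor the lower bound and ceil the upper bound,
# so a span touching two voxels selects both of them
floor_ceil_extent = int(np.ceil(y_max)) - int(np.floor(y_min))  # -> 2 voxels

# alternative: round both bounds, which selects exactly one voxel here
round_extent = int(np.round(y_max)) - int(np.round(y_min))      # -> 1 voxel

print(floor_ceil_extent, round_extent)  # 2 1
```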
Removing the dim increase is necessary. However, imo, the voxel selection calculation is correct. Please use debug mode to test the provided bounding boxes; if a bounding box touches more than one voxel, both are selected. I could not reproduce a naturally occurring example that, say, selects 2 voxels due to floating-point errors.
When fetching an axis-aligned flat volume of interest that translates to exactly 1 voxel "thickness" in one dimension, the NeuroglancerScale fetching method rounds up to a 2-voxel array. In my example, fetching in BigBrain space with
Bounding box from (36.58,-0.10,8.33)mm to (38.64,-0.08,11.71)mm in BigBrain microscopic template (histology) space
retrieved an image array of shape (160, 2, 99). This is critical, since several analysis workflows that I have implemented with siibra rely on the possibility to fetch a 2D patch from properly sized bounding boxes in physical coordinates, and would now force me to resolve the ambiguity downstream.
The problem is caused by taking the `floor` of the minpoint and the `ceil` of the maxpoint and computing the target shape as their difference, resulting in a doubling of integer rounding in the extreme case. The suggested change instead computes the rounded target shape as the `ceil` of the difference of the unrounded max- and minpoints, and adds it to the `floor`ed minpoint to obtain the rounded maxpoint, therefore only ever triggering a single rounding effect. Using the fix, the aforementioned bounding box yields an array of shape `(160, 1, 98)` after fetching, as expected. The previous explicit additional clipping of voxel dimensions smaller than 1 seemed incorrect to me, and did not influence the result after the fix anyway, so I removed it.
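To make the difference between the two computations concrete, here is a small sketch (plain NumPy with illustrative, exactly representable numbers, not the actual NeuroglancerScale code):

```python
import numpy as np

# hypothetical unrounded voxel coordinates of one bbox axis,
# spanning exactly one voxel length
vmin, vmax = 3494.75, 3495.75

# previous scheme: round min and max independently and take their
# difference as the target shape -> two roundings, extent becomes 2
old_min = int(np.floor(vmin))
old_max = int(np.ceil(vmax))
old_shape = old_max - old_min          # -> 2

# proposed scheme: floor the minpoint, ceil the unrounded extent once,
# and derive the rounded maxpoint from the two -> extent stays 1
new_min = int(np.floor(vmin))
new_shape = int(np.ceil(vmax - vmin))  # -> 1
new_max = new_min + new_shape          # -> 3495
```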
The proposed change fixes the said problem, but it should be carefully thought through and tested on other examples. I suggest defining a range of standard and border cases of VOIs to double-check that fetching works as expected; a possible set of cases is sketched below:
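A possible starting point for such a test matrix (a rough sketch using exactly representable fractions; `compute_voxel_shape` is a hypothetical stand-in for the NeuroglancerScale shape computation, and the expected values follow the single-rounding scheme proposed here):

```python
import numpy as np
import pytest

def compute_voxel_shape(vmin, vmax):
    """Hypothetical stand-in: proposed single-rounding scheme (ceil of the extent)."""
    vmin = np.asarray(vmin, dtype=float)
    vmax = np.asarray(vmax, dtype=float)
    return np.ceil(vmax - vmin).astype(int)

@pytest.mark.parametrize("vmin, vmax, expected", [
    ((0.0, 0.0, 0.0), (10.0, 10.0, 10.0), (10, 10, 10)),        # aligned with the voxel grid
    ((0.25, 0.25, 0.25), (10.25, 10.25, 10.25), (10, 10, 10)),  # shifted bbox, identical extent
    ((0.5, 0.5, 0.5), (1.5, 1.5, 1.5), (1, 1, 1)),              # exactly one voxel thick
    ((0.5, 0.5, 0.5), (2.25, 2.25, 2.25), (2, 2, 2)),           # extent between one and two voxels
])
def test_voxel_shape(vmin, vmax, expected):
    assert tuple(compute_voxel_shape(vmin, vmax)) == tuple(expected)
```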