
Best practices for real-time speed/efficiency #6004

Closed · arothenberg opened this issue Mar 9, 2020 · 10 comments

Comments

@arothenberg commented Mar 9, 2020


| Required Info              |                             |
|----------------------------|-----------------------------|
| Camera Model               | D435                        |
| Firmware Version           | 05.12.02.100                |
| Operating System & Version | Win 10                      |
| Platform                   | PC                          |
| SDK Version                | 2.32                        |
| Language                   | C++                         |
| Segment                    | Desktop Windows app for now |

I'm specifically concerned about post-processing and manipulation of frame data.

I was using the standard RealSense support forum, which was good IMO, but when I started bringing in code I was told to switch to GitHub. If GitHub Issues is the wrong place for my concerns, then please let me know.

My concerns are similar to #4026. Unfortunately, the OP switched to MATLAB, which I don't want to do.

I need to track and analyze the x, y, z coordinates of a fast-moving blob. The coordinates and their depths only need to be close, not perfect, since I'll be averaging depths. I need to analyze as many frames per second as possible, doing some averaging and weighting over the blob in each frame.

  1. For post-processing, I am eyeballing the results from the Depth Quality Tool and following the available guides. I assume I'll have to experiment a bit to find the right settings. Is there a typical setup given my concerns?
  2. For frame data, I am going to keep it raw in 16-bit integers rather than converting to human-friendly units, as suggested in #4026 (linked above). I'm also going to use the code in the rs-align-advanced example as my guide, as suggested. A rough sketch of what I mean is below.

Is there anything I'm getting wrong or missing?
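
To make the raw 16-bit approach concrete, here is a minimal sketch of the kind of thing I have in mind (assuming the standard librealsense2 C++ API; the 848x480 @ 90 fps stream mode and the fixed blob bounding box are placeholder assumptions, since the real bounding box would come from my tracker):

```cpp
#include <librealsense2/rs.hpp>
#include <cstdint>
#include <iostream>

int main() {
    rs2::pipeline pipe;
    rs2::config cfg;
    // High-FPS depth mode; adjust to whatever the camera reports it supports.
    cfg.enable_stream(RS2_STREAM_DEPTH, 848, 480, RS2_FORMAT_Z16, 90);
    rs2::pipeline_profile profile = pipe.start(cfg);

    // Raw-units-to-meters factor, queried once rather than per pixel.
    float depth_scale = profile.get_device()
                               .first<rs2::depth_sensor>()
                               .get_depth_scale();

    while (true) {
        rs2::frameset frames = pipe.wait_for_frames();
        rs2::depth_frame depth = frames.get_depth_frame();
        const auto* data = reinterpret_cast<const uint16_t*>(depth.get_data());
        int w = depth.get_width(), h = depth.get_height();

        // Placeholder blob bounding box; a real tracker would supply this.
        int x0 = w / 2 - 20, x1 = w / 2 + 20;
        int y0 = h / 2 - 20, y1 = h / 2 + 20;

        // Average the raw 16-bit values; convert to meters only once at the end.
        uint64_t sum = 0, count = 0;
        for (int y = y0; y < y1; ++y)
            for (int x = x0; x < x1; ++x) {
                uint16_t d = data[y * w + x];
                if (d != 0) { sum += d; ++count; }  // 0 means "no depth data"
            }
        if (count)
            std::cout << "mean blob depth: "
                      << (sum / double(count)) * depth_scale << " m\n";
    }
}
```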

@MartyG-RealSense (Collaborator)

The CTO of the RealSense Group at Intel (agrunnet) offers advice in the link below about tracking fast objects that are also very small (a bee in that particular case).

#4175 (comment)

  1. Regarding default test settings, the RealSense Viewer has a selection of post-processing filters (not all of them) enabled by default. You could expand the filters in the options side panel to see what values are considered well-balanced defaults, on the assumption that many people will use the Viewer without editing this pre-set configuration. A sketch of chaining these filters in code follows below.

  2. The RealSense SDK is already very efficient, so it is debatable how much performance could be gained by writing raw low-level code instead of using the user-friendly high-level API. The option to go low-level and "hit the metal" of the hardware (as '80s game programmers writing machine code used to say) is certainly there to explore.

You may find the documentation on the RealSense SDK's low-level API interesting.

https://dev.intelrealsense.com/docs/api-architecture
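
As a rough illustration of chaining those filters in C++ (the option values below are illustrative rather than the Viewer's exact presets, so do compare them against the Viewer's side panel):

```cpp
#include <librealsense2/rs.hpp>

// Construct the filters once, outside the frame loop - the temporal
// filter keeps history between frames.
rs2::decimation_filter dec;  // downsamples the depth image; a big speed win
rs2::spatial_filter    spat; // edge-preserving spatial smoothing
rs2::temporal_filter   temp; // smooths depth values across frames

void configure_filters() {
    // Illustrative values; check the Viewer's side panel for its presets.
    dec.set_option(RS2_OPTION_FILTER_MAGNITUDE, 2);
    spat.set_option(RS2_OPTION_FILTER_SMOOTH_ALPHA, 0.5f);
    temp.set_option(RS2_OPTION_FILTER_SMOOTH_ALPHA, 0.4f);
}

rs2::frame filter_depth(rs2::frame depth) {
    // Apply in sequence; decimating first shrinks the image that the
    // later, more expensive filters have to process.
    depth = dec.process(depth);
    depth = spat.process(depth);
    depth = temp.process(depth);
    return depth;
}
```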

@arothenberg (Author) commented Mar 9, 2020

Thanks. I have played with the filters by eyeballing them, but I'll need to experiment with them more rigorously. I'm not an expert on image filtering. I'll gather some metrics and see how the filters affect speed. I thought there might be a standard setup for my priority (higher-FPS analysis over granularity/accuracy).

If going low-level doesn't give me more frames to use, then it's not worth it. I'd like to get up and running soon, and if the SDK is good enough I'll just do that. In the thread I linked, it was suggested to stick with 16-bit values and not convert pixel data to real distances.

Thanks for the reply. I'll leave this open for a couple of days to see if anybody doing what I'm doing has extra insights.

Also, I'm glad there is some confidence in the ability to do bee tracking. My object is larger and a bit slower, though faster than a mouse.

@MartyG-RealSense (Collaborator)

For some computer vision operations that are normally slow, such as align, a boost can be gained by using hardware acceleration alongside the camera, such as the Intel Neural Compute Stick 2, which simply plugs into a USB 3.0 port.

https://store.intelrealsense.com/buy-intel-neural-compute-stick-2.html
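
For reference, a minimal sketch of the align step being discussed, using the SDK's standard rs2::align processing block (the default pipeline configuration and the center-pixel lookup are just placeholders):

```cpp
#include <librealsense2/rs.hpp>

int main() {
    rs2::pipeline pipe;
    pipe.start();  // default configuration: depth + color on a D435

    // Processing block that maps depth pixels into the color camera's
    // viewpoint; this is the step that can bottleneck high frame rates.
    rs2::align align_to_color(RS2_STREAM_COLOR);

    while (true) {
        rs2::frameset frames = pipe.wait_for_frames();
        frames = align_to_color.process(frames);  // the costly step

        // Depth is now registered to color: pixel (x, y) in the depth
        // frame corresponds to pixel (x, y) in the color frame.
        rs2::depth_frame depth = frames.get_depth_frame();
        float dist = depth.get_distance(depth.get_width() / 2,
                                        depth.get_height() / 2);
        (void)dist;  // feed the tracked pixel's distance to your analysis
    }
}
```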

@arothenberg (Author)

Does that interface seamlessly with the SDK? I'll look into it. Convolutional neural networks might be something I need to learn about, though it also seems like it might be too deep to get into.

@arothenberg (Author)

I looked for the sample video for Windows and it's unavailable on YouTube. This looks interesting, but maybe it's a bit underdeveloped right now(?).

@MartyG-RealSense (Collaborator)

The Stick is also sold in the store as a bundle deal with a RealSense camera, and there is an object detection example program for it that uses the SDK.

https://github.com/movidius/ncappzoo/tree/master/apps/realsense_object_distance_detection

@arothenberg (Author)

I'll watch the GitHub repo and maybe look into it. Thanks.

@arothenberg (Author)

Maybe when I have developed my project on the D435 to the point it was at with the Kinect (which I used previously), I'll ask the developers of the Stick what I might gain. Maybe they would be willing to let me describe what I'm doing in detail and advise me on where I might benefit from their product. I have a specific methodology for tracking my single body, and it works, though it was a bit slow on the Kinect. I'd only use the Stick if there is a speed advantage to be gained.
Thanks.
Thanks.

@MartyG-RealSense (Collaborator) commented Mar 10, 2020

The creators of the Stick's technology, Movidius, are an Intel company with their own independent identity. They have their own website where a message can be left.

https://www.movidius.com/

@arothenberg (Author)

Thanks. I'll bookmark that. Maybe I'll write them today.
