How to extract depth data in real time? #4026
Comments
Yes, I have the same doubt in Python. How can I extract it?

Is there anyone from the Intel RealSense team or the community who can help? I have been waiting for a long time now.
@Ahad19931 hello, it is not clear exactly what you mean. I would recommend checking the examples chapter, which briefly describes the features each example demonstrates and categorizes them by experience level.

Hi @ev-mp,
@Ahad19931 hello,

```cpp
auto frames = pipe.wait_for_frames();
if (auto depth_frame = frames.get_depth_frame())
{
    auto pixels_buffer = depth_frame.get_data();
    // Write your code here
}
```

The resulting buffer holds the raw depth data. It is possible to convert the frame from raw pixel units to meters first (with the `rs2::units_transform` processing block) to make it easier to work with. But as a general rule I would recommend working with the raw depth data whenever possible, instead of converted metric or any other arbitrary units.
Last but not least, there are additional use cases, such as point-cloud/surface reconstruction and depth alignment/registration, that can be facilitated via SDK-provided tools and APIs.
Hi @ev-mp, regarding the units_transform block, isn't it the same as what we get after running the rs-hello-realsense example? With that example we also get the distance in meters. You recommended that working with the raw depth data is the better option; can you please explain what exactly you meant by raw depth data? Is it the depth stream that we see in the realsense-viewer? Looking forward to your reply, thanks.
@Ahad19931 hello,

Hi @ev-mp,
@Ahad19931, there is no need for header files other than ...

```cpp
rs2::units_transform ut;
rs2::pipeline pipe;
pipe.start();
while (...)
{
    rs2::frameset data = pipe.wait_for_frames();
    if (auto depth_raw_frame = data.get_depth_frame())
    {
        auto depth_metric_frame = ut.process(depth_raw_frame);
        // Add your code here
    }
}
```
@Ahad19931 Any update on this? |
Hi, |
| Required Info | |
|---------------------------------|-------------------------------------------|
| Camera Model | D435i |
| Firmware Version | 05.11.06.200 |
| Operating System & Version | Linux (Ubuntu 16.04) |
| Kernel Version (Linux Only) | 4.15.0-50-generic |
| Platform | PC |
| SDK Version | 2.21.0 |
| Language | C++ |
| Segment | Robot |
How can I extract the depth data from the camera during live streaming? The main purpose is to perform obstacle avoidance using this depth data when the camera is mounted on a mobile robot.
I was able to extract the depth data from a recorded file with the rs-convert tool by converting it into CSV and raw files, but I have no idea how to do it in real time.
Any assistance will be highly appreciated.