Team 9
Our goal for the final project was an object detection model that would run concurrently with GPS laps. The main idea is that the robocar would race around in a figure 8. While racing, the robot would run a model able to detect obstacles. When an obstacle was detected, the robot would stop; once it no longer detected an obstacle, it would resume driving as normal.
The original plan was to use a ROS2 implementation. However, due to time constraints, we decided to modify the internal workings of the donkeycar itself.
The Must-Haves for the Project:
- The ability for the robot to react to a model detecting an obstacle, mainly by stopping
- The ability to run GPS laps
- A model for the robot to run
The Nice-to-Haves for the Project:
- The ability to steer around an object based on its proximity
- An accurate model that can detect a robocar
- The ability for the car to quickly detect and react to obstacles
Summary: We managed to get GPS laps running in the Docker container, along with having the robot react to the model's detections. However, we were unable to get those two working together at the same time.
Here is a video of the robocar reacting to the model.
I would say a future stretch goal would be to add functionality that, after stopping, steers the robot around the obstacle. In addition, a more robust model would be beneficial.
Autonomous lap: https://youtube.com/shorts/MIYmDNecfd0?feature=share

GPS lap: https://youtu.be/w0lJrg6fRcE?feature=shared

OpenCV lap: https://youtu.be/xnzpPU3z34I?feature=shared
The current model implementation requires the depthai and depthai-sdk dependencies to be upgraded (depthai 2.28, depthai-sdk 1.15). To get this implementation of the model running, it is recommended to run the model itself on the donkeycar container.
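After upgrading (e.g. via pip inside the container), a quick sanity check like the sketch below can confirm the versions took effect. This is just an illustrative snippet, not part of the project code:

```python
# Illustrative sanity check: confirm the upgraded dependency versions
# from inside the donkeycar container.
from importlib.metadata import version

print(version("depthai"))      # expect 2.28.x
print(version("depthai-sdk"))  # expect 1.15.x
```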
You need to mount the donkeycar directory inside a new container. Once you do, check that GPS laps can still be run in the Docker container.
The model we created is imported directly from Roboflow. The detector callback function returns a true/false value, which determines whether the robot currently detects an obstacle.
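A minimal sketch of that callback pattern is below, assuming the flag lives in the file that runs the model (camerausb) and that inference results arrive as a packet with a `.detections` list. All names here are illustrative assumptions, not the exact Roboflow/depthai API:

```python
# camerausb.py (sketch): shared flag written by the detection callback
# and read by the drive loop.
obstacle_detected = False

def detector_callback(packet):
    """Runs on every inference result from the model.

    `packet.detections` is assumed to hold the list of detections from
    the Roboflow-trained model; the flag stays True while anything is seen.
    """
    global obstacle_detected
    obstacle_detected = len(packet.detections) > 0
```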
You would then need to import the file that runs the model (in this case, camerausb) and make some changes to actuator.py.
The implementation relies on creating a thread. The if statement is there so only one thread is made. The while loop then sets the angle and throttle to 0 in order to stop the robot whenever the model detects something.
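Here is a minimal sketch of that logic, assuming a donkeycar-style actuator with a `run(angle, throttle)` method and the `obstacle_detected` flag from the camerausb sketch above; the class and module names are illustrative, not the exact actuator.py code:

```python
import threading
import time

import camerausb  # hypothetical module from above; sets camerausb.obstacle_detected


class ObstacleStopActuator:
    """Wraps the normal actuator; one watchdog thread forces a stop
    while the model detects an obstacle."""

    def __init__(self, inner):
        self.inner = inner      # e.g. the VESC actuator with run(angle, throttle)
        self.stop_thread = None

    def run(self, angle, throttle):
        # The if statement guarantees only one watchdog thread is created.
        if self.stop_thread is None:
            self.stop_thread = threading.Thread(target=self._watchdog, daemon=True)
            self.stop_thread.start()
        # Drive normally only while nothing is detected.
        if not camerausb.obstacle_detected:
            self.inner.run(angle, throttle)

    def _watchdog(self):
        # The while loop zeroes angle and throttle whenever the model
        # detects something, stopping the robot until the view clears.
        while True:
            if camerausb.obstacle_detected:
                self.inner.run(0.0, 0.0)
            time.sleep(0.05)
```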
Electrical Diagram Revised Explanation: The robocar is powered by a LiPo battery of at least 14.8 V, which supplies the entire car. Power distribution across the car is controlled by the VESC switch, which turns the car on and off. When the switch is on, current flows through the anti-spark switch, which prevents sparking, and then into the DC-DC buck converter, which steps the voltage down to a safe level. The buck converter is crucial for protecting all of the car's hardware components, preventing excess current from causing a short that would stop the car from functioning. The DC barrel jack carries the stepped-down current from the buck converter to the Jetson, allowing the Jetson to power on. The Jetson in turn powers the GPS Nav1, the OAK-D camera, and the VESC steering control; these let the car accurately determine its location from satellite signals, see objects in front of it, and drive and turn. The VESC also drives the car's motor (throttle), enabling the car to move forwards, move backwards, and steer.
Orange lidar mount: https://cad.onshape.com/documents/ac0c79804f0119ca016675cd/w/fb4d21fe24ad051bae48313e/e/ca930eb42fde4e6012a60675?renderMode=0&uiState=67e05ebe48d9c8025a8743b5

Green camera holder: https://cad.onshape.com/documents/681a28c8048787a752c53829/w/c6ccd98731fac958f389a55a/e/18c55d35ca7f74e7667ec29d?renderMode=0&uiState=67e05e775fa55176d8f6e7b8
Thanks to Professor Jack Silberman and the TAs Alexander, Winston, and Vivekanand.