- Nvidia Jetson Boards: Motivation
- Supported Jetson Boards
- Specification
- Build the Jetson Image Yourself
- Flashing the Image onto Your Board
- Nvidia Libraries
- Jetson Nano Docker
- Jetson Nano with ROS2
- Jetson ROS2 with YOLO
- How to Use Jetson Copilot
The development of minimalist images for Nvidia Jetson boards addresses the challenge posed by the large size and excessive pre-installed packages of official Jetson images. These packages often consume significant disk space and memory, which can be detrimental to performance in resource-constrained environments. These minimalist images aim to provide a streamlined alternative, optimizing both space and resource utilization.
Supported Jetson boards (official source):
✅ Jetson Nano / Jetson Nano 2GB
✅ Jetson Orin Nano
✅ Jetson AGX Xavier
✅ Jetson Xavier NX
Supported Ubuntu releases: 20.04, 22.04, 24.04
L4T versions: 32.x, 35.x, 36.x
Important
For the Jetson Orin Nano, you might need to update the firmware before you can use an image based on L4T 36.x.
Check this link for more information.
Note
Building the Jetson image has been tested on Linux machines.
Building the Jetson image is fairly easy. All you need are the following tools installed on your machine.
Start by cloning the repository from GitHub:
git clone https://github.com/PiotrG1996/jetson-installation
cd jetson-installation/jetson-standard-images
Then create a new rootfs with the desired Ubuntu version.
Note
Only the Orin family boards can use Ubuntu 24.04.
For Ubuntu 22.04:
just build-jetson-rootfs 22.04
This will create the rootfs in the rootfs directory.
Tip
You can modify the Containerfile.rootfs.* files to add any tool or configuration that you will need in the final image.
Next, use the following command to build the Jetson image:
$ just build-jetson-image -b <board> -r <revision> -d <device> -l <l4t version>
Tip
If you wish to add specific Nvidia packages that are present in the common section from this link, such as libcudnn8, edit the file l4t_packages.txt in the root directory and list each package name on a separate line.
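As an illustration, an l4t_packages.txt might look like this (the package names below are examples; use the ones actually listed at the link above):

# l4t_packages.txt — one Nvidia package name per line (example entries)
libcudnn8
libcudnn8-dev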
For example, to build an image for the jetson-orin-nano board:
$ just build-jetson-image -b jetson-orin-nano -d SD -l 36
Run with -h for more information:
just build-jetson-image -h
Note
Not every Jetson board can be updated to the latest L4T version.
Check this link for more information.
The Jetson image will be built and saved in the current directory in a file named jetson.img.
To flash the Jetson image, run the following command:
$ sudo just flash-jetson-image <jetson image file> <device>
Where device is the name of the SD card/USB device as identified by your system. For instance, if your SD card is recognized as /dev/sda, replace device with /dev/sda.
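If you are unsure which device name your system assigned, a quick way to check on a standard Linux setup is:

lsblk -o NAME,SIZE,TYPE,MOUNTPOINT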
Note
There are numerous tools out there for flashing images to SD cards. I stick with dd as it's simple and does the job.
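As a sketch, flashing manually with dd would look something like this (assuming your SD card is /dev/sda; double-check the device name first, since dd overwrites the target without confirmation):

# Write the image and flush buffers before exiting
sudo dd if=jetson.img of=/dev/sda bs=4M status=progress conv=fsync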
Once you boot the board with the new image, you can install Nvidia libraries using apt:
$ sudo apt install -y libcudnn8 libcudnn8-dev ...
Note
This is a modified nvcr.io/nvidia/l4t-base:r32.7.1 container. The container has been modified by upgrading the core Ubuntu 18.04 to Ubuntu 20.04.
Tip
dusty-nv/jetson-containers allows building containers for the Jetson Nano, but they are based on the official nvcr.io/nvidia/l4t-base:r32.7.1, which is based on Ubuntu 18.04 and is limited to Python 3.6.9.
Ubuntu 22.04 was also attempted but later abandoned due to the lack of support for gcc-8, g++-8, and clang-8, which are required by CUDA 10.2 in r32.7.1.
Run the following command on an AMD64 computer to set up buildx for building arm64 Docker containers:
docker buildx create --use --driver-opt network=host --name MultiPlatform --platform linux/arm64
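Building arm64 images on an AMD64 host typically also requires QEMU emulation support. If buildx complains about the target platform, registering binfmt handlers first usually helps (this uses the widely used tonistiigi/binfmt helper image, which is not specific to this repository):

# Register QEMU binfmt handlers for arm64 emulation
docker run --privileged --rm tonistiigi/binfmt --install arm64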
- Size is about 822 MB
- Contains:
- Python 3.8.10
Pull the docker container
docker pull ghcr.io/kalanaratnayake/foxy-base:r32.7.1
Build the docker container
docker buildx build --load --platform linux/arm64 -f base-images/foxy.Dockerfile -t foxy-base:r32.7.1 .
Start the docker container
docker run --rm -it --runtime nvidia --network host --gpus all -e DISPLAY ghcr.io/kalanaratnayake/foxy-base:r32.7.1 bash
- Size is about 1.11 GB
- Contains:
- Python 3.8.10
- GCC-8, G++-8 (for building CUDA 10.2 related applications)
- build-essential package (g++-9, gcc-9, make, dpkg-dev, libc6-dev)
Pull the docker container
docker pull ghcr.io/kalanaratnayake/foxy-minimal:r32.7.1
Build the docker container
docker buildx build --load --platform linux/arm64 -f test-images/foxy_test.Dockerfile -t foxy-minimal:r32.7.1 .
Start the docker container
docker run --rm -it --runtime nvidia --network host --gpus all -e DISPLAY ghcr.io/kalanaratnayake/foxy-minimal:r32.7.1 bash
Run the following commands inside the Docker container to test nvcc and other Jetson Nano-specific functionality:
/usr/local/cuda-10.2/bin/cuda-install-samples-10.2.sh .
cd /NVIDIA_CUDA-10.2_Samples/1_Utilities/deviceQuery
make clean
make HOST_COMPILER=/usr/bin/g++-8
./deviceQuery
- Size is about 1.71 GB
- Contains:
- Python 3.8.10
- build-essential package (g++-9, gcc-9, make, dpkg-dev, libc6-dev)
- ROS Humble Core packages
Pull the docker container
docker pull ghcr.io/kalanaratnayake/foxy-ros:humble-core-r32.7.1
Build the docker container
docker buildx build --load --platform linux/arm64 -f ros-images/humble_core.Dockerfile -t foxy-ros:humble-core-r32.7.1 .
Alternatively, build with cache locally and push, since image compilation can be slow on GitHub Actions and may exceed the 6-hour limit:
docker buildx build --push \
--platform linux/arm64 \
--cache-from=type=registry,ref=ghcr.io/kalanaratnayake/foxy-ros:humble-ros-core-buildcache \
--cache-to=type=registry,ref=ghcr.io/kalanaratnayake/foxy-ros:humble-ros-core-buildcache,mode=max \
-f ros-images/humble_core.Dockerfile \
-t ghcr.io/kalanaratnayake/foxy-ros:humble-core-r32.7.1 .
Start the docker container
docker run --rm -it --runtime nvidia --network host --gpus all -e DISPLAY ghcr.io/kalanaratnayake/foxy-ros:humble-core-r32.7.1 bash
Run the following command inside the Docker container to confirm that the container is working properly:
ros2 run demo_nodes_cpp talker
Run the following command in another instance of the ROS container, or on another computer/Jetson device with ROS Humble installed, to check connectivity and discoverability over the host network (while the above command is running):
ros2 run demo_nodes_py listener
- Size is about 1.76 GB
- Contains:
- Python 3.8.10
- build-essential package (g++-9, gcc-9, make, dpkg-dev, libc6-dev)
- ROS Humble Base packages
Pull the docker container
docker pull ghcr.io/kalanaratnayake/foxy-ros:humble-base-r32.7.1
Build the docker container
docker buildx build --load --platform linux/arm64 -f ros-images/humble_base.Dockerfile -t foxy-ros:humble-base-r32.7.1 .
Alternatively, build with cache locally and push, since image compilation can be slow on GitHub Actions and may exceed the 6-hour limit:
docker buildx build --push \
--platform linux/arm64 \
--cache-from=type=registry,ref=ghcr.io/kalanaratnayake/foxy-ros:humble-ros-base-buildcache \
--cache-to=type=registry,ref=ghcr.io/kalanaratnayake/foxy-ros:humble-ros-base-buildcache,mode=max \
-f ros-images/humble_base.Dockerfile \
-t ghcr.io/kalanaratnayake/foxy-ros:humble-base-r32.7.1 .
Start the docker container
docker run --rm -it --runtime nvidia --network host --gpus all -e DISPLAY ghcr.io/kalanaratnayake/foxy-ros:humble-base-r32.7.1 bash
Run the following commands inside the docker container to confirm that the container is working properly.
ros2 run demo_nodes_cpp talker
Run the following command in another instance of the ROS container, or on another computer/Jetson device with ROS Humble installed, to check connectivity and discoverability over the host network (while the above command is running):
ros2 run demo_nodes_py listener
- Size is about 1.83 GB
- Contains:
- Python 3.8.10
- PyTorch 1.13.0
- TorchVision 0.14.0
Pull the docker container
docker pull ghcr.io/kalanaratnayake/foxy-pytorch:1-13-r32.7.1
Build the docker container
docker buildx build --load --platform linux/arm64 -f pytorch-images/foxy_pytorch_1_13.Dockerfile -t foxy-pytorch:1-13-r32.7.1 .
Start the docker container
docker run --rm -it --runtime nvidia --network host --gpus all -e DISPLAY ghcr.io/kalanaratnayake/foxy-pytorch:1-13-r32.7.1 bash
Run the following commands inside the docker container to confirm that the container is working properly.
python3 -c "import torch; print(torch.__version__)"
python3 -c "import torchvision; print(torchvision.__version__)"
- Size is about 1.83 GB
- Contains:
- Python 3.8.10
- PyTorch 1.13.0
- TorchVision 0.14.0
- TensorRT (Jetson Nano build)
Pull the docker container
docker pull ghcr.io/kalanaratnayake/foxy-pytorch:1-13-tensorrt-j-nano
Build the docker container
docker buildx build --load --platform linux/arm64 -f pytorch-images/foxy_pytorch_1_13.Dockerfile -t foxy-pytorch:1-13-tensorrt-j-nano .
Start the docker container
docker run --rm -it --runtime nvidia --network host --gpus all -e DISPLAY ghcr.io/kalanaratnayake/foxy-pytorch:1-13-tensorrt-j-nano bash
Run the following commands inside the docker container to confirm that the container is working properly.
python3 -c "import torch; print(torch.__version__)"
python3 -c "import torchvision; print(torchvision.__version__)"
python3 -c "import tensorrt as trt; print(trt.__version__)"
dpkg -l | grep TensorRT
- Size is about 3.05 GB
- Contains:
- Python 3.8.10
- PyTorch 1.13.0
- TorchVision 0.14.0
- ROS Humble Core packages
Pull the docker container
docker pull ghcr.io/kalanaratnayake/foxy-ros-pytorch:1-13-humble-core-r32.7.1
Build the docker container
docker buildx build --load --platform linux/arm64 -f ros-pytorch-images/humble_core_pytorch_1_13.Dockerfile -t foxy-ros-pytorch:1-13-humble-core-r32.7.1 .
Start the docker container
docker run --rm -it --runtime nvidia --network host --gpus all -e DISPLAY ghcr.io/kalanaratnayake/foxy-ros-pytorch:1-13-humble-core-r32.7.1 bash
Run the following commands inside the docker container to confirm that the container is working properly.
python3 -c "import torch; print(torch.__version__)"
python3 -c "import torchvision; print(torchvision.__version__)"
Then run the following command inside the container to confirm that ROS is working properly:
ros2 run demo_nodes_cpp talker
Run the following command in another instance of the ROS container, or on another computer/Jetson device with ROS Humble installed, to check connectivity and discoverability over the host network (while the above command is running):
ros2 run demo_nodes_py listener
- Micro USB Power Supply: Provides 5V and up to 2A (10W maximum). This power configuration is adequate for basic peripherals such as a keyboard, mouse, and a small camera.
- DC Barrel Jack Power Supply: Provides 5V and up to 4A (20W maximum). This option is recommended for scenarios involving intensive tasks such as running neural networks or using depth cameras, as it ensures enhanced power stability and reliability.
This section details various configurations of Ubuntu and ROS2, tested with and without a graphical user interface (GUI), as well as with and without Docker. The objective is to identify the most stable setup that supports the latest ROS version and optimizes the performance of the Jetson Nano.
- GUI Availability: The "GUI" column specifies whether a graphical user interface is present in the configuration. Configurations without a GUI were achieved by removing all GUI-related components. For instructions on how to remove the GUI, refer to relevant tutorials.
- Idle RAM: Measurements are provided to evaluate the maximum size of neural network models that can be accommodated on the device.
- Docker Configurations: In setups utilizing Docker, idle RAM was measured while the base ROS Docker image was operational.
- Overclocking Settings: Tests were conducted with the default overclocking settings. For information on customizing overclocking settings, please consult the overclocking guide.
- Docker ROS-Humble-ROS-Base can be installed with the following command:
docker pull dustynv/ros:humble-ros-base-l4t-r32.7.1
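Once pulled, it can be started the same way as the other containers in this guide (the flags below mirror the run commands used above and are an assumption for this image):

docker run --rm -it --runtime nvidia --network host dustynv/ros:humble-ros-base-l4t-r32.7.1 bash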
| Ubuntu | Jetpack | CUDA | ROS | GUI | Docker | CPU / GPU Frequency | Idle RAM (GB) | Tutorial |
|---|---|---|---|---|---|---|---|---|
| 20.04 | 4.6.2 | 10.2 | Humble | Yes | Yes | 1900 MHz / 998 MHz | 1.3 / 3.9 | Image Tutorial |
| 20.04 | 4.6.2 | 10.2 | Humble | No | Yes | 1900 MHz / 998 MHz | 0.44 / 3.9 | Image Tutorial |
| 20.04 | 4.6.2 | 10.2 | Foxy | Yes | No | 1900 MHz / 998 MHz | 1.2 / 3.9 | Image Tutorial |
| 20.04 | 4.6.2 | 10.2 | Foxy | No | No | 1900 MHz / 998 MHz | 0.40 / 3.9 | Image Tutorial |
| 18.04 | 4.6.4 | 10.2 | Humble | Yes | Yes | 1479 MHz / 920 MHz | Really slow | Not Successful |
| 18.04 | 4.6.4 | 10.2 | Humble | No | Yes | 1479 MHz / 920 MHz | Really slow | Not Successful |
To use the GPU with Docker on AMD64 systems, install nvidia-container-toolkit following the official instructions.
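A minimal sketch of that installation on Ubuntu, following NVIDIA's published steps at the time of writing (verify against the current official instructions, which also cover adding NVIDIA's apt repository and key):

sudo apt-get update
sudo apt-get install -y nvidia-container-toolkit
# Configure the Docker runtime and restart the daemon
sudo nvidia-ctk runtime configure --runtime=docker
sudo systemctl restart docker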
| System | ROS Version | Value for `image` | Value for `device` | Size | File |
|---|---|---|---|---|---|
| AMD64 | Humble | ghcr.io/kalanaratnayake/yolo-ros:humble | `cpu`, `0`, `0,1,2` | 5.64 GB | docker/compose.amd64.yaml |
| Jetson Nano | Humble | ghcr.io/kalanaratnayake/yolo-ros:humble-j-nano | `cpu`, `0` | 3.29 GB | docker/compose.jnano.yaml |
Clone this repository:
mkdir -p yolo_ws/src && cd yolo_ws/src
git clone https://github.com/PiotrG1996/jetson-installation.git && cd jetson-ROS-YOLO
cd ..
Pull the Docker image and start compose (no need to run docker compose build):
cd src/yolo_ros/docker
docker compose -f compose.amd64.yaml pull
docker compose -f compose.amd64.yaml up
To clean the system,
cd src/yolo_ros/docker
docker compose -f compose.amd64.yaml down
docker volume rm docker_yolo
Pull the Docker image and start compose (no need to run docker compose build):
cd src/yolo_ros/docker
docker compose -f compose.jnano.yaml pull
docker compose -f compose.jnano.yaml up
To clean the system,
cd src/yolo_ros/docker
docker compose -f compose.jnano.yaml down
docker volume rm docker_yolo
Clone this repository and install dependencies.
git clone https://github.com/PiotrG1996/jetson-installation/jetson-ROS-YOLO.git
cd jetson-ROS-YOLO
pip3 install -r requirements.txt
If required, edit the parameters in config/yolo_ros_params.yaml, and then run the following at the workspace root:
colcon build
To use the launch file, run:
source ./install/setup.bash
ros2 launch yolo_ros yolo.launch.py
| ROS Parameter | Docker ENV Parameter | Default Value | Description |
|---|---|---|---|
| yolo_model | YOLO_MODEL | `yolov9t.pt` | Model to be used. See [1] for default models and [2] for custom models |
| subscribe_depth | SUBSCRIBE_DEPTH | `True` | Whether to subscribe to the depth image. Use when a depth camera is available. An ApproximateTimeSynchronizer is used to sync RGB and depth images |
| input_rgb_topic | INPUT_RGB_TOPIC | `/camera/color/image_raw` | Topic to subscribe to for the RGB image. Accepts sensor_msgs/Image |
| input_depth_topic | INPUT_DEPTH_TOPIC | `/camera/depth/points` | Topic to subscribe to for the depth image. Accepts sensor_msgs/Image |
| publish_detection_image | PUBLISH_ANNOTATED_IMAGE | `False` | Whether to publish the annotated image; increases callback execution time when set to True |
| annotated_topic | ANNOTATED_TOPIC | `/yolo_ros/annotated_image` | Topic for publishing annotated images; uses sensor_msgs/Image |
| detailed_topic | DETAILED_TOPIC | `/yolo_ros/detection_result` | Topic for publishing detailed results; uses yolo_ros_msgs/YoloResult |
| threshold | THRESHOLD | `0.25` | Confidence threshold for predictions |
| device | DEVICE | `'0'` | `cpu` for CPU, `0` for GPU, `0,1,2,3` if there are multiple GPUs |
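To illustrate the Docker ENV parameters above, they could be overridden when starting the container directly (a hypothetical invocation; in practice these would usually be set in the compose file's environment section):

docker run --rm -it --runtime nvidia --network host \
    -e YOLO_MODEL=yolov8n.pt \
    -e DEVICE=0 \
    -e THRESHOLD=0.5 \
    ghcr.io/kalanaratnayake/yolo-ros:humble-j-nano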
Note
If the model is available among the ultralytics models, it will be downloaded from the cloud at startup. Docker volumes are used to persist downloaded weights so they are not re-downloaded at each startup.
Tip
Uncomment the commented-out YOLO_MODEL parameter line and set the custom model weight file's name as the YOLO_MODEL parameter. Uncomment the Docker bind entry that points to the weights folder and comment out the Docker volume entry for yolo. Copy the custom weights into the weights folder.
Here is a summary of whether the latest models work with the yolo_ros node (in Docker) on various platforms, and the time it takes to execute a single iteration of the YoloROS.image_callback function. Values are measured as an average of 100 executions of the function; the input is a 640x480 RGB image at 30 fps.
| Model | Jetson Nano (ms) | Jetson Nano (FPS) |
|---|---|---|
| `yolov10x.pt` | 975 ms | 1.03 FPS |
| `yolov10l.pt` | 800 ms | 1.25 FPS |
| `yolov10b.pt` | 750 ms | 1.33 FPS |
| `yolov10m.pt` | 650 ms | 1.54 FPS |
| `yolov10s.pt` | 210 ms | 4.76 FPS |
| `yolov10n.pt` | 140 ms | 7.14 FPS |
| `yolov9e.pt` | 1600 ms | 0.62 FPS |
| `yolov9c.pt` | 700 ms | 1.43 FPS |
| `yolov9m.pt` | 500 ms | 2.00 FPS |
| `yolov9s.pt` | 300 ms | 3.33 FPS |
| `yolov9t.pt` | 180 ms | 5.56 FPS |
| `yolov8x.pt` | 2000 ms | 0.50 FPS |
| `yolov8l.pt` | 1200 ms | 0.83 FPS |
| `yolov8m.pt` | 700 ms | 1.43 FPS |
| `yolov8s.pt` | 300 ms | 3.33 FPS |
| `yolov8n.pt` | 140 ms | 7.14 FPS |
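For a rough sanity check of these numbers on your own hardware, you can measure the publish rate of the detection topic while the node is running (this assumes the default detailed_topic from the parameter table above):

ros2 topic hz /yolo_ros/detection_result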
To set up Jetson Copilot for the first time, follow these steps to ensure that all necessary software is installed and the environment is properly configured.
- Clone the Jetson Copilot repository:
git clone https://github.com/NVIDIA-AI-IOT/jetson-copilot/
- Navigate to the cloned directory:
cd jetson-copilot
- Run the setup script:
./setup_environment.sh
This script will install the following components if they are not already present on your system:
- Chromium web browser
- Docker
- Navigate to the Jetson Copilot directory:
cd jetson-copilot
- Launch Jetson Copilot:
./launch_jetson_copilot.sh
This command will start a Docker container, which will then start an Ollama server and a Streamlit app inside the container. The console will display a URL for accessing the web app hosted on your Jetson device.
- Open the web app:
- On your Jetson device: Open Local URL in your web browser.
- On a PC connected to the same network as your Jetson: Access the Network URL.
- An internet connection is required on the Jetson device during the first launch to pull the container image and download the default LLM and embedding model.
- The first time you access the web UI, it will download the default LLM (Llama3) and the embedding model (mxbai-embed-large).
- On Ubuntu Desktop, a frameless Chromium window will pop up to access the web app, making it look like an independent application. Be sure to close this window manually if you stop the container from the console, as stopping the container won't automatically close Chromium.
By default, Jetson Copilot uses the Llama3 (8b) model as the LLM. You can interact with this model without enabling the RAG (Retrieval-Augmented Generation) feature.
- On the side panel, toggle "Use RAG" to enable the RAG pipeline.
- Select a custom knowledge/index from the "Index" dropdown.
A pre-built index "_L4T_README" is available and includes all README text files from the "L4T-README" folder on your Jetson device.
To access the L4T-README folder:
udisksctl mount -b /dev/disk/by-label/L4T-README
You can ask questions related to Jetson specifics, such as:
- What IP address does Jetson get assigned when connected to a PC via a USB cable in USB Device Mode?
- Create a directory under Documents to store your documents:
cd jetson-copilot
mkdir Documents/Jetson-Orin-Nano
cd Documents/Jetson-Orin-Nano
wget https://developer.nvidia.com/downloads/assets/embedded/secure/jetson/orin_nano/docs/jetson_orin_nano_devkit_carrier_board_specification_sp.pdf
- In the web UI, open the sidebar, toggle "Use RAG," and click "Build a new index" to open the "Build Index" page.
- Name your index (e.g., "JON Carrier Board") and specify the path for the index directory.
- Select the directory you created (e.g., /opt/jetson_copilot/Documents/Jetson-Orin-Nano) or enter URLs for online documents if needed.
- Ensure that mxbai-embed-large is selected as the embedding model. Note that OpenAI embedding models are not well supported and may require additional testing.
- Click "Build Index" and monitor the progress in the status container. Once completed, you can select your newly built index from the home screen.
This section is TODO and will be updated with instructions for testing different LLMs and embedding models.
Developing your Streamlit-based web app is straightforward:
- Enable automatic updates of the app every time you change the source code by selecting "Always rerun" in the web UI.
- For more fundamental changes, manually run the Streamlit app inside the container:
cd jetson-copilot
./launch_dev.sh
Once inside the container:
streamlit run app.py
Here's an overview of the directory structure:
└── jetson-copilot
    ├── launch_jetson_copilot.sh
    ├── setup_environment.sh
    ├── Documents
    │   └── your_abc_docs
    ├── Indexes
    │   ├── _L4T_README
    │   └── your_abc_index
    ├── logs
    │   ├── container.log
    │   └── ollama.log
    ├── ollama_models
    └── Streamlit_app
        ├── app.py
        ├── build_index.py
        └── download_model.py
Here are the references and resources used in the project:
- Qengineering - Link to GitHub
- Pythops - Link to GitHub