
Nvidia Jetson with ROS for Computer Vision


Table of Contents

  1. πŸš€ Nvidia Jetson Boards - Motivation
  2. πŸ› οΈ Supported JETSON Boards
  3. πŸ“‹ Specification
  4. πŸ”¨ Build the Jetson Image Yourself
  5. πŸ’Ύ Flashing the Image into Your Board
  6. πŸ“š Nvidia Libraries
  7. 🐳 Jetson Nano Docker
  8. πŸ€– Jetson Nano with ROS2
  9. 🧠 Jetson ROS2 with YOLO
  10. πŸ“– How to Use Jetson Copilot

Nvidia Jetson Boards - Motivation

The development of minimalist images for Nvidia Jetson boards addresses the challenge posed by the large size and excessive pre-installed packages of official Jetson images. These packages often consume significant disk space and memory, which can be detrimental to performance in resource-constrained environments. These minimalist images aim to provide a streamlined alternative, optimizing both space and resource utilization.

Supported Jetson Boards (official source)

βœ… Jetson Nano / Jetson Nano 2GB

βœ… Jetson Orin Nano

βœ… Jetson AGX Xavier

βœ… Jetson Xavier NX

Specification

Supported Ubuntu releases: 20.04, 22.04, 24.04

L4T versions: 32.x, 35.x, 36.x

Important

For the Jetson Orin Nano, you might need to update the firmware before you can use an image based on L4T 36.x.

Check this link for more information.

Build the Jetson Image Yourself

Note

Building the Jetson image has been tested on Linux machines.

Building the Jetson image is fairly easy. The main tool you need on your machine is the just command runner, which drives every build recipe below (the Containerfile-based rootfs build also assumes a container engine such as Podman or Docker is installed).
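If just is not available from your distribution's packages, the upstream just project provides an installer script; the following is a sketch based on that project's documentation, so verify it before use:

# Install the 'just' command runner into ~/bin (assumes curl is present)
curl --proto '=https' --tlsv1.2 -sSf https://just.systems/install.sh | bash -s -- --to ~/bin
export PATH="$HOME/bin:$PATH"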

Start by cloning the repository from GitHub

git clone https://github.com/PiotrG1996/jetson-installation
cd jetson-installation/jetson-standard-images

Then create a new rootfs with the desired Ubuntu version.

Note

Only the Orin family boards can use Ubuntu 24.04.

For Ubuntu 22.04

just build-jetson-rootfs 22.04

This will create the rootfs in the rootfs directory.

Tip

You can modify the Containerfile.rootfs.* files to add any tool or configuration that you will need in the final image.

Next, use the following command to build the Jetson image:

$ just build-jetson-image -b <board> -r <revision> -d <device> -l <l4t version>

Tip

If you wish to add specific NVIDIA packages that are present in the common section from this link, such as libcudnn8, edit the file l4t_packages.txt in the root directory and list each package name on a separate line.
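For example, a minimal sketch of adding the cuDNN packages used later in this README:

# Append one NVIDIA package name per line to l4t_packages.txt
printf '%s\n' libcudnn8 libcudnn8-dev >> l4t_packages.txt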

For example, to build an image for the jetson-orin-nano board:

$ just build-jetson-image -b jetson-orin-nano -d SD -l 36

Run with -h for more information

just build-jetson-image -h

Note

Not every Jetson board can be updated to the latest L4T version.

Check this link for more information.

The Jetson image will be built and saved in the current directory in a file named jetson.img.

Flashing the Image into Your Board

To flash the Jetson image, run the following command:

$ sudo just flash-jetson-image <jetson image file> <device>

Here <device> is the name of the SD card/USB device as identified by your system. For instance, if your SD card is recognized as /dev/sda, replace <device> with /dev/sda.
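To double-check which device node your SD card was assigned before flashing (a generic safety step, not part of the repository's tooling):

# List block devices; match the size/model to your SD card, e.g. /dev/sda
lsblk -d -o NAME,SIZE,MODEL,TRAN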

Note

There are numerous tools for flashing images to an SD card. I stick with dd, as it's simple and does the job.
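For reference, a manual dd invocation equivalent to what the flash recipe does might look like the following (the device name is an example; writing to the wrong device destroys its data):

# Write the image and flush all caches before removing the card
sudo dd if=jetson.img of=/dev/sda bs=4M status=progress conv=fsync
sync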

Nvidia Libraries

Once you boot the board with the new image, you can install NVIDIA libraries using apt

$ sudo apt install -y libcudnn8 libcudnn8-dev ...
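A quick sanity check that the libraries landed (generic Debian tooling, not specific to this project):

# Confirm the cuDNN packages are installed and the shared library is registered
dpkg -l | grep -i cudnn
ldconfig -p | grep -i cudnn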

Jetson Nano Docker

Note

This is a modified nvcr.io/nvidia/l4t-base:r32.7.1 container, upgraded from the core Ubuntu 18.04 to Ubuntu 20.04.

Tip

dusty-nv/jetson-containers allows building containers for the Jetson Nano, but they are based on the official nvcr.io/nvidia/l4t-base:r32.7.1 image, which uses Ubuntu 18.04 and is limited to Python 3.6.9.

Ubuntu 22.04 was also attempted, but later abandoned due to its lack of support for gcc-8, g++-8, and clang-8, which are required by CUDA 10.2 in r32.7.1.

Docker buildx for ARM64 platform (for AMD64 systems)

Run the following command on an AMD64 computer to set up buildx for building arm64 Docker containers:

docker buildx create --use --driver-opt network=host --name MultiPlatform --platform linux/arm64
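You can then verify that the builder exists and supports arm64 with standard buildx commands:

# Bootstrap the builder and list its supported platforms
docker buildx inspect MultiPlatform --bootstrap
docker buildx ls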

Docker container list

Jetson Ubuntu Foxy Base Image

  • Size is about 822 MB
  • Contains,
    • Python 3.8.10

Pull or Build

Pull the docker container

docker pull ghcr.io/kalanaratnayake/foxy-base:r32.7.1

Build the docker container

docker buildx build --load --platform linux/arm64 -f base-images/foxy.Dockerfile -t foxy-base:r32.7.1 .

Start

Start the docker container

docker run --rm -it --runtime nvidia --network host --gpus all -e DISPLAY ghcr.io/kalanaratnayake/foxy-base:r32.7.1 bash
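This image has no dedicated test section; as a minimal sketch, you can confirm the Python version listed above from inside the container:

python3 --version   # expected: Python 3.8.10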

Jetson Ubuntu Foxy Minimal Image

  • Size is about 1.11GB
  • Contains,
    • Python 3.8.10
    • GCC-8, G++-8 (for building CUDA 10.2 related applications)
    • build-essential package (g++-9, gcc-9, make, dpkg-dev, libc6-dev)

Pull or Build

Pull the docker container

docker pull ghcr.io/kalanaratnayake/foxy-minimal:r32.7.1

Build the docker container

docker buildx build --load --platform linux/arm64 -f test-images/foxy_test.Dockerfile -t foxy-minimal:r32.7.1 .

Start

Start the docker container

docker run --rm -it --runtime nvidia --network host --gpus all -e DISPLAY ghcr.io/kalanaratnayake/foxy-minimal:r32.7.1 bash

Test

Run the following commands inside the docker container to test nvcc and other Jetson Nano specific functionality.

/usr/local/cuda-10.2/bin/cuda-install-samples-10.2.sh .
cd /NVIDIA_CUDA-10.2_Samples/1_Utilities/deviceQuery
make clean
make HOST_COMPILER=/usr/bin/g++-8
./deviceQuery

Jetson ROS Humble Core Image

  • Size is about 1.71GB
  • Contains,
    • Python 3.8.10
    • build-essential package (g++-9, gcc-9, make, dpkg-dev, libc6-dev)
    • ROS Humble Core packages

Pull or Build

Pull the docker container

docker pull ghcr.io/kalanaratnayake/foxy-ros:humble-core-r32.7.1

Build the docker container

docker buildx build --load --platform linux/arm64 -f ros-images/humble_core.Dockerfile -t foxy-ros:humble-core-r32.7.1 .

Alternatively, build with a registry cache locally and push; this helps when image compilation on GitHub Actions is slow and exceeds the 6-hour job limit:

docker buildx build --push \
                    --platform linux/arm64 \
                    --cache-from=type=registry,ref=ghcr.io/kalanaratnayake/foxy-ros:humble-ros-core-buildcache \
                    --cache-to=type=registry,ref=ghcr.io/kalanaratnayake/foxy-ros:humble-ros-core-buildcache,mode=max  \
                    -f ros-images/humble_core.Dockerfile  \
                    -t ghcr.io/kalanaratnayake/foxy-ros:humble-core-r32.7.1 .

Start

Start the docker container

docker run --rm -it --runtime nvidia --network host --gpus all -e DISPLAY ghcr.io/kalanaratnayake/foxy-ros:humble-core-r32.7.1 bash

Test

Run the following command inside the docker container to confirm that the container is working properly.

ros2 run demo_nodes_cpp talker

Run the following command on another instance of the ROS container, or on another computer/Jetson device with ROS Humble installed, to check connectivity and discoverability over the host network (while the above command is running).

ros2 run demo_nodes_py listener
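If the talker and listener do not discover each other across devices, ensure both ends use the same ROS domain; this is standard ROS 2 behaviour, with 0 as the default:

# Set the same DDS domain on both devices before running the demo nodes
export ROS_DOMAIN_ID=0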

Jetson ROS Humble Base Image

  • Size is about 1.76GB
  • Contains,
    • Python 3.8.10
    • build-essential package (g++-9, gcc-9, make, dpkg-dev, libc6-dev)
    • ROS Humble Base packages

Pull or Build

Pull the docker container

docker pull ghcr.io/kalanaratnayake/foxy-ros:humble-base-r32.7.1

Build the docker container

docker buildx build --load --platform linux/arm64 -f ros-images/humble_base.Dockerfile -t foxy-ros:humble-base-r32.7.1 .

Alternatively, build with a registry cache locally and push; this helps when image compilation on GitHub Actions is slow and exceeds the 6-hour job limit:

docker buildx build --push \
                    --platform linux/arm64 \
                    --cache-from=type=registry,ref=ghcr.io/kalanaratnayake/foxy-ros:humble-ros-base-buildcache \
                    --cache-to=type=registry,ref=ghcr.io/kalanaratnayake/foxy-ros:humble-ros-base-buildcache,mode=max  \
                    -f ros-images/humble_base.Dockerfile  \
                    -t ghcr.io/kalanaratnayake/foxy-ros:humble-base-r32.7.1 .

Start

Start the docker container

docker run --rm -it --runtime nvidia --network host --gpus all -e DISPLAY ghcr.io/kalanaratnayake/foxy-ros:humble-base-r32.7.1 bash

Test

Run the following command inside the docker container to confirm that the container is working properly.

ros2 run demo_nodes_cpp talker

Run the following command on another instance of the ROS container, or on another computer/Jetson device with ROS Humble installed, to check connectivity and discoverability over the host network (while the above command is running).

ros2 run demo_nodes_py listener

Jetson Ubuntu Foxy Pytorch 1.13 Image

  • Size is about 1.83GB
  • Contains,
    • Python 3.8.10
    • PyTorch 1.13.0
    • TorchVision 0.14.0

Pull or Build

Pull the docker container

docker pull ghcr.io/kalanaratnayake/foxy-pytorch:1-13-r32.7.1

Build the docker container

docker buildx build --load --platform linux/arm64 -f pytorch-images/foxy_pytorch_1_13.Dockerfile -t foxy-pytorch:1-13-r32.7.1 .

Start

Start the docker container

docker run --rm -it --runtime nvidia --network host --gpus all -e DISPLAY ghcr.io/kalanaratnayake/foxy-pytorch:1-13-r32.7.1 bash

Test

Run the following commands inside the docker container to confirm that the container is working properly.

python3 -c "import torch; print(torch.__version__)"
python3 -c "import torchvision; print(torchvision.__version__)"

Jetson Ubuntu Foxy Pytorch 1.13 with TensorRT Image

  • Size is about 1.83GB
  • Contains,
    • Python 3.8.10
    • PyTorch 1.13.0
    • TorchVision 0.14.0

Pull or Build

Pull the docker container

docker pull ghcr.io/kalanaratnayake/foxy-pytorch:1-13-tensorrt-j-nano

Build the docker container

docker buildx build --load --platform linux/arm64 -f pytorch-images/foxy_pytorch_1_13.Dockerfile -t foxy-pytorch:1-13-tensorrt-j-nano .

Start

Start the docker container

docker run --rm -it --runtime nvidia --network host --gpus all -e DISPLAY ghcr.io/kalanaratnayake/foxy-pytorch:1-13-tensorrt-j-nano bash

Test

Run the following commands inside the docker container to confirm that the container is working properly.

python3 -c "import torch; print(torch.__version__)"
python3 -c "import torchvision; print(torchvision.__version__)"
python3 -c "import tensorrt as trt; print(trt.__version__)"
dpkg -l | grep TensorRT

Jetson Ubuntu Foxy Humble Core Pytorch 1.13 Image

  • Size is about 3.05GB
  • Contains,
    • Python 3.8
    • PyTorch 1.13.0
    • TorchVision 0.14.0
    • ROS Humble Core packages

Pull or Build

Pull the docker container

docker pull ghcr.io/kalanaratnayake/foxy-ros-pytorch:1-13-humble-core-r32.7.1

Build the docker container

docker buildx build --load --platform linux/arm64 -f ros-pytorch-images/humble_core_pytorch_1_13.Dockerfile -t foxy-ros-pytorch:1-13-humble-core-r32.7.1 .

Start

Start the docker container

docker run --rm -it --runtime nvidia --network host --gpus all -e DISPLAY ghcr.io/kalanaratnayake/foxy-ros-pytorch:1-13-humble-core-r32.7.1 bash

Test

Run the following commands inside the docker container to confirm that the container is working properly.

python3 -c "import torch; print(torch.__version__)"
python3 -c "import torchvision; print(torchvision.__version__)"

Then run the following command inside the docker container to confirm that ROS is working properly.

ros2 run demo_nodes_cpp talker

Run the following command on another instance of the ROS container, or on another computer/Jetson device with ROS Humble installed, to check connectivity and discoverability over the host network (while the above command is running).

ros2 run demo_nodes_py listener

Jetson Nano with ROS2

Setup

System Setup Comparison

Power Supply

  • πŸ”Œ Micro USB Power Supply: Provides 5V and up to 2A (10W maximum). This power configuration is adequate for basic peripherals such as a keyboard, mouse, and a small camera.
  • πŸ”‹ DC Barrel Jack Power Supply: Provides 5V and up to 4A (20W maximum). This option is recommended for scenarios involving intensive tasks such as running Neural Networks or using depth cameras, as it ensures enhanced power stability and reliability.

Configuration Overview

This section details various configurations of Ubuntu and ROS2, tested with and without a graphical user interface (GUI), as well as with and without Docker. The objective is to identify the most stable setup that supports the latest ROS version and optimizes the performance of the Jetson Nano.

  • πŸ–₯️ GUI Availability: The "GUI" column specifies whether a graphical user interface is present in the configuration. Configurations without a GUI were achieved by removing all GUI-related components. For instructions on how to remove the GUI, refer to relevant tutorials.

  • πŸ“Š Idle RAM: Measurements are provided to evaluate the maximum size of Neural Network models that can be accommodated on the device.

  • 🐳 Docker Configurations: In setups utilizing Docker, Idle RAM was measured while the base ROS Docker image was operational.

  • βš™οΈ Overclocking Settings: Tests were conducted with the default overclocking settings. For information on customizing overclocking settings, please consult the overclocking guide.

  • Docker ROS-Humble-ROS-Base can be installed with the following command:

    docker pull dustynv/ros:humble-ros-base-l4t-r32.7.1

Ubuntu | JetPack | CUDA | ROS    | GUI | Docker | CPU / GPU Frequency | Idle RAM / Total (GB) | Links
20.04  | 4.6.2   | 10.2 | Humble | Yes | Yes    | 1900 MHz / 998 MHz  | 1.3 / 3.9             | Image, Tutorial
20.04  | 4.6.2   | 10.2 | Humble | No  | Yes    | 1900 MHz / 998 MHz  | 0.44 / 3.9            | Image, Tutorial
20.04  | 4.6.2   | 10.2 | Foxy   | Yes | No     | 1900 MHz / 998 MHz  | 1.2 / 3.9             | Image, Tutorial
20.04  | 4.6.2   | 10.2 | Foxy   | No  | No     | 1900 MHz / 998 MHz  | 0.40 / 3.9            | Image, Tutorial
18.04  | 4.6.4   | 10.2 | Humble | Yes | Yes    | 1479 MHz / 920 MHz  | Really slow           | Not successful
18.04  | 4.6.4   | 10.2 | Humble | No  | Yes    | 1479 MHz / 920 MHz  | Really slow           | Not successful
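To start the dustynv image pulled above, a run command analogous to the other containers in this README would be (a sketch; adjust the flags to your setup):

docker run --rm -it --runtime nvidia --network host dustynv/ros:humble-ros-base-l4t-r32.7.1 bash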

Jetson ROS2 with YOLO

Docker Usage by adding to compose.yml file

To use the GPU with Docker on AMD64 systems, install nvidia-container-toolkit following the given instructions.
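As a condensed sketch of that installation, assuming the NVIDIA apt repository is already configured (see the linked instructions for the full steps):

# Install the toolkit, register the NVIDIA runtime with Docker, and restart Docker
sudo apt-get install -y nvidia-container-toolkit
sudo nvidia-ctk runtime configure --runtime=docker
sudo systemctl restart docker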

Supported platforms

System      | ROS Version | Value for image                                | Value for device | Size    | Compose file
AMD64       | Humble      | ghcr.io/kalanaratnayake/yolo-ros:humble        | cpu, 0, 0,1,2    | 5.64 GB | docker/compose.amd64.yaml
Jetson Nano | Humble      | ghcr.io/kalanaratnayake/yolo-ros:humble-j-nano | cpu, 0           | 3.29 GB | docker/compose.jnano.yaml

Docker Usage with this repository

Clone this repository

mkdir -p yolo_ws/src && cd yolo_ws/src
git clone https://github.com/PiotrG1996/jetson-installation.git && cd jetson-installation/jetson-ROS-YOLO
cd ..

on AMD64

Pull the Docker image and start compose (No need to run docker compose build)

cd src/yolo_ros/docker
docker compose -f compose.amd64.yaml pull
docker compose -f compose.amd64.yaml up

To clean the system,

cd src/yolo_ros/docker
docker compose -f compose.amd64.yaml down
docker volume rm docker_yolo

on Jetson Nano

Pull the Docker image and start compose (No need to run docker compose build)

cd src/yolo_ros/docker
docker compose -f compose.jnano.yaml pull
docker compose -f compose.jnano.yaml up

To clean the system,

cd src/yolo_ros/docker
docker compose -f compose.jnano.yaml down
docker volume rm docker_yolo

Native Usage

Clone this repository and install dependencies.

git clone https://github.com/PiotrG1996/jetson-installation.git
cd jetson-installation/jetson-ROS-YOLO
pip3 install -r requirements.txt

Build the package

If required, edit the parameters in config/yolo_ros_params.yaml, then run the following at the workspace root,

colcon build

Start the system

To use the launch file, run,

source ./install/setup.bash
ros2 launch yolo_ros yolo.launch.py
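Once the launch file is running, you can verify the node's output with standard ROS 2 tooling (topic names follow the defaults in the parameter table below):

# List the node's topics and inspect the detection output
ros2 topic list | grep yolo_ros
ros2 topic echo /yolo_ros/detection_result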


Parameter description

ROS Parameter           | Docker ENV parameter    | Default Value              | Description
yolo_model              | YOLO_MODEL              | yolov9t.pt                 | Model to be used; see [1] for default models and [2] for custom models
subscribe_depth         | SUBSCRIBE_DEPTH         | True                       | Whether to subscribe to the depth image. Use when a depth camera is available; an ApproximateTimeSynchronizer is used to sync RGB and depth images
input_rgb_topic         | INPUT_RGB_TOPIC         | /camera/color/image_raw    | Topic to subscribe to for the RGB image. Accepts sensor_msgs/Image
input_depth_topic       | INPUT_DEPTH_TOPIC       | /camera/depth/points       | Topic to subscribe to for the depth image. Accepts sensor_msgs/Image
publish_detection_image | PUBLISH_ANNOTATED_IMAGE | False                      | Whether to publish the annotated image; increases callback execution time when set to True
annotated_topic         | ANNOTATED_TOPIC         | /yolo_ros/annotated_image  | Topic for publishing annotated images. Uses sensor_msgs/Image
detailed_topic          | DETAILED_TOPIC          | /yolo_ros/detection_result | Topic for publishing detailed results. Uses yolo_ros_msgs/YoloResult
threshold               | THRESHOLD               | 0.25                       | Confidence threshold for predictions
device                  | DEVICE                  | '0'                        | cpu for CPU, 0 for GPU, 0,1,2,3 if there are multiple GPUs

Note

If the model is available among the Ultralytics models, it will be downloaded at startup. Docker volumes are used to persist downloaded weights so that they are not re-downloaded at each startup.

Tip

To use custom weights, uncomment the commented-out YOLO_MODEL parameter line and set the custom weight file's name as the YOLO_MODEL parameter. Uncomment the Docker bind entry that points to the weights folder, comment out the Docker volume entry for yolo, and copy the custom weights into the weights folder.

Latency description

Here is a summary of whether the latest models work with the yolo_ros node (in Docker) on various platforms, and how long a single iteration of the YoloROS.image_callback function takes. Values are averaged over 100 executions of the function; the input is a 640x480 RGB image at 30 fps.

Performance Metrics

Model       | Jetson Nano (ms) | Jetson Nano (FPS)
yolov10x.pt | 975              | 1.03
yolov10l.pt | 800              | 1.25
yolov10b.pt | 750              | 1.33
yolov10m.pt | 650              | 1.54
yolov10s.pt | 210              | 4.76
yolov10n.pt | 140              | 7.14
yolov9e.pt  | 1600             | 0.62
yolov9c.pt  | 700              | 1.43
yolov9m.pt  | 500              | 2.00
yolov9s.pt  | 300              | 3.33
yolov9t.pt  | 180              | 5.56
yolov8x.pt  | 2000             | 0.50
yolov8l.pt  | 1200             | 0.83
yolov8m.pt  | 700              | 1.43
yolov8s.pt  | 300              | 3.33
yolov8n.pt  | 140              | 7.14

Jetson Copilot Offline Setup Guide

πŸƒ Getting Started

First Time Setup

To set up Jetson Copilot for the first time, follow these steps to ensure that all necessary software is installed and the environment is properly configured.

  1. Clone the Jetson Copilot repository:

    git clone https://github.com/NVIDIA-AI-IOT/jetson-copilot/
  2. Navigate to the cloned directory:

    cd jetson-copilot
  3. Run the setup script:

    ./setup_environment.sh

This script will install the following components if they are not already present on your system:

  • Chromium web browser
  • Docker

How to Start Jetson Copilot

  1. Navigate to the Jetson Copilot directory:

    cd jetson-copilot
  2. Launch Jetson Copilot:

    ./launch_jetson_copilot.sh

This command will start a Docker container, which will then start an Ollama server and a Streamlit app inside the container. The console will display a URL for accessing the web app hosted on your Jetson device.

  3. Open the web app:
    • On your Jetson device: Open Local URL in your web browser.
    • On a PC connected to the same network as your Jetson: Access the Network URL.

Note

  • An internet connection is required on the Jetson device during the first launch to pull the container image and download the default LLM and embedding model.
  • The first time you access the web UI, it will download the default LLM (Llama3) and the embedding model (mxbai-embed-large).

Tips

  • On Ubuntu Desktop, a frameless Chromium window will pop up to access the web app, making it look like an independent application. Ensure to close this window manually if you stop the container from the console, as it won’t automatically close Chromium.

πŸ“– How to Use Jetson Copilot

Interact with the Plain Llama3 (8 billion parameters)

By default, Jetson Copilot uses the Llama3 (8b) model as the default LLM. You can interact with this model without enabling the RAG (Retrieval-Augmented Generation) feature.

1. Ask Jetson-Related Questions Using Pre-Built Index

  1. On the side panel, toggle "Use RAG" to enable the RAG pipeline.
  2. Select a custom knowledge/index from the "Index" dropdown.

A pre-built index "_L4T_README" is available and includes all README text files from the "L4T-README" folder on your Jetson device.

To access the L4T-README folder, mount it first:

udisksctl mount -b /dev/disk/by-label/L4T-README

You can ask questions related to Jetson specifics, such as:

  • What IP address does Jetson get assigned when connected to a PC via a USB cable in USB Device Mode?

2. Build Your Own Index Based on Your Documents

  1. Create a directory under Documents to store your documents:

    cd jetson-copilot
    mkdir Documents/Jetson-Orin-Nano
    cd Documents/Jetson-Orin-Nano
    wget https://developer.nvidia.com/downloads/assets/embedded/secure/jetson/orin_nano/docs/jetson_orin_nano_devkit_carrier_board_specification_sp.pdf
  2. In the web UI, open the side bar, toggle "Use RAG," and click "βž• Build a new index" to open the "Build Index" page.

  3. Name your index (e.g., "JON Carrier Board") and specify the path for the index directory.

  4. Select the directory you created (e.g., /opt/jetson_copilot/Documents/Jetson-Orin-Nano) or enter URLs for online documents if needed.

  5. Ensure that mxbai-embed-large is selected for the embedding model. Note that OpenAI embedding models are not well-supported and may require additional testing.

  6. Click "Build Index" and monitor the progress in the status container. Once completed, you can select your newly built index from the home screen.

3. Test Different LLM or Embedding Models

This section is TODO and will be updated with instructions for testing different LLMs and embedding models.

πŸ—οΈ Development

Developing your Streamlit-based web app is straightforward:

  1. Enable automatic updates of the app every time you change the source code by selecting "Always rerun" in the web UI.

  2. For more fundamental changes, manually run the Streamlit app inside the container:

    cd jetson-copilot
    ./launch_dev.sh

    Once inside the container:

    streamlit run app.py

🧱 Directory Structure

Here's an overview of the directory structure:

└── jetson-copilot
    β”œβ”€β”€ launch_jetson_copilot.sh
    β”œβ”€β”€ setup_environment.sh
    β”œβ”€β”€ Documents 
    β”‚   └── your_abc_docs
    β”œβ”€β”€ Indexes
    β”‚   β”œβ”€β”€ _L4T_README
    β”‚   └── your_abc_index
    β”œβ”€β”€ logs
    β”‚   β”œβ”€β”€ container.log
    β”‚   └── ollama.log
    β”œβ”€β”€ ollama_models
    └── Streamlit_app
        β”œβ”€β”€ app.py
        β”œβ”€β”€ build_index.py
        └── download_model.py

References

Here are the references and resources used in the project:

  1. Qengineering - Link to GitHub
  2. Pythops - Link to GitHub