Jetson Nano 2GB

AIoT toy

Getting Started with AI on Jetson Nano

set up the device

Download the image, write it to your SD card, and complete the first-boot setup.

configure swap

# You should see 4071 as the size of the swap if you have 4GB configured
free -m

# Disable ZRAM:
sudo systemctl disable nvzramconfig

# Create a 4GB swap file
sudo fallocate -l 4G /mnt/4GB.swap
sudo chmod 600 /mnt/4GB.swap
sudo mkswap /mnt/4GB.swap

# Append the following line to /etc/fstab
# (note: `sudo echo ... >> /etc/fstab` would fail, because the redirection runs without root)
echo "/mnt/4GB.swap swap swap defaults 0 0" | sudo tee -a /etc/fstab

# REBOOT!

Headless mode

Use the Jetson Nano 2GB in “USB device mode”: connect the Jetson Nano to your computer with a USB cable, much like an eGPU.

  1. Power on, wait about 30 seconds, then connect the USB cable.

  2. ssh <username>@192.168.55.1

  3. mkdir -p ~/nvdli-data

  4. # create a script to run the Docker environment
     # check the container's tag here:
     # https://ngc.nvidia.com/catalog/containers/nvidia:dli:dli-nano-ai
     echo "sudo docker run --runtime nvidia -it --rm --network host \
         --volume ~/nvdli-data:/nvdli-nano/data \
         --device /dev/video0 \
         nvcr.io/nvidia/dli/dli-nano-ai:v2.0.1-r32.4.4" > docker_dli_run.sh
     chmod +x docker_dli_run.sh
     # run the script to activate the env
     ./docker_dli_run.sh

Use JupyterLab from your laptop:

Open 192.168.55.1:8888 in a browser; the password is dlinano.

Camera setup

check available devices

!ls -ltrh /dev/video*
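The same check can be done from a notebook cell with the Python standard library (the list will simply be empty if no camera is attached):

```python
import glob

# List V4L2 video device nodes, e.g. ['/dev/video0'] when a USB camera is plugged in
devices = sorted(glob.glob('/dev/video*'))
print(devices)
```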

first method to use the camera

create camera object

from jetcam.usb_camera import USBCamera

# TODO change capture_device if incorrect for your system
camera = USBCamera(width=224, height=224, capture_width=640, capture_height=480, capture_device=0)

capture a frame

image = camera.read()
print(image.shape)
print(camera.value.shape)

create a widget to view

import ipywidgets
from IPython.display import display
from jetcam.utils import bgr8_to_jpeg

image_widget = ipywidgets.Image(format='jpeg')

image_widget.value = bgr8_to_jpeg(image)

display(image_widget)

use a function to update frame

camera.running = True

def update_image(change):
    image = change['new']
    image_widget.value = bgr8_to_jpeg(image)

camera.observe(update_image, names='value')
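For reference, `observe` hands the callback a change dictionary whose 'new' key holds the fresh trait value (and 'old' the previous one), each time the observed trait is reassigned. A minimal illustrative sketch of this callback pattern in plain Python (a toy model, not the traitlets or jetcam implementation):

```python
class Observable:
    """Toy model of a traitlets-style observable attribute."""
    def __init__(self, value=None):
        self._value = value
        self._observers = []

    def observe(self, callback):
        self._observers.append(callback)

    def unobserve(self, callback):
        self._observers.remove(callback)

    @property
    def value(self):
        return self._value

    @value.setter
    def value(self, new):
        old, self._value = self._value, new
        # mimic traitlets: pass a change dict to each observer on reassignment
        for cb in self._observers:
            cb({'old': old, 'new': new})


frames = []
cam = Observable()
cam.observe(lambda change: frames.append(change['new']))
cam.value = 'frame-1'
cam.value = 'frame-2'
print(frames)  # ['frame-1', 'frame-2']
```

This is why `update_image` above reads `change['new']`: that key carries the latest camera frame.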

stop the stream

camera.unobserve(update_image, names='value')

second method to use the camera

import traitlets

# Use the traitlets dlink method to connect the camera to the widget,
# passing a transform as one of the parameters. This eliminates some steps.
camera_link = traitlets.dlink((camera, 'value'), (image_widget, 'value'), transform=bgr8_to_jpeg)
# remove the link
camera_link.unlink()
# reconnect
camera_link.link()
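For reference, `dlink` is a one-directional link: each write to the source trait passes through the transform and lands on the target trait, and the target is also synced once when the link is created. A toy plain-Python model of those semantics (illustrative only; `Trait` and `toy_dlink` are made-up names, not the traitlets API):

```python
class Trait:
    """Toy stand-in for an object with an observable 'value' trait."""
    def __init__(self, value=None):
        self.value = value
        self._observers = []

    def set(self, new):
        self.value = new
        for cb in self._observers:
            cb(new)


def toy_dlink(source, target, transform=lambda x: x):
    # one-directional: writes to source propagate to target, never the reverse
    cb = lambda new: setattr(target, 'value', transform(new))
    source._observers.append(cb)
    target.value = transform(source.value)  # dlink also syncs once at creation
    return cb


camera_like = Trait(10)
widget_like = Trait()
link = toy_dlink(camera_like, widget_like, transform=lambda x: x * 2)
print(widget_like.value)  # 20 (initial sync through the transform)
camera_like.set(21)
print(widget_like.value)  # 42
# "unlink": stop propagating
camera_like._observers.remove(link)
camera_like.set(100)
print(widget_like.value)  # still 42
```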

Release the camera

import os
# exit the process to shut down the Jupyter kernel and release the camera device
os._exit(00)

GUI settings

Disabling the Desktop GUI

If you are running low on memory while training, you may want to try disabling the Ubuntu desktop GUI. This frees up the extra memory that the window manager and desktop use (around ~800MB for Unity/GNOME or ~250MB for LXDE).

You can disable the desktop temporarily, run commands in the console, and then restart the desktop when you are done training:

$ sudo init 3     # stop the desktop
# log your user back into the console
# run the PyTorch training scripts
$ sudo init 5     # restart the desktop

If you wish to make this persistent across reboots, you can use the following commands to change the boot-up behavior:

$ sudo systemctl set-default multi-user.target     # disable desktop on boot
$ sudo systemctl set-default graphical.target      # enable desktop on boot

refs:

https://github.com/dusty-nv/jetson-inference/blob/master/docs/pytorch-transfer-learning.md

https://developer.nvidia.com/embedded/learn/get-started-jetson-nano-2gb-devkit#next
