
Reachy Mini Voice Robot

Turn Reachy Mini into a voice companion — it listens, thinks, speaks, and expresses emotions in under 1 second, all running locally on Jetson.

Intermediate · 30 min · Voice AI
Tags: voice, Jetson, robot, reachy, ollama, local

What This Solution Does

Turn your Reachy Mini desktop robot into a real-time voice companion. The robot listens to its surroundings, thinks with a local AI model, speaks back, and expresses emotions through head movements and antenna poses — all with under 1 second end-to-end latency, running entirely on your Jetson device.

Core Value

| Value | Description |
| --- | --- |
| Real-time Conversation | Under 1 second from hearing your voice to speaking back — fast enough for natural dialogue |
| Emotional Expressions | 14 distinct emotions (happy, curious, surprised, etc.) shown through head movements and antenna poses |
| Fully Local | Everything runs on your Jetson — no cloud, no subscription, no internet required after setup |
| Monologue Mode | Robot automatically generates "inner thoughts" for exhibition/demo scenarios without any user interaction |

Use Cases

| Scenario | Description |
| --- | --- |
| Exhibition & Trade Shows | Place the robot at your booth — it talks to itself, reacts to visitors, and draws attention |
| Retail & Reception | Greet customers with natural conversation and emotional expressions |
| Education & Research | Study human-robot interaction with a fully customizable local AI pipeline |
| Office Companion | A desk robot that mutters observations and responds when spoken to |

What You Need

Hardware:

| Device | Purpose |
| --- | --- |
| Reachy Mini (by Pollen Robotics) | Desktop robot with arms, head, antennas, and camera |
| NVIDIA Jetson Orin NX 16GB | Runs all AI services — conversation, speech, and robot control |
| USB cable | Connects Reachy Mini to Jetson |

Software prerequisites:

| Prerequisite | How to Get It |
| --- | --- |
| JetPack 6.x on Jetson | Pre-installed on reComputer Jetson |
| Docker with NVIDIA runtime | Pre-installed on JetPack 6.x |
| Speech service (port 8621) | Deploy the Jetson Local Voice Assistant solution first |

Network: Internet required during deployment to download services (~5 GB) and AI model (~1.5 GB). After setup, runs fully offline.
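Before deploying, you can sanity-check the Docker prerequisite from the table above. A minimal sketch, assuming only that the `docker` CLI is on PATH (the helper names are ours, not part of the deployer):

```python
# Sketch: verify the Docker + NVIDIA runtime prerequisite before deploying.
# Not an official checker; it just illustrates what "ready" means here.
import json
import subprocess

def has_nvidia_runtime(runtimes_json: str) -> bool:
    """True if Docker's runtime list (JSON from `docker info`) includes 'nvidia'."""
    return "nvidia" in json.loads(runtimes_json)

def check_docker() -> bool:
    """Query the local Docker daemon for its configured runtimes."""
    try:
        out = subprocess.run(
            ["docker", "info", "--format", "{{json .Runtimes}}"],
            capture_output=True, text=True, check=True,
        ).stdout
        return has_nvidia_runtime(out)
    except (OSError, subprocess.CalledProcessError):
        return False  # docker missing or daemon not running

if __name__ == "__main__":
    print("NVIDIA runtime:", "ok" if check_docker() else "missing")
```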

Requirements

  • audio: Microphone input for voice interaction

Deployment Configuration

Download and Install

Preset: Deploy Reachy Voice Robot {#default}

Deploy the full voice conversation stack on your Jetson device. The robot will listen, think with a local AI model, speak, and express emotions — all with under 1 second of latency.

| Device | Purpose |
| --- | --- |
| NVIDIA Jetson Orin NX 16GB | Runs AI conversation, speech processing, and robot control |
| Reachy Mini | Desktop robot with arms, head, antennas, and camera |

What gets deployed:

  • Robot Control — motor, camera, and sensor management
  • Conversation Engine — AI dialogue + emotion system + web dashboard
  • Vision Analysis — face detection, emotion recognition, and person tracking (GPU-accelerated)
  • Local AI Model — powers the robot's thinking ability (auto-installed if not present)

Prerequisites:

  • Reachy Mini connected to Jetson via USB
  • Jetson with JetPack 6.x, SSH access, and internet

Step 1: Deploy Speech Service {#speech_service type=docker_deploy required=true config=devices/speech.yaml}

Deploy the GPU-accelerated speech recognition (ASR) and voice synthesis (TTS) service. The pre-built image includes all dependencies and models — just pull and run.

Target: Remote Deployment {#speech_remote type=remote config=devices/speech.yaml default=true}

Deploy to your Jetson over SSH with one click.

Wiring

  1. Connect your Jetson to the network
  2. Enter the Jetson's IP address and SSH credentials
  3. Click Deploy — the system will pull the pre-built image and start the service automatically

Deployment Complete

Speech service is running at http://<jetson-ip>:8621. Quick test:

# Check service health
curl http://<jetson-ip>:8621/health
# Expected: {"asr": true, "tts": true, "streaming_asr": true}
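If the health check does not pass right away, the ~40 second warmup can be scripted around. A minimal polling sketch (stdlib only; the endpoint and JSON shape come from the quick test above, the retry timings are illustrative):

```python
# Sketch: poll the speech service /health endpoint until ASR and TTS report
# ready. First startup takes ~40 s for model warmup, so we retry with a deadline.
import json
import time
import urllib.request

def is_healthy(body: bytes) -> bool:
    """True when every component in the health JSON reports ready."""
    status = json.loads(body)
    return all(status.get(k) for k in ("asr", "tts", "streaming_asr"))

def wait_for_speech(url: str, timeout_s: int = 90, interval_s: int = 5) -> bool:
    """Retry the health check until it passes or the deadline expires."""
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        try:
            with urllib.request.urlopen(url, timeout=5) as resp:
                if is_healthy(resp.read()):
                    return True
        except OSError:
            pass  # service still starting; keep polling
        time.sleep(interval_s)
    return False

# wait_for_speech("http://<jetson-ip>:8621/health")
```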

Troubleshooting

| Issue | Solution |
| --- | --- |
| SSH connection failed | Verify the IP address and credentials. Try `ssh username@ip` from your computer first |
| Image pull slow | The image is ~8 GB compressed. Ensure stable internet on the Jetson |
| Service not starting | Check logs: `ssh user@ip "cd jetson-voice && docker compose logs"` |
| Health check fails | First startup takes ~40 seconds for model warmup. Wait and retry |

Target: Local Deployment {#speech_local type=local config=devices/speech_local.yaml}

Deploy directly on the current machine (requires NVIDIA GPU).

Wiring

  1. Ensure Docker and NVIDIA Container Toolkit are installed
  2. Click Deploy to start installation

Note: First startup may take 10-15 minutes for Docker image download and model initialization.

Deployment Complete

Speech service is running at http://localhost:8621. Quick test:

# Check service health
curl http://localhost:8621/health
# Expected: {"asr": true, "tts": true, "streaming_asr": true}

Troubleshooting

| Issue | Solution |
| --- | --- |
| NVIDIA runtime not found | Install NVIDIA Container Toolkit: `sudo apt install nvidia-container-toolkit && sudo systemctl restart docker` |
| Port 8621 already in use | Stop existing services on port 8621 |
| Container keeps restarting | Check logs: `docker logs jetson-voice-speech-1` |
| Health check fails | First startup takes ~40 seconds for model warmup. Wait and retry |

Step 2: Deploy Reachy Voice Robot {#reachy_deploy type=docker_deploy required=true config=devices/reachy.yaml}

Deploy the robot control, conversation, and vision services to your Jetson. The system will automatically install Ollama and pull the AI model if not present.
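The "install Ollama and pull the model if not present" step can be sketched as follows. `ollama list` and `ollama pull` are the real CLI commands; the model name at the bottom is a placeholder, substitute whatever `llm.model` names in your config:

```python
# Sketch of the deployer's Ollama step: pull the AI model only if it is not
# already installed locally. The model name is an assumption for illustration.
import subprocess

def model_installed(list_output: str, model: str) -> bool:
    """Check `ollama list` output (header row + one model per line) for the model."""
    return any(line.split()[0].startswith(model)
               for line in list_output.splitlines()[1:] if line.split())

def ensure_model(model: str) -> None:
    """Pull the model through the ollama CLI unless it is already present."""
    out = subprocess.run(["ollama", "list"], capture_output=True,
                         text=True, check=True).stdout
    if not model_installed(out, model):
        subprocess.run(["ollama", "pull", model], check=True)  # ~1.5 GB download

# ensure_model("qwen2.5:1.5b")  # placeholder model name
```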

Target: Remote Deployment {#reachy_remote type=remote config=devices/reachy.yaml default=true}

Deploy to your Jetson over SSH with one click.

Wiring

  1. Connect Reachy Mini to Jetson via USB cable
  2. Ensure the Jetson is on the network and SSH is accessible
  3. Enter the Jetson's IP address and SSH credentials
  4. Configure the data directory (default: ~/reachy-data) for captures and face database
  5. Choose the Vision Backend:
    • Local (default) — runs vision-trt on this Jetson. Needs the USB camera attached and builds TensorRT engines on first boot (~3-5 min).
    • Cloud — skips the local vision-trt container and points the robot at a remote vision service. Fill in the Vision Service URL (e.g. tcp://192.168.1.50:8631) when selecting this option.
  6. Optionally enable Kiosk Mode to auto-launch the dashboard fullscreen on boot
  7. Click Deploy — the system will:
    • Install Ollama and pull the AI model if not present (~1.5 GB)
    • Pull and start robot control, conversation, and (for Local backend) vision services

Deployment Complete

The robot should start talking within 30 seconds after deployment. Open the dashboard to monitor activity:

http://<jetson-ip>:8640

Default mode: Monologue — the robot automatically generates "inner thoughts" every 5 seconds. No user interaction needed.

To check all services are running:

ssh user@<jetson-ip> "docker ps --format 'table {{.Names}}\t{{.Status}}'"
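The container check above can be automated. A small sketch that parses the `docker ps` table and reports what is missing (the expected names are taken from other commands in this guide and may differ in your deployment):

```python
# Sketch: parse the output of
#   docker ps --format 'table {{.Names}}\t{{.Status}}'
# and report which expected containers are not in an "Up" state.
EXPECTED = {"reachy-daemon", "reachy-claw", "vision-trt"}  # assumed names

def running_containers(ps_table: str) -> set:
    """Names whose STATUS column starts with 'Up' (skips the header row)."""
    names = set()
    for line in ps_table.splitlines()[1:]:
        parts = line.split(None, 1)  # name, then the rest of the status text
        if len(parts) == 2 and parts[1].startswith("Up"):
            names.add(parts[0])
    return names

def missing_services(ps_table: str) -> set:
    """Expected containers that are absent or not running."""
    return EXPECTED - running_containers(ps_table)
```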

Troubleshooting

| Issue | Solution |
| --- | --- |
| Model download slow | The AI model is ~1.5 GB. Ensure stable internet. Progress shows in deployment logs |
| Slow reply (>10 s) | Ollama fell back to CPU. GPU is auto-configured by the deployer; if the issue persists: `sudo systemctl restart ollama` |
| Robot not moving | Check USB connection. Try replugging the USB cable and restart: `docker restart reachy-daemon` |
| No audio output | Verify Reachy Mini's built-in speaker is working. Check `audio.device` in config |
| Dashboard not loading | Wait 30 seconds for startup. Check: `curl http://<jetson-ip>:8640/health` |
| No camera feed | Vision service builds TRT engines on first boot (~5 min). Check: `docker logs vision-trt` |
| Camera not found on boot | USB camera takes 15-30 s to enumerate. Vision service retries automatically (~90 s) |
| Camera drops after hours | USB power-management regression. A udev rule disabling autosuspend is installed by the deployer; if it recurs, physically replug the Reachy USB cable |

Target: Local Deployment {#reachy_local type=local config=devices/reachy_local.yaml}

Deploy directly on the current machine (requires NVIDIA Jetson with Reachy Mini connected).

Wiring

  1. Connect Reachy Mini to the machine via USB cable
  2. Ensure Docker and NVIDIA Container Toolkit are installed
  3. Click Deploy to start installation

Note: First startup may take 5-10 minutes for Docker image download and model initialization.

Deployment Complete

The robot should start talking within 30 seconds after deployment. Open the dashboard to monitor activity:

http://localhost:8640

Troubleshooting

| Issue | Solution |
| --- | --- |
| NVIDIA runtime not found | Install NVIDIA Container Toolkit: `sudo apt install nvidia-container-toolkit && sudo systemctl restart docker` |
| Robot not moving | Check USB connection. Try replugging the USB cable and restart: `docker restart reachy-daemon` |
| Dashboard not loading | Wait 30 seconds for startup. Check: `curl http://localhost:8640/health` |
| No camera feed | Vision service builds TRT engines on first boot (~5 min). Check: `docker logs vision-trt` |

Deployment Complete

Your Reachy Mini voice robot is now running!

What's Happening

The robot is in Monologue Mode — it automatically generates thoughts and speaks them out loud every few seconds, with matching head movements and antenna expressions. This is ideal for exhibitions and demos.
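The loop behind Monologue Mode looks roughly like this. This is an illustrative sketch, not the actual implementation: `think` and `speak` stand in for the Ollama and TTS calls, and the emotion choice here is a placeholder (the real system picks from 14 expressions based on the generated text):

```python
# Sketch of the monologue cycle: every few seconds, generate an "inner
# thought", pick an emotion, and speak it with a matching pose.
import time
from typing import Callable

EMOTIONS = ["happy", "curious", "surprised"]  # subset of the 14 expressions

def monologue(think: Callable[[], str],
              speak: Callable[[str, str], None],
              ticks: int, interval_s: float = 5.0) -> list:
    """Run `ticks` monologue cycles; returns the spoken thoughts."""
    spoken = []
    for i in range(ticks):
        thought = think()
        emotion = EMOTIONS[i % len(EMOTIONS)]  # placeholder emotion choice
        speak(thought, emotion)               # would drive TTS + head/antennas
        spoken.append(thought)
        if i + 1 < ticks:
            time.sleep(interval_s)
    return spoken
```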

Service Overview

| Service | Port | Purpose |
| --- | --- | --- |
| Robot Control | 38001 | Motor, camera, and sensor management |
| Conversation Engine | 8640 | AI dialogue + emotion system + dashboard |
| Vision Analysis | 8630 | Face detection, emotion recognition, person tracking |
| Local AI Model | 11434 | Powers the robot's thinking ability |
| Speech Service | 8621 | Listens and speaks (deployed in Step 1) |
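All five ports from the table can be probed with a few lines of stdlib Python. Run it on the Jetson itself, or point `host` at the Jetson's IP:

```python
# Sketch: TCP-probe each service port from the overview table to see which
# services are currently listening.
import socket

SERVICES = {
    "Robot Control": 38001,
    "Conversation Engine": 8640,
    "Vision Analysis": 8630,
    "Local AI Model": 11434,
    "Speech Service": 8621,
}

def probe(host: str, port: int, timeout_s: float = 2.0) -> bool:
    """True if a TCP connection to host:port succeeds within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout_s):
            return True
    except OSError:
        return False

def report(host: str) -> dict:
    """Map each service name to whether its port accepted a connection."""
    return {name: probe(host, port) for name, port in SERVICES.items()}
```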

Next Steps

  • Open the Dashboard at http://<jetson-ip>:8640 to see conversation logs and robot status
  • To switch from monologue to interactive mode, edit the config on the Jetson:
    ssh user@<jetson-ip>
    nano ~/reachy-deploy/reachy-claw.jetson.yaml
    # Change: conversation.mode: conversation
    # Change: barge_in.enabled: true
    docker restart reachy-claw
    
  • To change the AI model: ollama pull <model-name>, then update llm.model in the config and docker restart reachy-claw
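The monologue-to-conversation switch above can also be scripted instead of hand-editing with nano. A sketch using plain text substitution (it assumes each key appears once in the YAML, as in the comments above; a real YAML parser would be more robust):

```python
# Sketch: flip conversation.mode and barge_in.enabled in the config file,
# then restart reachy-claw. Plain regex substitution on the YAML text.
import re

def set_key(yaml_text: str, key: str, value: str) -> str:
    """Replace the first `key: <old>` line with `key: value`, keeping indentation."""
    pattern = rf"(^\s*{re.escape(key)}:\s*).*$"
    return re.sub(pattern, rf"\g<1>{value}", yaml_text, count=1, flags=re.M)

def enable_conversation_mode(yaml_text: str) -> str:
    """Switch to interactive mode: conversation.mode + barge_in.enabled."""
    text = set_key(yaml_text, "mode", "conversation")
    return set_key(text, "enabled", "true")

# Usage (on the Jetson):
#   path = os.path.expanduser("~/reachy-deploy/reachy-claw.jetson.yaml")
#   with open(path) as f: cfg = f.read()
#   with open(path, "w") as f: f.write(enable_conversation_mode(cfg))
#   then: docker restart reachy-claw
```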