
Practical case: OpenCV Object Tracking on Raspberry Pi 4

Objective and use case

What you’ll build: A robust, real-time object tracking system using OpenCV on Raspberry Pi 4 with HQ Camera, capturing frames and validating with overlays and a GPIO LED indicator.

Why it matters / Use cases

  • Enhancing security systems by tracking moving objects in real-time for surveillance applications.
  • Implementing automated inventory management in warehouses by tracking items as they move.
  • Developing interactive robotics that can follow and respond to human movements.
  • Creating augmented reality applications that require real-time object recognition and tracking.

Expected outcome

  • Maintain a reliable lock on a high‑contrast target across varied lighting conditions.
  • Process 1280×720 video at roughly 12–25 FPS with CSRT (faster with KCF or at lower resolutions) without noticeable display lag.
  • Keep per‑frame tracking latency under 200 milliseconds.
  • Use a GPIO LED indicator to provide real‑time feedback on tracking status.

Audience: Developers and hobbyists interested in computer vision; Level: Intermediate.

Architecture/flow: Raspberry Pi 4 Model B with HQ Camera capturing frames processed by OpenCV, with outputs displayed on a local GUI and feedback via GPIO.

Advanced Hands‑On: OpenCV Object Tracking on Raspberry Pi 4 Model B + HQ Camera

Objective: Build a robust, real‑time object tracking system using OpenCV (CSRT/KCF trackers) on Raspberry Pi OS Bookworm 64‑bit with Python 3.11, capturing frames from the Raspberry Pi HQ Camera via the libcamera/Picamera2 stack and validating with on‑screen overlays and a GPIO LED status indicator.

Device family: Raspberry Pi
Exact model used: Raspberry Pi 4 Model B + HQ Camera


Prerequisites

  • Raspberry Pi OS Bookworm 64‑bit installed on microSD and booting successfully on a Raspberry Pi 4 Model B.
  • Internet connectivity (ethernet or Wi‑Fi) to install packages.
  • Local display (HDMI) or VNC for GUI windows (for ROI selection). Headless mode is also supported.
  • Basic familiarity with Linux, Python virtual environments, and the command line.

Before proceeding, update the OS:

sudo apt update
sudo apt full-upgrade -y
sudo reboot

Materials (with exact model)

  • Raspberry Pi 4 Model B (2 GB, 4 GB, or 8 GB RAM)
  • Raspberry Pi HQ Camera (Sony IMX477) with ribbon cable
  • C/CS‑mount lens compatible with HQ Camera (e.g., 6 mm CS‑mount or 16 mm C‑mount with C‑to‑CS adapter)
  • MicroSD card (≥ 32 GB, UHS‑I recommended)
  • Official Raspberry Pi 5.1V/3A USB‑C power supply
  • Micro‑HDMI to HDMI cable (for local display) or VNC enabled on Raspberry Pi OS
  • Optional validation hardware:
  • 1 × 5 mm LED
  • 1 × 330 Ω resistor (±5%)
  • 2 × male‑female jumper wires
  • A high‑contrast object to track (e.g., a colored cube, a printed logo, or a marked box)

Setup / Connection

1) Enable camera and interfaces

Raspberry Pi OS Bookworm uses the libcamera stack, so the legacy “Camera” toggle is not required. However, we will verify camera detection and provide a fallback overlay.

  • Using raspi-config:
  • Open configuration:
    sudo raspi-config
  • Recommended toggles:
    • Interface Options:
    • I2C: Enable (Y) — not strictly required for this project but useful for future improvements (e.g., I2C PWM driver).
    • SPI: Enable (Optional).
    • Display Options: leave as default (Wayland is fine; if OpenCV windows won’t open, switch to X11 later via raspi-config → Advanced Options → Wayland → Disable).
  • Finish and reboot if you changed settings.

  • Fallback device tree overlay for HQ Camera (Sony IMX477):
    If your camera is not detected, add the IMX477 overlay manually:
    sudo nano /boot/firmware/config.txt
    Append at the end:
    dtoverlay=imx477
    Save, exit, and reboot:
    sudo reboot

  • Validate camera enumeration:
    libcamera-hello --list-cameras
    The output should list a camera similar to:

  • 4056×3040 IMX477 (Raspberry Pi HQ Camera)

If you see no cameras or an error, revisit the ribbon cable orientation (details below).
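
If you prefer checking from Python, Picamera2 can also enumerate attached cameras. A minimal sketch (assumes python3-picamera2 is installed):

#!/usr/bin/env python3
# List cameras seen by the libcamera/Picamera2 stack.
from picamera2 import Picamera2

cameras = Picamera2.global_camera_info()    # one dict per detected camera
if not cameras:
    print("No cameras detected - check the ribbon cable or add dtoverlay=imx477")
for idx, cam in enumerate(cameras):
    print(f"Camera {idx}: {cam}")           # the HQ Camera reports an imx477 sensor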

2) Connect the Raspberry Pi HQ Camera

  • Power off the Pi before connecting.
  • Gently lift the tabs of the CSI (camera) connector on the Raspberry Pi 4 Model B.
  • Insert the ribbon cable with the metallic contacts facing the HDMI ports.
  • Firmly push the tab back down to lock the cable.
  • On the HQ Camera side, insert the cable into the camera board connector with contacts facing the sensor PCB; lock the tab.
  • Mount the lens:
  • If you have a C‑mount lens and the camera is CS‑mount, add the provided 5 mm C‑to‑CS adapter ring.
  • Screw in the lens, set initial focus to mid‑range.

After booting:
– Verify camera:
libcamera-hello -t 5000
You should see a preview window for 5 seconds.

3) Optional LED connection for tracking status

Wire an LED to indicate “tracking locked” status. We’ll use GPIO 18 (physical pin 12). The series resistor can be on either side of the LED; orientation matters (long lead is anode).

Purpose            Raspberry Pi 4 pin   Signal name   Component side
LED anode (+)      Pin 12               GPIO 18       through the 330 Ω resistor to the LED anode (+)
LED cathode (−)    Pin 6                GND           LED cathode (−) directly to GND

Notes:
– Series resistor value: 220–470 Ω; the specified 330 Ω works well.
– Never connect an LED directly to a GPIO pin without a resistor.
– A quick wiring test sketch follows below.
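
Before running the tracker, the wiring can be verified with a short gpiozero sketch (assumes the LED sits on BCM GPIO 18 as in the table above):

#!/usr/bin/env python3
# Blink the status LED for a few seconds to confirm the wiring.
from time import sleep
from gpiozero import LED

led = LED(18)                               # BCM numbering (physical pin 12)
led.blink(on_time=0.25, off_time=0.25)      # blinking runs in the background
sleep(5)
led.off()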


Full Code

Create a project directory and a Python file:

mkdir -p ~/projects/pi4-hq-object-tracking
cd ~/projects/pi4-hq-object-tracking

Save the following as camera_tracker.py:

#!/usr/bin/env python3
"""
camera_tracker.py
OpenCV Contrib tracker (CSRT/KCF) using Raspberry Pi 4 Model B + HQ Camera (IMX477) via Picamera2.
- GUI ROI selection (cv2.selectROI) when display is available
- Headless mode supported via --roi "x,y,w,h" or --roi-file
- Optional MP4 recording with annotated frames
- GPIO LED (GPIO 18) indicates tracking lock
Tested with:
  - Raspberry Pi OS Bookworm 64-bit
  - Python 3.11
  - picamera2 from apt
  - opencv-contrib-python==4.9.0.80
"""

import argparse
import time
import json
import os
from collections import deque

import numpy as np
import cv2

from picamera2 import Picamera2

# GPIO LED indicator
from gpiozero import LED

DEFAULT_FPS = 30
DEFAULT_W, DEFAULT_H = 1280, 720
TRACKER_CHOICES = ["csrt", "kcf"]
ROI_FILE_DEFAULT = "roi.json"


def create_tracker(name: str):
    name = name.lower()
    if name not in TRACKER_CHOICES:
        raise ValueError(f"Unsupported tracker: {name}. Choose from {TRACKER_CHOICES}")
    # OpenCV changed tracker API over versions; handle both namespaces
    if name == "csrt":
        if hasattr(cv2, "legacy") and hasattr(cv2.legacy, "TrackerCSRT_create"):
            return cv2.legacy.TrackerCSRT_create()
        elif hasattr(cv2, "TrackerCSRT_create"):
            return cv2.TrackerCSRT_create()
    elif name == "kcf":
        if hasattr(cv2, "legacy") and hasattr(cv2.legacy, "TrackerKCF_create"):
            return cv2.legacy.TrackerKCF_create()
        elif hasattr(cv2, "TrackerKCF_create"):
            return cv2.TrackerKCF_create()
    raise RuntimeError("OpenCV contrib trackers not available. Install opencv-contrib-python.")


def parse_roi(s: str):
    # "x,y,w,h"
    parts = [int(p) for p in s.split(",")]
    if len(parts) != 4:
        raise ValueError("ROI must be 'x,y,w,h'")
    x, y, w, h = parts
    if min(w, h) <= 0:
        raise ValueError("ROI width/height must be positive")
    return (x, y, w, h)


def load_roi(path: str):
    with open(path, "r") as f:
        data = json.load(f)
    return tuple(int(data[k]) for k in ("x", "y", "w", "h"))


def save_roi(path: str, roi):
    x, y, w, h = [int(v) for v in roi]
    with open(path, "w") as f:
        json.dump({"x": x, "y": y, "w": w, "h": h}, f, indent=2)


def draw_overlay(img_bgr, bbox, fps=None, status=""):
    x, y, w, h = [int(v) for v in bbox]
    cv2.rectangle(img_bgr, (x, y), (x + w, y + h), (0, 220, 0), 2)
    if fps is not None:
        cv2.putText(img_bgr, f"FPS: {fps:.1f}", (10, 25),
                    cv2.FONT_HERSHEY_SIMPLEX, 0.7, (0, 255, 255), 2, cv2.LINE_AA)
    if status:
        cv2.putText(img_bgr, status, (10, 50),
                    cv2.FONT_HERSHEY_SIMPLEX, 0.7, (255, 255, 0), 2, cv2.LINE_AA)


def main():
    ap = argparse.ArgumentParser(description="OpenCV object tracking on Raspberry Pi HQ Camera")
    ap.add_argument("-t", "--tracker", default="csrt", choices=TRACKER_CHOICES,
                    help="Tracker algorithm")
    ap.add_argument("--size", default=f"{DEFAULT_W}x{DEFAULT_H}",
                    help="Frame size WxH, e.g., 1280x720")
    ap.add_argument("--fps", type=int, default=DEFAULT_FPS, help="Target FPS")
    ap.add_argument("--roi", type=str, default=None, help='ROI as "x,y,w,h" for headless start')
    ap.add_argument("--roi-file", type=str, default=ROI_FILE_DEFAULT, help="Path to ROI JSON file")
    ap.add_argument("--save-roi", action="store_true", help="Save selected ROI to roi-file")
    ap.add_argument("--record", type=str, default=None, help="Output MP4 path to record annotated video")
    ap.add_argument("--no-gui", action="store_true", help="Run without cv2.imshow windows")
    ap.add_argument("--led-gpio", type=int, default=18, help="GPIO pin for LED indicator (BCM numbering)")
    ap.add_argument("--max-miss", type=int, default=15, help="Max consecutive misses before status resets")
    ap.add_argument("--duration", type=int, default=0, help="Optional duration limit in seconds (0 = unlimited)")
    args = ap.parse_args()

    w, h = [int(v) for v in args.size.lower().split("x")]
    tracker = create_tracker(args.tracker)

    # LED setup
    led = LED(args.led_gpio)
    led.off()

    # Camera setup
    picam2 = Picamera2()
    config = picam2.create_video_configuration(
        main={"size": (w, h), "format": "RGB888"},
        controls={"FrameRate": args.fps},
    )
    picam2.configure(config)
    picam2.start()
    time.sleep(0.3)  # small warm-up

    # Determine ROI
    bbox = None
    if args.roi:
        bbox = parse_roi(args.roi)
    elif os.path.exists(args.roi_file):
        try:
            bbox = load_roi(args.roi_file)
            print(f"[INFO] Loaded ROI from {args.roi_file}: {bbox}")
        except Exception as e:
            print(f"[WARN] Failed to load ROI file: {e}")

    # First frame for ROI selection if needed
    if bbox is None:
        if args.no_gui:
            raise RuntimeError("No ROI available. Provide --roi or --roi-file for headless.")
        first = picam2.capture_array()  # RGB
        frame_bgr = cv2.cvtColor(first, cv2.COLOR_RGB2BGR)
        print("[INFO] Select ROI with mouse, then press ENTER or SPACE. Press C to cancel.")
        roi = cv2.selectROI("Select ROI", frame_bgr, fromCenter=False, showCrosshair=True)
        cv2.destroyWindow("Select ROI")
        if roi is None or roi == (0, 0, 0, 0):
            raise RuntimeError("No ROI selected.")
        bbox = roi
        if args.save_roi:
            save_roi(args.roi_file, bbox)
            print(f"[INFO] ROI saved to {args.roi_file}: {bbox}")

    # Initialize tracker with the next frame to avoid stale buffer
    frame_rgb = picam2.capture_array()
    frame_bgr = cv2.cvtColor(frame_rgb, cv2.COLOR_RGB2BGR)
    # Legacy trackers return a bool from init(); the non-legacy API returns None,
    # so only an explicit False is treated as a failure.
    ok = tracker.init(frame_bgr, tuple(int(v) for v in bbox))
    if ok is False:
        raise RuntimeError("Tracker failed to initialize")

    # Recorder
    writer = None
    if args.record:
        fourcc = cv2.VideoWriter_fourcc(*"mp4v")
        writer = cv2.VideoWriter(args.record, fourcc, float(args.fps), (w, h))
        if not writer.isOpened():
            raise RuntimeError(f"Failed to open recorder: {args.record}")

    # Main loop
    miss_count = 0
    fps_buf = deque(maxlen=30)
    t0 = time.time()
    deadline = t0 + args.duration if args.duration > 0 else None

    try:
        while True:
            t1 = time.time()
            if deadline and t1 >= deadline:
                print("[INFO] Duration limit reached.")
                break

            frame_rgb = picam2.capture_array()
            frame_bgr = cv2.cvtColor(frame_rgb, cv2.COLOR_RGB2BGR)

            ok, newbox = tracker.update(frame_bgr)
            if ok:
                bbox = newbox
                miss_count = 0
                led.on()
                status = f"{args.tracker.upper()} tracking"
                draw_overlay(frame_bgr, bbox, status=status)
            else:
                miss_count += 1
                led.off()
                status = f"{args.tracker.upper()} lost ({miss_count})"
                cv2.putText(frame_bgr, status, (10, 25),
                            cv2.FONT_HERSHEY_SIMPLEX, 0.7, (0, 0, 255), 2, cv2.LINE_AA)
                # Optionally try to reinitialize if miss_count is large and you have a re-detector

            # FPS accounting
            t2 = time.time()
            inst_fps = 1.0 / max(1e-6, (t2 - t1))
            fps_buf.append(inst_fps)
            avg_fps = sum(fps_buf) / len(fps_buf)
            cv2.putText(frame_bgr, f"FPS: {avg_fps:.1f}", (10, 50),
                        cv2.FONT_HERSHEY_SIMPLEX, 0.7, (255, 255, 0), 2, cv2.LINE_AA)

            if writer:
                writer.write(frame_bgr)

            if not args.no_gui:
                cv2.imshow("Tracking", frame_bgr)
                key = cv2.waitKey(1) & 0xFF
                if key == ord('q'):
                    break
                elif key == ord('r'):
                    # Re-select ROI at runtime
                    roi = cv2.selectROI("Select ROI", frame_bgr, fromCenter=False, showCrosshair=True)
                    cv2.destroyWindow("Select ROI")
                    if roi and roi != (0, 0, 0, 0):
                        bbox = roi
                        tracker = create_tracker(args.tracker)
                        tracker.init(frame_bgr, tuple(int(v) for v in bbox))
                        miss_count = 0
                        print(f"[INFO] New ROI {bbox}; tracker re-initialized")
                        if args.save_roi:
                            save_roi(args.roi_file, bbox)
                            print(f"[INFO] ROI saved to {args.roi_file}")
            else:
                # Headless loop pacing (optional)
                pass

    finally:
        if writer:
            writer.release()
        led.off()
        cv2.destroyAllWindows()
        picam2.stop()


if __name__ == "__main__":
    main()

Key design notes:
– Uses Picamera2 to pull RGB frames directly; no GStreamer dependency for OpenCV VideoCapture.
– Uses opencv‑contrib trackers (CSRT default, KCF optional).
– Supports GUI ROI selection and headless predefined ROI.
– LED on GPIO 18 is on when tracking is locked, off when lost.


Build / Flash / Run Commands

We’ll create a Python 3.11 virtual environment that can also import the apt‑installed Picamera2 package. The trick is to create the venv with --system-site-packages.

1) Install system dependencies (camera stack, GPIO, development essentials, optional GStreamer plugins):

sudo apt update
sudo apt install -y \
  python3.11-venv python3-pip python3-dev \
  python3-picamera2 libcamera-apps \
  python3-gpiozero python3-rpi.gpio \
  libatlas-base-dev libjpeg-dev libgl1 \
  gstreamer1.0-tools gstreamer1.0-libcamera gstreamer1.0-plugins-good gstreamer1.0-plugins-bad

2) Create project and virtual environment:

mkdir -p ~/projects/pi4-hq-object-tracking
cd ~/projects/pi4-hq-object-tracking
python3 -m venv --system-site-packages .venv
source .venv/bin/activate
python -V

Ensure Python 3.11.x is reported.

3) Install Python packages (pin versions for stability with Pi 4 aarch64):

python -m pip install --upgrade pip wheel
python -m pip install numpy==1.26.4 opencv-contrib-python==4.9.0.80 gpiozero==2.0.1 smbus2==0.4.3 spidev==3.6

Verify OpenCV and Picamera2 versions:

python - <<'PY'
import cv2
from picamera2 import Picamera2
print("OpenCV:", cv2.__version__)
print("Has legacy?", hasattr(cv2, "legacy"))
print("Picamera2 ok:", Picamera2 is not None)
PY

Expected output includes OpenCV 4.9.0 and Picamera2 OK.

4) Copy the code file into the project directory (if you haven’t already), then make it executable:

nano camera_tracker.py
chmod +x camera_tracker.py

5) Quick camera check (libcamera):

libcamera-hello -t 2000

6) Run the tracker with GUI ROI selection:

source ~/projects/pi4-hq-object-tracking/.venv/bin/activate  # if not already activated; adjust path if your venv lives elsewhere
cd ~/projects/pi4-hq-object-tracking
python camera_tracker.py --save-roi --record tracked.mp4
  • A window “Select ROI” opens. Draw a box around the object, press ENTER/SPACE to confirm.
  • The main “Tracking” window displays the bounding box and FPS.
  • Press ‘q’ to quit, ‘r’ to reselect ROI at runtime.

7) Headless run (no GUI) using saved ROI:

python camera_tracker.py --no-gui --roi-file roi.json --duration 60

8) Headless run with explicit ROI:

python camera_tracker.py --no-gui --roi "320,180,200,150" --duration 30
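
For reference, the explicit ROI above matches the roi.json format written by save_roi() in camera_tracker.py; you could also pre-generate the file like this:

# Pre-generate roi.json in the format camera_tracker.py expects (same box as the example above).
import json

with open("roi.json", "w") as f:
    json.dump({"x": 320, "y": 180, "w": 200, "h": 150}, f, indent=2)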

9) Use KCF tracker instead of CSRT:

python camera_tracker.py -t kcf --save-roi

Step‑by‑step Validation

1) Optics and focus
– Launch a live preview to adjust focus and exposure:
libcamera-hello -t 0
– Turn the lens focus ring until the scene is sharp. Adjust aperture to balance depth of field and brightness. Press Ctrl+C to quit.

2) Camera enumeration
– Check camera list:
libcamera-hello --list-cameras
– Ensure an IMX477 device is listed. If not, power off and reseat the ribbon cable (contacts toward HDMI).

3) Package validation
– Confirm Python/OpenCV/Picamera2:
source ~/projects/pi4-hq-object-tracking/.venv/bin/activate
python - <<'PY'
import cv2; from picamera2 import Picamera2
print(cv2.__version__)
legacy = getattr(cv2, "legacy", cv2)
print("CSRT available:", hasattr(legacy, "TrackerCSRT_create") or hasattr(cv2, "TrackerCSRT_create"))
print("Picamera2 import OK")
PY

4) Tracker initialization
– Start the Python app with GUI:
python camera_tracker.py --save-roi --record tracked.mp4
– A “Select ROI” dialog appears. Draw a tight box around your object. Confirm with ENTER/SPACE.

5) Real‑time tracking validation
– Move the object slowly; observe that:
– The green rectangle stays aligned with the object.
– The FPS overlay updates (typical 12–25 FPS at 1280×720 with CSRT on Pi 4; KCF is faster).
– The LED on GPIO 18 is ON when tracking is successful, OFF when lost.
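
The FPS figures above depend heavily on resolution and tracker choice. To get a rough feel for the CSRT vs KCF cost on your own board, the following micro‑benchmark on a synthetic 1280×720 frame (no camera needed) is one way to compare; absolute numbers will differ from the live pipeline:

#!/usr/bin/env python3
# Rough CSRT vs KCF speed comparison on a synthetic 1280x720 frame (no camera required).
import time
import numpy as np
import cv2

def make(name):
    legacy = getattr(cv2, "legacy", cv2)
    return getattr(legacy, f"Tracker{name}_create")()

def bench(name, n=50):
    frame = np.random.randint(0, 255, (720, 1280, 3), dtype=np.uint8)
    tracker = make(name)
    tracker.init(frame, (560, 300, 160, 120))
    t0 = time.time()
    for _ in range(n):
        tracker.update(frame)
    print(f"{name}: {n / (time.time() - t0):.1f} updates/s")

bench("CSRT")
bench("KCF")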

6) Stress test
– Introduce partial occlusions or quick motions.
– Verify miss counting in overlay (e.g., “CSRT lost (N)”).
– Ensure LED turns OFF during loss.

7) Recording validation
– Stop the app and check the recorded file:
ls -lh tracked.mp4
– Playback:
vlc tracked.mp4
or:
ffplay -autoexit tracked.mp4

8) Headless operation
– Run:
python camera_tracker.py --no-gui --roi-file roi.json --duration 30
– Observe console logs (tracker status, FPS). LED still reflects lock status.

9) Repeatability
– Power cycle and run with saved ROI again to confirm persistence:
python camera_tracker.py --no-gui --roi-file roi.json


Troubleshooting

  • Camera not detected (libcamera-hello fails or no cameras listed)
  • Power off. Reseat the CSI ribbon cable. Ensure contacts face the HDMI ports at the Pi end.
  • Add the device tree overlay for IMX477 if necessary:
    sudo nano /boot/firmware/config.txt
    Append:
    dtoverlay=imx477
    Save and reboot:
    sudo reboot
  • Check dmesg for hints:
    dmesg | grep -i imx477 -n

  • OpenCV trackers not found

  • If you see errors like AttributeError: module 'cv2.legacy' has no attribute 'TrackerCSRT_create':

    • Ensure contrib build is installed:
      pip show opencv-contrib-python
    • If missing, reinstall:
      python -m pip install --force-reinstall --no-cache-dir opencv-contrib-python==4.9.0.80
  • Picamera2 import error inside venv

  • Confirm venv uses system site packages:
    python -c "import sys; print('site:', sys.path)"
  • Recreate venv with system packages:
    rm -rf ~/projects/pi4-hq-object-tracking/.venv   # or wherever your venv lives
    python3 -m venv --system-site-packages ~/projects/pi4-hq-object-tracking/.venv
    source ~/projects/pi4-hq-object-tracking/.venv/bin/activate

  • OpenCV windows don’t appear (Wayland/GUI issues)

  • Use the --no-gui flag and provide --roi/--roi-file for headless runs.
  • Alternatively switch to X11:
    sudo raspi-config
    Advanced Options → Wayland → Disable (use X11), then reboot.
  • Ensure libGL is installed:
    sudo apt install -y libgl1

  • Performance too low (FPS drops)

  • Reduce resolution:
    python camera_tracker.py --size 960x540
  • Use KCF:
    python camera_tracker.py -t kcf
  • Ensure power and thermals are adequate (heatsink/fan). Check CPU throttling:
    vcgencmd get_throttled

  • LED not lighting

  • Verify GPIO connection and resistor orientation per the table.
  • Check you’re using BCM pin 18 (physical pin 12). You can change it:
    python camera_tracker.py --led-gpio 23

  • MP4 file won’t play

  • The script records with the mp4v fourcc; if playback fails, change the fourcc in camera_tracker.py (e.g., to XVID) and record to an .avi container, or simply play the file with VLC/ffplay:
    python camera_tracker.py --record out.avi

  • “Permission denied” accessing GPIO

  • Ensure you are in the gpio group (typically default on Raspberry Pi OS). Reboot after adding:
    sudo usermod -aG gpio $USER
    sudo reboot

Improvements

  • Multi‑object tracking
  • Use a detector (e.g., a lightweight MobileNet SSD or YOLOv5n) to initialize trackers for multiple objects, refreshing ROIs periodically to correct drift.
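
As a rough sketch of that idea with the contrib legacy MultiTracker (the initial boxes below are placeholders standing in for real detector output):

# Sketch: track several boxes at once using the OpenCV contrib legacy MultiTracker.
import cv2

def start_multitracker(frame_bgr, boxes):
    multi = cv2.legacy.MultiTracker_create()
    for box in boxes:                               # box = (x, y, w, h) from your detector
        multi.add(cv2.legacy.TrackerCSRT_create(), frame_bgr, box)
    return multi

# Per frame:
#   ok, boxes = multi.update(frame_bgr)             # boxes is an array of (x, y, w, h)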

  • Automatic re‑detection

  • When miss_count exceeds a threshold, re‑run a detector on the frame to re‑acquire the target, then reinitialize CSRT.
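
One way to wire this into the main loop; `detector` is any callable you supply that maps a BGR frame to (x, y, w, h) or None (e.g., a colour threshold or a MobileNet‑SSD wrapper) and is not part of this project yet:

# Sketch: re-acquisition helper for the main loop in camera_tracker.py.
from camera_tracker import create_tracker   # reuse the helper from this project

def maybe_reacquire(tracker_name, frame_bgr, miss_count, detector, threshold=15):
    """Return (new_tracker, new_bbox) after `threshold` misses, else None."""
    if miss_count < threshold:
        return None
    box = detector(frame_bgr)                # hypothetical re-detector supplied by you
    if box is None:
        return None
    tracker = create_tracker(tracker_name)   # fresh CSRT/KCF instance
    tracker.init(frame_bgr, tuple(int(v) for v in box))
    return tracker, box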

  • Pan‑tilt servo control

  • Add an I2C PWM driver (PCA9685) to drive servos that physically point the camera to keep the object centered. Enable I2C in raspi-config and install smbus2:
    python -m pip install smbus2
  • Compute the error e = (bbox_center_x - frame_center_x, bbox_center_y - frame_center_y) and feed it into a PID controller for smooth servo motion; a minimal PID sketch follows below.
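
A minimal sketch of that centering loop, PID on the horizontal error only (the vertical axis is analogous; mapping the correction onto PCA9685 pulses depends on the servo driver library you choose):

# Sketch: proportional-integral-derivative control of the pan axis from the bbox error.
class PID:
    def __init__(self, kp=0.02, ki=0.0, kd=0.005):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_error = 0.0

    def step(self, error, dt):
        self.integral += error * dt
        derivative = (error - self.prev_error) / dt if dt > 0 else 0.0
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# Per frame: error_x = bbox_center_x - frame_center_x
# pan_correction = pid_x.step(error_x, dt)   # then translate into a servo angle/pulse width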

  • Hardware‑accelerated encoding

  • For long recordings, consider libcamera-vid for H.264 hardware encoding and integrate timestamps/metadata from the tracker.

  • Robustness to lighting

  • Add adaptive histogram equalization (CLAHE) or color normalization to pre‑process frames before tracking.
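
A small preprocessing helper along those lines, applying CLAHE to the luminance channel of the LAB color space before the frame reaches tracker.update():

# Sketch: even out lighting with CLAHE on the L channel before tracking.
import cv2

_clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))

def normalize_lighting(frame_bgr):
    lab = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2LAB)
    l, a, b = cv2.split(lab)
    lab = cv2.merge([_clahe.apply(l), a, b])
    return cv2.cvtColor(lab, cv2.COLOR_LAB2BGR)

# In the main loop: frame_bgr = normalize_lighting(frame_bgr) before tracker.update(frame_bgr)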

  • Different trackers and parameters

  • Try MOSSE (fast, less accurate), or tune CSRT parameters for speed/accuracy tradeoffs. Evaluate KCF for faster operation.
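
To experiment with MOSSE, a creation helper along these lines could sit next to create_tracker() in camera_tracker.py (with "mosse" added to TRACKER_CHOICES); MOSSE is only exposed through the legacy contrib namespace:

# Sketch: MOSSE tracker creation (legacy contrib namespace only).
import cv2

def create_mosse_tracker():
    if hasattr(cv2, "legacy") and hasattr(cv2.legacy, "TrackerMOSSE_create"):
        return cv2.legacy.TrackerMOSSE_create()
    raise RuntimeError("MOSSE needs opencv-contrib-python (cv2.legacy namespace)")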

  • Telemetry and UI

  • Publish tracker state and bbox via MQTT/WebSocket. Create a simple web dashboard to render overlays on top of MJPEG/HLS streams.
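
A minimal telemetry sketch using paho-mqtt (pip install paho-mqtt; the broker host and topic below are placeholders, and the constructor shown is the 1.x style — paho-mqtt 2.x additionally expects a callback API version argument):

# Sketch: publish tracker state as JSON over MQTT (paho-mqtt assumed; placeholders for broker/topic).
import json
import paho.mqtt.client as mqtt

client = mqtt.Client()                      # 1.x-style constructor
client.connect("localhost", 1883)           # placeholder broker address
client.loop_start()                         # network loop in a background thread

def publish_state(bbox, locked, fps):
    x, y, w, h = [int(v) for v in bbox]
    payload = {"x": x, "y": y, "w": w, "h": h, "locked": bool(locked), "fps": round(fps, 1)}
    client.publish("tracker/state", json.dumps(payload))  # placeholder topic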

Final Checklist

  • Raspberry Pi OS Bookworm 64‑bit and Python 3.11 installed and updated.
  • Raspberry Pi 4 Model B + HQ Camera physically connected with correct ribbon orientation.
  • Camera enumerates:
  • libcamera-hello --list-cameras shows IMX477.
  • libcamera-hello -t 2000 presents preview.
  • Interfaces:
  • I2C enabled (optional, for future expansions).
  • dtoverlay=imx477 set in /boot/firmware/config.txt if auto‑detection failed.
  • Virtual environment:
  • Created with --system-site-packages so picamera2 (apt) is importable.
  • Packages installed: numpy==1.26.4, opencv-contrib-python==4.9.0.80, gpiozero==2.0.1.
  • Project files:
  • camera_tracker.py saved and executable.
  • roi.json saved after first run (if using --save-roi).
  • Commands validated:
  • GUI run with ROI selection: python camera_tracker.py --save-roi --record tracked.mp4
  • Headless run with saved ROI: python camera_tracker.py --no-gui --roi-file roi.json
  • Functional validation:
  • Bounding box follows the object.
  • FPS overlay ~12–25 at 1280×720 CSRT; higher with KCF or lower resolution.
  • LED on GPIO 18 indicates lock; off on loss.
  • Recorded video plays correctly.
  • Troubleshooting path known for camera detection, OpenCV tracker availability, GUI issues, and performance tuning.

You now have an advanced, real‑time object tracking pipeline running on Raspberry Pi 4 Model B + HQ Camera, with both interactive (GUI) and headless operation modes, hardware feedback via GPIO, and a clean path toward pan‑tilt and multi‑object tracking enhancements.


Quick Quiz

Question 1: What is the primary objective of the project described in the article?
Question 2: Which Raspberry Pi model is used in the project?
Question 3: What camera is used in the object tracking system?
Question 4: What operating system is required for this project?
Question 5: Which programming language is used for the project?
Question 6: What type of lens is compatible with the Raspberry Pi HQ Camera?
Question 7: What is the recommended minimum size for the MicroSD card?
Question 8: What command is used to update the Raspberry Pi OS?
Question 9: What optional hardware is mentioned for validation in the project?
Question 10: What kind of object is suggested to track in the project?

Carlos Núñez Zorrilla
Electronics & Computer Engineer

Telecommunications Electronics Engineer and Computer Engineer (official degrees in Spain).
