Practical case: Zone-based people detection using OpenCV


Objective and use case

What you’ll build: A real-time person detection application on Raspberry Pi 4 using HQ Camera and Google Coral USB for zone tracking.

Why it matters / Use cases

  • Enhancing security in public spaces by monitoring entry and exit points in real-time.
  • Automating attendance tracking in events or workplaces by logging person movements across defined zones.
  • Improving customer experience in retail by analyzing foot traffic patterns in different store areas.
  • Facilitating research in smart environments by collecting data on human interactions within designated zones.

Expected outcome

  • Real-time detection of persons with a latency of less than 200ms.
  • Accurate logging of “enter” and “exit” events with at least 95% precision.
  • Zone tracking capability for up to 10 configurable zones simultaneously.
  • Frame processing rate of 15 FPS using the Google Coral USB Accelerator.

Audience: Developers and engineers interested in computer vision; Level: Advanced

Architecture/flow: Raspberry Pi 4 captures frames via HQ Camera, processes detections on Google Coral USB, and uses OpenCV for visualization and logging.

Prerequisites

  • Target device family: Raspberry Pi
  • Exact model for this project: Raspberry Pi 4 Model B + Raspberry Pi HQ Camera (IMX477) + Google Coral USB (Edge TPU)
  • OS and language: Raspberry Pi OS Bookworm 64-bit, Python 3.11
  • Objective: Build a real-time “opencv-coral-person-detection-zones” application that:
    – Uses the Raspberry Pi HQ Camera to capture frames.
    – Runs person detection on the Google Coral USB Accelerator using TensorFlow Lite models optimized for the Edge TPU.
    – Uses OpenCV to draw zone polygons and annotate detections.
    – Logs “enter” and “exit” events when tracked persons move between configurable zones.
  • Minimum hardware power: Official Raspberry Pi USB‑C 5V/3A power supply recommended.

Knowledge expectations (Advanced):
– Comfortable with Linux shell, Python virtual environments, and basic OpenCV.
– Familiarity with libcamera/Picamera2 on Raspberry Pi OS Bookworm.
– Understanding of object detection and simple tracking concepts.

Before starting:
– Ensure a fresh Raspberry Pi OS Bookworm 64-bit installation (2023-10 or newer recommended).
– Connect the Pi to the internet via Ethernet or Wi‑Fi.
– Have a screen/keyboard/mouse or SSH access.

Materials (with exact model)

Item | Exact Model / Part | Qty | Notes
Raspberry Pi board | Raspberry Pi 4 Model B (2 GB+ RAM recommended) | 1 | Use the blue USB 3.0 ports for the Coral.
Camera | Raspberry Pi HQ Camera (IMX477) | 1 | Requires a C/CS-mount lens.
Lens | 6 mm / 8 mm / 12 mm C/CS-mount lens (choose field of view) | 1 | Any supported lens for the HQ Camera; match your scene width.
Accelerator | Google Coral USB Accelerator (Edge TPU) | 1 | USB 3.0 preferred.
Storage | microSD card (32 GB) | 1 | Raspberry Pi OS Bookworm 64-bit.
Power supply | Official Raspberry Pi USB-C 5V/3A | 1 | Stable power is critical for Coral reliability.
Cables | Camera ribbon cable (HQ Camera), USB-A cable for the Coral | 1 each | Use the included ribbon; connect the Coral to a blue USB 3.0 port.

Setup/Connection

1) Physical connections

  • Power off the Raspberry Pi.
  • Attach the Raspberry Pi HQ Camera:
    – Lift the black latch on the CSI camera connector labeled “CAMERA”.
    – Insert the ribbon cable with the metal contacts facing the HDMI ports.
    – Push the latch down to lock.
  • Screw the lens onto the HQ Camera; set focus to mid-range.
  • Connect the Coral USB Accelerator to one of the blue USB 3.0 ports on the Pi.
  • Insert the microSD card flashed with Raspberry Pi OS Bookworm 64‑bit.
  • Power on the Raspberry Pi.

2) Enable camera interface (Bookworm/libcamera)

On Raspberry Pi OS Bookworm, cameras use the libcamera stack (no legacy “raspistill”). Enable the interface:

Option A: raspi-config
– Run:
sudo raspi-config
– Interface Options -> Camera -> Enable
– Finish and reboot if prompted.

Option B: Edit /boot/firmware/config.txt
– Open:
sudo nano /boot/firmware/config.txt
– Ensure camera auto-detection is enabled (the Bookworm default):
camera_auto_detect=1
– Alternatively, disable auto-detection and load the HQ Camera overlay explicitly:
camera_auto_detect=0
dtoverlay=imx477

– Save and reboot:
sudo reboot

After reboot, test the camera:

libcamera-hello -t 5000

You should see a 5-second preview (on newer Raspberry Pi OS releases the tool is named rpicam-hello). If not, see Troubleshooting.
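
You can also capture a still to check lens focus and exposure. This uses libcamera-jpeg from the same libcamera apps (named rpicam-jpeg on newer releases); the output path is just an example:

libcamera-jpeg -o ~/focus_test.jpg -t 2000 --width 1280 --height 720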

3) Enable USB and check Coral enumeration

  • Check that the Coral is detected:
    lsusb | grep -i google
    Expected output contains something like “Google Inc.” and “Accelerator”. If missing, try a different USB 3.0 port (blue) and ensure the power supply is adequate.

4) System updates and developer tools

sudo apt update && sudo apt full-upgrade -y
sudo apt install -y git python3-venv python3-pip python3-libcamera python3-picamera2 \
  libgl1 libglib2.0-0 libgtk-3-0 pkg-config cmake curl wget

5) Install Coral Edge TPU runtime (APT)

Add Google Coral APT repo and install the standard runtime:

curl https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo gpg --dearmor -o /usr/share/keyrings/coral-edgetpu.gpg
echo "deb [signed-by=/usr/share/keyrings/coral-edgetpu.gpg] https://packages.cloud.google.com/apt coral-edgetpu-stable main" | sudo tee /etc/apt/sources.list.d/coral-edgetpu.list
sudo apt update
sudo apt install -y libedgetpu1-std

Note: Use libedgetpu1-max only if you have reliable power and adequate cooling. For this tutorial we use libedgetpu1-std.

6) Create Python 3.11 virtual environment

We will include system packages so we can import Picamera2 from apt inside the venv.

python3 --version
mkdir -p ~/opencv-coral-zones
cd ~/opencv-coral-zones
python3 -m venv --system-site-packages .venv
source .venv/bin/activate
python -m pip install --upgrade pip wheel
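
Optionally, confirm that the apt-installed Picamera2 is importable inside the venv and can deliver a frame. A minimal sketch (it assumes the camera was enabled in step 2; the file name picam_check.py is just an example):

# picam_check.py -- quick check: grab one frame via Picamera2 and print its shape
from picamera2 import Picamera2

picam2 = Picamera2()
config = picam2.create_video_configuration(main={"size": (1280, 720), "format": "RGB888"})
picam2.configure(config)
picam2.start()
frame = picam2.capture_array()  # numpy array, expected shape (720, 1280, 3)
print("Captured frame:", frame.shape, frame.dtype)
picam2.stop()

Run it with python picam_check.py inside the activated venv.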

7) Install Python packages (pip)

Install OpenCV (with GUI support), the Coral Python APIs, and utilities. We also install gpiozero, smbus2, and spidev to align with family defaults, though they are not used in this project. Note: official pycoral/tflite-runtime wheels may not be published for every Python version; if the versions below fail to install on Python 3.11, try Google's extra index (pip install --extra-index-url https://google-coral.github.io/py-repo/ pycoral~=2.0) or the APT package python3-pycoral from the Coral repository added in step 5.

pip install opencv-python==4.8.1.78 numpy==1.26.4
pip install tflite-runtime==2.12.0
pip install pycoral==2.0.0
pip install gpiozero==1.6.2 smbus2==0.5.1 spidev==3.6

If OpenCV GUI windows fail in your environment, you can alternatively install headless:

pip uninstall -y opencv-python
pip install opencv-python-headless==4.8.1.78

8) Download detection model and labels

We use the Edge TPU-compiled SSD MobileNet v2 COCO model and labels. Store under a models directory.

mkdir -p ~/opencv-coral-zones/models
cd ~/opencv-coral-zones/models
wget https://github.com/google-coral/edgetpu/raw/master/test_data/ssd_mobilenet_v2_coco_quant_postprocess_edgetpu.tflite
wget https://github.com/google-coral/edgetpu/raw/master/test_data/coco_labels.txt
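
Optional: before writing the full app, confirm that the model loads on the Edge TPU and that the labels include “person”. A minimal sketch (run from ~/opencv-coral-zones with the venv from step 6 active and pycoral from step 7 installed; model_check.py is just an example name):

# model_check.py -- sanity check: load the Edge TPU model and the COCO labels
from pycoral.adapters import common
from pycoral.utils.edgetpu import make_interpreter

MODEL = "models/ssd_mobilenet_v2_coco_quant_postprocess_edgetpu.tflite"
LABELS = "models/coco_labels.txt"

interpreter = make_interpreter(MODEL, device="usb")  # raises if no Edge TPU is found
interpreter.allocate_tensors()
print("Model input size:", common.input_size(interpreter))

labels = {}
with open(LABELS) as f:
    for line in f:
        parts = line.strip().split(maxsplit=1)
        if len(parts) == 2:
            labels[int(parts[0])] = parts[1]
print("'person' in labels:", "person" in labels.values())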

9) Create a zones configuration file

We define zones in pixel coordinates relative to your chosen preview resolution (e.g., 1280×720). You can adjust later.

cd ~/opencv-coral-zones
cat > zones.json << 'EOF'
{
  "frame_size": [1280, 720],
  "zones": [
    {
      "id": "A",
      "name": "Entrance",
      "polygon": [[60, 680], [500, 680], [500, 360], [60, 360]],
      "color": [0, 255, 0]
    },
    {
      "id": "B",
      "name": "Counter",
      "polygon": [[800, 700], [1260, 700], [1260, 380], [800, 380]],
      "color": [255, 0, 0]
    }
  ]
}
EOF
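
Optionally, verify the polygons before wiring them into the app. A minimal sketch (zones_check.py is just an example name; the sample points are arbitrary 1280×720 coordinates) using the same ray-casting test later used in app.py:

# zones_check.py -- load zones.json and test whether sample points fall inside each polygon
import json

def point_in_polygon(point, polygon):
    # Ray casting: count edge crossings of a horizontal ray from the point
    x, y = point
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        if ((y1 > y) != (y2 > y)) and (x < (x2 - x1) * (y - y1) / (y2 - y1 + 1e-12) + x1):
            inside = not inside
    return inside

with open("zones.json") as f:
    cfg = json.load(f)

samples = [(280, 520), (1030, 540), (640, 100)]  # arbitrary test points
for z in cfg["zones"]:
    hits = [p for p in samples if point_in_polygon(p, [tuple(v) for v in z["polygon"]])]
    print(f"Zone {z['id']} ({z['name']}): contains {hits}")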

Full Code

Create the main application at ~/opencv-coral-zones/app.py

#!/usr/bin/env python3
import argparse
import json
import os
import sys
import time
from collections import deque

import cv2
import numpy as np

from pycoral.adapters import common, detect
from pycoral.utils.edgetpu import make_interpreter

# Picamera2 is installed via apt and visible due to --system-site-packages
from picamera2 import Picamera2, Preview


def load_labels(path):
    labels = {}
    with open(path, 'r') as f:
        for line in f:
            pair = line.strip().split(maxsplit=1)
            if len(pair) == 2:
                labels[int(pair[0])] = pair[1].strip()
    return labels


def point_in_polygon(point, polygon):
    # Ray casting algorithm for point-in-polygon
    x, y = point
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        cond = ((y1 > y) != (y2 > y)) and \
               (x < (x2 - x1) * (y - y1) / (y2 - y1 + 1e-12) + x1)
        if cond:
            inside = not inside
    return inside


def scale_polygon(poly, src_size, dst_size):
    sx = dst_size[0] / src_size[0]
    sy = dst_size[1] / src_size[1]
    return [(int(x * sx), int(y * sy)) for (x, y) in poly]


class CentroidTracker:
    def __init__(self, max_distance=80, max_missed=12):
        self.next_id = 1
        self.tracks = {}  # id -> dict: centroid, zone_id, missed, trace
        self.max_distance = max_distance
        self.max_missed = max_missed

    def _euclidean(self, a, b):
        return np.linalg.norm(np.array(a, dtype=float) - np.array(b, dtype=float))

    def update(self, detections, zones, frame_index, on_event):
        """
        detections: list of (cx, cy, bbox) for persons
        zones: list of dict with keys {id, polygon}
        on_event: callback(event_type:str, track_id:int, from_zone:str|None, to_zone:str|None, timestamp:float)
        """
        # Build arrays
        det_centroids = [(d[0], d[1]) for d in detections]
        det_assigned = [False] * len(detections)

        # Step 1: Match existing tracks to detections by nearest centroid
        for tid, t in list(self.tracks.items()):
            # Find nearest detection
            min_d = 1e9
            min_j = -1
            for j, c in enumerate(det_centroids):
                if det_assigned[j]:
                    continue
                d = self._euclidean(t['centroid'], c)
                if d < min_d:
                    min_d = d
                    min_j = j
            if min_j >= 0 and min_d <= self.max_distance:
                # Update track with detection
                t['centroid'] = det_centroids[min_j]
                t['missed'] = 0
                det_assigned[min_j] = True
                # Check zone transition
                new_zone = None
                for z in zones:
                    if point_in_polygon(t['centroid'], z['polygon']):
                        new_zone = z['id']
                        break
                if new_zone != t['zone_id']:
                    if t['zone_id']:
                        on_event('exit', tid, t['zone_id'], None, time.time())
                    if new_zone:
                        on_event('enter', tid, None, new_zone, time.time())
                    t['zone_id'] = new_zone
                # Trace for drawing (the deque's maxlen=15 trims old points automatically)
                t['trace'].append(t['centroid'])
            else:
                # No match, increment missed
                t['missed'] += 1
                if t['missed'] > self.max_missed:
                    # If leaving with zone, signal exit
                    if t['zone_id']:
                        on_event('exit', tid, t['zone_id'], None, time.time())
                    del self.tracks[tid]

        # Step 2: Create new tracks for unmatched detections
        for j, assigned in enumerate(det_assigned):
            if not assigned:
                cx, cy = det_centroids[j]
                new_zone = None
                for z in zones:
                    if point_in_polygon((cx, cy), z['polygon']):
                        new_zone = z['id']
                        break
                tid = self.next_id
                self.next_id += 1
                self.tracks[tid] = {
                    'centroid': (cx, cy),
                    'zone_id': None,  # set via event
                    'missed': 0,
                    'trace': deque([], maxlen=15)
                }
                # Immediately fire enter event if inside a zone
                if new_zone:
                    on_event('enter', tid, None, new_zone, time.time())
                    self.tracks[tid]['zone_id'] = new_zone
                self.tracks[tid]['trace'].append((cx, cy))

        return self.tracks


def draw_overlay(frame, zones, tracks, detections, labels, fps, show_ids=True):
    # Draw zones
    for z in zones:
        color = z.get('color', (0, 255, 255))
        cv2.polylines(frame, [np.array(z['polygon'], dtype=np.int32)], True, color, 2)
        # Put label at first vertex
        x, y = z['polygon'][0]
        cv2.putText(frame, f"Zone {z['id']} - {z.get('name', '')}", (x, y - 8),
                    cv2.FONT_HERSHEY_SIMPLEX, 0.6, color, 2, cv2.LINE_AA)

    # Draw detections
    for det in detections:
        cx, cy, (x1, y1, x2, y2), score, cls = det
        color = (0, 255, 0) if cls == 'person' else (255, 255, 0)
        cv2.rectangle(frame, (x1, y1), (x2, y2), color, 2)
        cv2.circle(frame, (int(cx), int(cy)), 4, color, -1)
        label = f"{cls}:{score:.2f}"
        cv2.putText(frame, label, (x1, y1 - 6), cv2.FONT_HERSHEY_SIMPLEX, 0.6, color, 2, cv2.LINE_AA)

    # Draw tracks with IDs and trails
    for tid, t in tracks.items():
        c = (int(t['centroid'][0]), int(t['centroid'][1]))
        col = (255, 0, 255)
        cv2.circle(frame, c, 5, col, -1)
        if show_ids:
            ztxt = t['zone_id'] if t['zone_id'] else "-"
            cv2.putText(frame, f"ID {tid} Z:{ztxt}", (c[0] + 6, c[1] - 6),
                        cv2.FONT_HERSHEY_SIMPLEX, 0.5, col, 2, cv2.LINE_AA)
        # Trails
        pts = list(t['trace'])
        for i in range(1, len(pts)):
            cv2.line(frame, (int(pts[i - 1][0]), int(pts[i - 1][1])),
                     (int(pts[i][0]), int(pts[i][1])), (200, 0, 200), 2)

    # FPS indicator
    cv2.putText(frame, f"FPS: {fps:.1f}", (10, 30), cv2.FONT_HERSHEY_SIMPLEX, 0.8,
                (0, 255, 255), 2, cv2.LINE_AA)


def main():
    ap = argparse.ArgumentParser(description="OpenCV + Coral person detection with configurable zones")
    ap.add_argument("--model", default="models/ssd_mobilenet_v2_coco_quant_postprocess_edgetpu.tflite")
    ap.add_argument("--labels", default="models/coco_labels.txt")
    ap.add_argument("--zones", default="zones.json", help="JSON with frame_size and zones array")
    ap.add_argument("--res", default="1280x720", help="Camera resolution WxH, e.g., 1280x720")
    ap.add_argument("--th", type=float, default=0.5, help="Detection score threshold")
    ap.add_argument("--show", action="store_true", help="Display OpenCV window")
    ap.add_argument("--maxdist", type=int, default=80, help="Max pixel distance for centroid tracker")
    ap.add_argument("--maxmiss", type=int, default=12, help="Frames before track removal")
    ap.add_argument("--edgetpu", default="usb", choices=["usb"], help="Edge TPU device spec")
    args = ap.parse_args()

    # Load labels
    labels_map = load_labels(args.labels)

    # Build interpreter on the Edge TPU (device spec is passed via the `device` argument)
    interpreter = make_interpreter(args.model, device=args.edgetpu)
    interpreter.allocate_tensors()
    in_w, in_h = common.input_size(interpreter)

    # Camera init
    frame_w, frame_h = [int(x) for x in args.res.lower().split("x")]
    picam2 = Picamera2()
    config = picam2.create_video_configuration(
        main={"size": (frame_w, frame_h), "format": "RGB888"},
        controls={"FrameRate": 30}
    )
    picam2.configure(config)
    picam2.start()

    # Zones
    with open(args.zones, "r") as f:
        zcfg = json.load(f)
    base_w, base_h = zcfg.get("frame_size", [frame_w, frame_h])
    zones = []
    for z in zcfg["zones"]:
        zones.append({
            "id": z["id"],
            "name": z.get("name", ""),
            "polygon": scale_polygon(z["polygon"], (base_w, base_h), (frame_w, frame_h)),
            "color": tuple(z.get("color", [0, 255, 255]))
        })

    # Tracker
    tracker = CentroidTracker(max_distance=args.maxdist, max_missed=args.maxmiss)

    # Event callback
    def on_event(ev_type, track_id, from_zone, to_zone, ts):
        tstr = time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime(ts))
        if ev_type == 'enter':
            print(f"{tstr} ENTER track={track_id} zone={to_zone}")
        elif ev_type == 'exit':
            print(f"{tstr} EXIT  track={track_id} zone={from_zone}")
        sys.stdout.flush()

    # FPS measurement
    t_prev = time.time()
    fps = 0.0
    fps_frames = 0
    frame_index = 0

    window_name = "opencv-coral-person-detection-zones"
    if args.show:
        cv2.namedWindow(window_name, cv2.WINDOW_NORMAL)
        cv2.resizeWindow(window_name, frame_w, frame_h)

    try:
        while True:
            frame = picam2.capture_array()  # RGB888
            frame_index += 1

            # Prepare a display copy and the model input (resized to the model size, e.g., 300x300)
            display_frame = cv2.cvtColor(frame, cv2.COLOR_RGB2BGR)  # channel swap for OpenCV display; drop if colors look inverted
            img_for_model = cv2.resize(frame, (in_w, in_h))
            common.set_input(interpreter, img_for_model)

            # Inference
            interpreter.invoke()

            # Parse detections
            objs = detect.get_objects(interpreter, score_threshold=args.th)
            detections = []
            scale_x = frame_w / float(in_w)
            scale_y = frame_h / float(in_h)
            for obj in objs:
                cls_name = labels_map.get(obj.id, str(obj.id))
                if cls_name != "person":
                    continue
                bbox = obj.bbox  # BBox with xmin, ymin, xmax, ymax in model-input coordinates
                x1 = int(bbox.xmin * scale_x)
                y1 = int(bbox.ymin * scale_y)
                x2 = int(bbox.xmax * scale_x)
                y2 = int(bbox.ymax * scale_y)
                cx = (x1 + x2) / 2.0
                cy = (y1 + y2) / 2.0
                detections.append((cx, cy, (x1, y1, x2, y2), obj.score, cls_name))

            # Update tracker and zones
            tracks = tracker.update(detections, zones, frame_index, on_event)

            # Draw overlay
            draw_overlay(display_frame, zones, tracks, detections, labels_map, fps)

            # Show or write
            if args.show:
                cv2.imshow(window_name, display_frame)
                key = cv2.waitKey(1) & 0xFF
                if key == ord('q'):
                    break

            # FPS (averaged over ~0.5 s windows)
            fps_frames += 1
            now = time.time()
            dt = now - t_prev
            if dt >= 0.5:
                fps = fps_frames / dt
                fps_frames = 0
                t_prev = now

    except KeyboardInterrupt:
        pass
    finally:
        picam2.stop()
        if args.show:
            cv2.destroyAllWindows()


if __name__ == "__main__":
    main()

Make it executable:

chmod +x ~/opencv-coral-zones/app.py

Build/Flash/Run commands

No flashing required (this is not a microcontroller). Build steps here mean environment and assets preparation.

1) Verify OS and architecture

cat /etc/os-release | grep PRETTY_NAME
uname -m
# Expect: PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
# Expect: aarch64

2) Validate camera

libcamera-hello -t 5000

3) Validate Coral

lsusb | grep -i google
python -c "from pycoral.utils.edgetpu import list_edge_tpus; print(list_edge_tpus())"
# Expect a list with one USB Edge TPU device

4) Activate environment and run

cd ~/opencv-coral-zones
source .venv/bin/activate
python app.py --res 1280x720 --th 0.5 --show

5) Headless run (no GUI window)

python app.py --res 1280x720 --th 0.5

6) Optional: run at boot as a systemd service (headless)
– Create the service file (adjust the paths and User= if your username is not pi):

sudo tee /etc/systemd/system/coral-zones.service > /dev/null << 'EOF'
[Unit]
Description=OpenCV Coral Person Detection Zones
After=network-online.target

[Service]
ExecStart=/home/pi/opencv-coral-zones/.venv/bin/python /home/pi/opencv-coral-zones/app.py --res 1280x720 --th 0.5
User=pi
WorkingDirectory=/home/pi/opencv-coral-zones
Restart=on-failure
Environment=PYTHONUNBUFFERED=1

[Install]
WantedBy=multi-user.target
EOF
  • Enable and start:
sudo systemctl daemon-reload
sudo systemctl enable coral-zones.service
sudo systemctl start coral-zones.service
journalctl -u coral-zones.service -f

Step-by-step Validation

1) Camera stack sanity check
– Command:
libcamera-hello -t 3000
– Pass criteria: A preview window opens for 3 seconds without errors. If it fails, go to Troubleshooting.

2) Coral runtime sanity check
– Commands:
lsusb | grep -i google
dpkg -l | grep libedgetpu1
python -c "from pycoral.utils.edgetpu import list_edge_tpus; print(list_edge_tpus())"

– Pass criteria: The USB device is listed and the Python statement prints at least one device.

3) Model and labels are accessible
– Commands:
test -f ~/opencv-coral-zones/models/ssd_mobilenet_v2_coco_quant_postprocess_edgetpu.tflite && echo "Model OK"
test -f ~/opencv-coral-zones/models/coco_labels.txt && echo "Labels OK"

4) Virtual environment imports
– Commands:
cd ~/opencv-coral-zones
source .venv/bin/activate
python -c "import cv2, numpy, pycoral; import picamera2; print('Imports OK')"

– Pass criteria: No ImportError exceptions.

5) Test app in GUI mode
– Command:
python app.py --res 1280x720 --th 0.5 --show
– Expected behavior:
– A window opens with the camera feed.
– Two polygons labeled Zone A (Entrance) and Zone B (Counter) are drawn.
– When a person appears, green bounding boxes and centroids are shown.
– Terminal logs events like:
2025-11-03T14:12:20Z ENTER track=3 zone=A
2025-11-03T14:12:22Z EXIT track=3 zone=A
2025-11-03T14:12:24Z ENTER track=3 zone=B

– Press q to quit.

6) Validate zone scaling
– If your preview resolution is not 1280×720, edit the "frame_size" field in zones.json to the base resolution used to draw the polygons (default 1280×720).
– The app scales the polygons to your --res at runtime. Move within each physical zone to confirm that events fire in the console.
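
For example, a minimal sketch (reusing the same scale_polygon logic as app.py; the 960×540 target and the scale_check.py name are just examples) that prints how the polygons map to another --res value:

# scale_check.py -- preview how zones.json polygons map to a different capture resolution
import json

def scale_polygon(poly, src_size, dst_size):
    sx = dst_size[0] / src_size[0]
    sy = dst_size[1] / src_size[1]
    return [(int(x * sx), int(y * sy)) for (x, y) in poly]

with open("zones.json") as f:
    cfg = json.load(f)

base = tuple(cfg["frame_size"])   # e.g., (1280, 720)
target = (960, 540)               # the resolution you would pass via --res
for z in cfg["zones"]:
    print(z["id"], scale_polygon([tuple(v) for v in z["polygon"]], base, target))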

7) Validate headless logging
– Command:
python app.py --res 1280x720 --th 0.5
– Expected behavior: No window opens; terminal prints ENTER/EXIT events when a person moves between zones.

8) Quick accuracy check
– Stand in the scene and move deliberately across the zone boundaries (edge to center). Detections should be stable and consistent. Adjust lens focus and exposure if detections are missed:
– Focus: adjust lens ring.
– Scene light: ensure adequate illumination.
– Threshold: try --th 0.4 for more sensitivity.

Troubleshooting

  • Camera not detected:
  • Check ribbon cable orientation at the CAMERA connector.
  • Ensure /boot/firmware/config.txt contains:
    camera_auto_detect=1
    dtoverlay=imx477
  • Reboot and run:
    dmesg | grep -i imx477
    libcamera-hello -t 3000
  • Black or laggy preview:
  • Reduce resolution: use --res 1280x720 or 960x540.
  • Ensure GPU memory is adequate (Bookworm typically handles this automatically).
  • Coral not found / slow:
  • Use a blue USB 3.0 port.
  • Confirm:
    lsusb | grep -i google
    python -c "from pycoral.utils.edgetpu import list_edge_tpus; print(list_edge_tpus())"
  • If intermittent, power may be insufficient; use the official 5V/3A PSU and avoid bus-powered hubs.
  • pycoral or tflite-runtime import errors:
  • Re-activate venv:
    source ~/opencv-coral-zones/.venv/bin/activate
  • Reinstall:
    pip install --force-reinstall tflite-runtime==2.12.0 pycoral==2.0.0
  • OpenCV GUI window doesn’t appear:
  • If running over SSH, ensure X11 forwarding is enabled (or use local desktop).
  • Use headless mode:
    python app.py --res 1280x720 --th 0.5
  • Alternatively install headless OpenCV:
    pip uninstall -y opencv-python
    pip install opencv-python-headless==4.8.1.78
  • Incorrect labels (no “person” class):
  • Ensure you downloaded coco_labels.txt corresponding to COCO. The person class should be “person”.
  • Verify filtering in app.py keeps only detections where cls_name == "person".
  • Spurious zone enter/exit flicker:
  • Increase association tolerance:
    --maxdist 100 --maxmiss 18
  • Smooth detections by raising threshold:
    --th 0.6
  • Slightly enlarge zones to avoid border jitter. (A minimal debounce sketch follows this Troubleshooting list.)
  • Performance tuning:
  • Lower camera resolution (--res 960x540).
  • Prefer libedgetpu1-max (with caution):
    sudo apt install libedgetpu1-max
    Then rerun. Watch thermals and power.
  • Picamera2 import fails in venv:
  • Ensure the venv was created with --system-site-packages.
  • If not, recreate:
    cd ~/opencv-coral-zones
    deactivate 2>/dev/null || true
    rm -rf .venv
    python3 -m venv --system-site-packages .venv
    source .venv/bin/activate
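
Regarding the zone flicker item above: if tuning thresholds is not enough, one option is to debounce zone transitions so a track must sit in the new zone for a few consecutive frames before events fire. A minimal sketch of that idea (a hypothetical helper, not part of app.py as written; it would replace the direct zone comparison inside CentroidTracker.update):

# Hypothetical debounce helper: commit a zone change only after it has been
# observed for `min_frames` consecutive updates of the same track.
class ZoneDebouncer:
    def __init__(self, min_frames=3):
        self.min_frames = min_frames
        self.pending = {}  # track_id -> (candidate_zone, consecutive_count)

    def stable_zone(self, track_id, current_zone, observed_zone):
        """Return the zone to commit (current_zone while a change is still unconfirmed)."""
        if observed_zone == current_zone:
            self.pending.pop(track_id, None)
            return current_zone
        cand, count = self.pending.get(track_id, (observed_zone, 0))
        if cand != observed_zone:
            cand, count = observed_zone, 0
        count += 1
        if count >= self.min_frames:
            self.pending.pop(track_id, None)
            return observed_zone  # confirmed; the caller fires the exit/enter events
        self.pending[track_id] = (cand, count)
        return current_zone

Inside CentroidTracker.update you would compute new_zone as before, then call stable = debouncer.stable_zone(tid, t['zone_id'], new_zone) and fire events only when stable differs from t['zone_id'].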

Improvements

  • Multi-threaded pipeline:
  • Run capture, inference, and drawing in separate threads/queues for higher FPS and smoother UI.
  • Persistent logging and analytics:
  • Write ENTER/EXIT events to a SQLite database or CSV with timestamps and zone IDs (a minimal CSV logging sketch follows this list).
  • Aggregate per-zone dwell time and counts.
  • Calibratable zones:
  • Add an editor mode to click points and save zones.json interactively.
  • Multiple Coral devices:
  • Scale to multiple Edge TPUs; shard frames or process higher FPS.
  • Dedicated person-only model:
  • Use a person-only model compiled for Edge TPU (e.g., MobileNet-SSD persons) to reduce false positives and improve speed.
  • Hardware sync and triggers:
  • Use GPIO outputs (gpiozero) to trigger lights or relays when a zone is occupied.
  • Stream output:
  • Publish annotated frames via RTSP or MJPEG for remote monitoring.
  • Thermal stability:
  • Add heatsinks and a fan for the Pi 4 and the Coral for long-duration deployments.
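
For the persistent logging improvement above, a minimal CSV sketch (a hypothetical EventLogger class and events.csv path; you would call it from the on_event callback in app.py):

# Hypothetical CSV event logger for ENTER/EXIT events (an alternative to plain prints).
import csv
import time


class EventLogger:
    def __init__(self, path="events.csv"):
        self.path = path
        # Write the header once if the file is new or empty
        try:
            with open(path) as f:
                needs_header = f.read(1) == ""
        except FileNotFoundError:
            needs_header = True
        if needs_header:
            with open(path, "a", newline="") as f:
                csv.writer(f).writerow(["timestamp_utc", "event", "track_id", "zone"])

    def log(self, ev_type, track_id, from_zone, to_zone, ts):
        zone = to_zone if ev_type == "enter" else from_zone
        row = [time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime(ts)), ev_type, track_id, zone]
        with open(self.path, "a", newline="") as f:
            csv.writer(f).writerow(row)

Usage sketch: create logger = EventLogger() near the top of main() and add logger.log(ev_type, track_id, from_zone, to_zone, ts) inside on_event().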

Final Checklist

  • Raspberry Pi 4 Model B powered with official 5V/3A supply.
  • Raspberry Pi HQ Camera (IMX477) connected to the CAMERA CSI port; lens focused on scene.
  • Google Coral USB Accelerator connected to a blue USB 3.0 port.
  • Raspberry Pi OS Bookworm 64‑bit installed and updated.
  • Camera interface enabled (raspi-config or /boot/firmware/config.txt with dtoverlay=imx477).
  • Coral runtime installed:
  • libedgetpu1-std from coral-edgetpu-stable APT repo.
  • Project directory prepared:
  • ~/opencv-coral-zones/.venv virtual environment created with --system-site-packages.
  • pip packages installed: opencv-python (or headless), numpy, tflite-runtime==2.12.0, pycoral==2.0.0.
  • models directory with ssd_mobilenet_v2_coco_quant_postprocess_edgetpu.tflite and coco_labels.txt.
  • zones.json created with correct frame_size and polygons.
  • app.py executable and tested.
  • Validation complete:
  • libcamera-hello preview works.
  • Edge TPU detected via list_edge_tpus().
  • Application runs, shows/draws zones, detects “person”, and logs ENTER/EXIT events.
  • Optional:
  • systemd service configured for headless autostart.
  • Tuning parameters (threshold, maxdist, maxmiss) adjusted for your scene.

With these steps, you have a complete, reproducible Advanced-level “opencv-coral-person-detection-zones” solution on the exact device model: Raspberry Pi 4 Model B + Raspberry Pi HQ Camera (IMX477) + Google Coral USB (Edge TPU).


Quick Quiz

Question 1: What is the target device family for the project?
Question 2: Which model of Raspberry Pi is used in this project?
Question 3: What camera model is specified for the project?
Question 4: Which language is used for the application development?
Question 5: What is the minimum recommended power supply for the Raspberry Pi?
Question 6: What is the main objective of the application?
Question 7: Which library is used for drawing zone polygons and annotating detections?
Question 8: What type of lens is required for the Raspberry Pi HQ Camera?
Question 9: Which software should you be familiar with before starting the project?
Question 10: What is the purpose of the Google Coral USB Accelerator in this project?

Carlos Núñez Zorrilla
Electronics & Computer Engineer

Telecommunications Electronics Engineer and Computer Engineer (official degrees in Spain).
