Parking Finder
The Parking Finder system is a smart parking solution designed to help drivers efficiently locate available parking spaces at busy locations, such as a university campus.
@brief Smart parking solution designed to help drivers efficiently locate available parking spaces at busy locations, such as a university campus. More...
Go to the source code of this file.
Classes | |
| class | find_distance.ROIEditor |
| Interactive polygon editor for defining parking spot ROIs at runtime. More... | |
| class | find_distance.CalibWindow |
| Live calibration window with OpenCV trackbars for tuning CALIB_PARAMS. More... | |
Functions | |
| find_distance.set_jetson_clocks (bool enable) | |
| Lock Jetson hardware clocks at maximum frequency for consistent inference latency. | |
| find_distance.parse_args () | |
| Parse command-line arguments. | |
| np.ndarray | find_distance.shm_ndarray (SharedMemory shm, tuple shape) |
| Create a NumPy array view backed by a SharedMemory block. | |
| find_distance.get_detections () | |
| Flask REST endpoint — returns current parking spot occupancy states. | |
| find_distance.check_parking_spots (str cam_name, list disp_boxes) | |
| Determine occupancy state for each parking spot ROI on a given camera. | |
| find_distance.annotate_frame (np.ndarray frame, list disp_boxes, list scores, list names, str cam_name) | |
| Draw YOLO detection boxes and confidence labels directly onto a frame (in-place). | |
| np.ndarray | find_distance.draw_spot_overlays (np.ndarray frame, str cam_name) |
| Blend semi-transparent ROI polygon overlays onto a camera frame. | |
| list[dict] | find_distance.generate_rois (str cam_id, tuple vanishing_point, int near_y, int far_y, int left_x, int right_x, int n_spots=5, int n_rows=1, float fisheye_distortion=0.0, float perspective_strength=1.0, str name=None, list spot_boundaries=None, **kwargs) |
| Procedurally generate perspective-correct parking spot ROI polygons. | |
| list[dict] | find_distance.auto_caliberate_rois (str cam_name, np.ndarray frame, dict current_params, list detections=None) |
| Automatically recalibrate ROI polygons for a camera using edge detection. | |
| find_distance.rebuild_spot_masks (str cam_name, list[dict] new_rois, bool reset_states=False) | |
| Rebuild the in-memory ROI and mask tables for a camera after calibration. | |
| find_distance.capture_worker (int cam_id, int src, str raw_shm_name, raw_lock, raw_frame_id, frame_ready_event, stop_event) | |
| Camera capture worker — runs as a separate Process per camera. | |
| find_distance.inference_loop (bool need_annotations, frame_ready_event, stop_event) | |
| YOLO inference thread — batches frames from all cameras and runs detection. | |
| find_distance.display_loop (stop_event) | |
| Display loop — renders the live camera feed with overlays in a GUI window. | |
| list[float] | find_distance.blend_boundaries (list[float] detected, list[float] manual, float manual_weight=0.15) |
| find_distance.record_loop (stop_event, use_anns=False) | |
| Recording loop — writes camera frames to timestamped MP4 files. | |
| find_distance.auto_calibrate_loop (stop_event) | |
| Background auto-calibration thread — periodically re-runs ROI calibration. | |
| list[float] | find_distance.merge_narrow_spots (list[float] boundaries, float min_width=80.0) |
| Remove spot boundaries that would produce slots narrower than min_width pixels. | |
| tuple[list[float], int] | find_distance.detect_spot_boundaries (np.ndarray frame, int near_y, int far_y, int left_x, int right_x, int n_spots, list detections=None, str cam_name=None) |
| Detect parking spot divider X positions from lane markings in a camera frame. | |
| find_distance._try_from_cars (detections, near_y, left_x, right_x, n_spots, fallback) | |
| Fallback boundary estimator that infers spot dividers from car center positions. | |
| find_distance.shutdown () | |
| atexit handler — gracefully shuts down all workers and frees shared memory. | |
Variables | |
| find_distance.benchmark | |
| find_distance.enabled | |
| str | find_distance.JETSON_CLOCKS_CONF = '/tmp/jetson_clocks_backup.conf' |
| Temporary file path used to store the original Jetson clock state before locking. | |
| int | find_distance.CAP_W = 640 |
| Capture frame width in pixels. | |
| int | find_distance.CAP_H = 480 |
| Capture frame height in pixels. | |
| int | find_distance.DISP_W = 640 |
| Display frame width in pixels. | |
| int | find_distance.DISP_H = 480 |
| Display frame height in pixels. | |
| tuple | find_distance.CAP_SHAPE = (CAP_H, CAP_W, 3) |
| NumPy shape tuple for a raw capture frame (H, W, 3). | |
| tuple | find_distance.DISP_SHAPE = (DISP_H, DISP_W, 3) |
| NumPy shape tuple for a display frame (H, W, 3). | |
| list | find_distance.SOURCES = ['./recordings/left_side_cam.mp4', './recordings/right_side_ccam.mp4'] |
| Video sources for each camera. | |
| int | find_distance.CAPTURE_FPS = 30 |
| Target capture framerate for live cameras and recording output. | |
| int | find_distance.INFERENCEFPS = 15 |
| Maximum inference framerate. | |
| int | find_distance.IMGSZ = 128 |
| Input image size (square) fed to the YOLO model in pixels. | |
| float | find_distance.CONF = 0.25 |
| YOLO detection confidence threshold. | |
| str | find_distance.MODEL_PATH = "./models/yolo11n.engine" |
| Path to the YOLO model file. | |
| int | find_distance.MAX_BATCH = 3 |
| Maximum batch size passed to the YOLO model per inference call. | |
| list | find_distance.CLASSES = [2, 3, 5, 7] |
| COCO class IDs to detect. | |
| list | find_distance.CAM_ORDER = ['Left', 'Right'] |
| Ordered list of camera name strings. | |
| list | find_distance.DISPLAY_ORDER = [0, 1] |
| Order in which camera panels are stitched horizontally in the display window. | |
| list | find_distance.INF_IDX = [0, 1] |
| Camera indices that are sent to the inference pipeline. | |
| set | find_distance.SPOT_CAMS = {'Left', 'Right'} |
| Set of camera names that have parking spot ROIs defined and should be checked for occupancy. | |
| dict | find_distance.MIN_BOX_H = {'Left': 30, 'Right': 30} |
| Per camera minimum bounding box height in pixels. | |
| float | find_distance.INTERSECT_ALLOWANCE = 0.15 |
| Minimum fraction of a spot's mask area that a bounding box must overlap to mark the spot as occupied. | |
| int | find_distance.AUTO_CALIBRATE_INTERVAL = 0 |
| Seconds between automatic ROI recalibration passes. | |
| dict | find_distance.ROIS |
| Per camera list of parking spot definitions. | |
| dict | find_distance.SPOT_MASK = {} |
| Per camera list of boolean NumPy arrays (DISP_H x DISP_W) pre-rasterised from ROIS. | |
| dict | find_distance.CALIB_PARAMS |
| Per camera calibration parameters used by generate_rois() and auto_caliberate_rois(). | |
| find_distance.m = np.zeros((DISP_H, DISP_W), dtype=np.uint8) | |
| tuple | find_distance.OCCUPIED_COLOR = (0, 0, 220) |
| BGR colour used to fill occupied spot overlays (red tint). | |
| tuple | find_distance.EMPTY_COLOR = (0, 220, 80) |
| BGR colour used to fill empty spot overlays (green tint). | |
| float | find_distance.SPOT_ALPHA = 0.25 |
| Opacity of the spot overlay blend. | |
| find_distance.store_lock = threading.Lock() | |
| Threading lock protecting all shared state variables (spot_states, _last_boxes, etc.) from concurrent reads/writes by the inference and display threads. | |
| dict | find_distance.spot_states |
| Per-camera list of occupancy strings for each spot. | |
| list | find_distance.RAW_SHM_OBJS = [] |
| SharedMemory objects holding raw (unannotated) camera frames. | |
| list | find_distance.ANN_SHM_OBJS = [] |
| SharedMemory objects holding annotated frames written by the inference thread. | |
| list | find_distance.RAW_BUFS = [] |
| NumPy arrays mapped onto RAW_SHM_OBJS for zero copy frame access. | |
| list | find_distance.ANN_BUFS = [] |
| NumPy arrays mapped onto ANN_SHM_OBJS for zero copy annotated frame access. | |
| list | find_distance.ANN_LOCKS = [] |
| Per camera multiprocessing Locks protecting ANN_BUFS entries. | |
| list | find_distance.RAW_LOCKS = [] |
| Per-camera multiprocessing Locks protecting RAW_BUFS entries. | |
| list | find_distance.RAW_FRAME_ID = [] |
| Per camera shared Value('i') counters incremented each time a new frame is written. | |
| list | find_distance.PROCESSES = [] |
| List of capture worker Process objects, kept for clean shutdown. | |
| int | find_distance._MAIN_PID = 0 |
| PID of the main process. | |
| find_distance.STOP_EVENT = Event() | |
| Multiprocessing Event shared with all worker processes and threads. | |
| dict | find_distance._last_boxes = {cam: [] for cam in CAM_ORDER} |
| Per camera cache of the most recent raw bounding boxes (display coordinates). | |
| dict | find_distance._last_ann_boxes = {cam: [] for cam in CAM_ORDER} |
| Per camera cache of (box, score, name) tuples from the last inference pass. | |
| dict | find_distance._empty_streak = {cam: [] for cam in CAM_ORDER} |
| Per camera, per spot counter of consecutive inference frames with no detection. | |
| dict | find_distance._manual_bounds = {cam: [] for cam in CAM_ORDER} |
| Per camera list of manually set spot boundary X positions from the CalibWindow. | |
| dict | find_distance._debug_frames = {} |
| Per camera debug visualisation frames showing detected edges and boundary lines. | |
| int | find_distance.EMPTY_CONFIRM_FRAMES = 3 |
| Number of consecutive inference frames a spot must appear empty before its state is changed from 'Car' to 'Empty'. | |
| find_distance.app = Flask(__name__) | |
| find_distance.roi_editor = ROIEditor() | |
| find_distance.calib_window = CalibWindow() | |
| find_distance.args = parse_args() | |
| find_distance.frame_ready_event = Event() | |
| find_distance.need_annotation = args.test or args.record | |
| float | find_distance.usb_1 = 2.1 |
| float | find_distance.usb_2 = 2.2 |
| find_distance.line = f.readline() | |
| find_distance.port = float(re.search(r'\d+\.\d+', line).group()) | |
| find_distance.idx = int(re.search(r'\d+', line).group()) | |
| find_distance.probe = cv2.VideoCapture(src) | |
| find_distance.ok | |
| find_distance._ | |
| find_distance.raw = SharedMemory(create=True, size=CAP_H * CAP_W * 3) | |
| find_distance.ann = SharedMemory(create=True, size=DISP_H * DISP_W * 3) | |
| find_distance.p | |
| int | find_distance.deadline = time.time() + 15 |
| find_distance.auto_calibrate_t | |
| find_distance.inf_t | |
| find_distance.target | |
| find_distance.daemon | |
| find_distance.rec_t = Thread(target=record_loop, args=(STOP_EVENT, args.annotate), daemon=True) | |
@brief Smart parking solution designed to help drivers efficiently locate available parking spaces at busy locations, such as a university campus.
Captures frames from two cameras (left and right), runs batched YOLO inference to detect vehicles, and determines parking spot occupancy by checking bounding box overlap against pre-defined ROI polygons. Results are streamed via a Flask REST API, with optional visualization and recording functionality.
Machine: Jetson Orin Nano

CPU core layout:

| Core | Role |
|---|---|
| 0 | Main / Flask API |
| 1 | Camera 0 capture process |
| 2 | Camera 1 capture process |
| 3 | Auto-calibration thread |
| 4 | YOLO inference thread |
| 5 | Display / Record thread |
| Key | Action |
|---|---|
| I | Toggle editor on/off |
| Left click | Add point to current polygon |
| Right click | Undo last point |
| Enter | Finish polygon and print coordinates to terminal |
| Backspace | Clear all points for current polygon |
| Tab | Cycle active camera (Left / Right) |
| Q | Quit |
Definition in file find_distance.py.
protected
Fallback boundary estimator that infers spot dividers from car center positions.
Used when lane marking detection finds too few divider lines. Takes the horizontal centers of detected cars that are close to near_y and places divider boundaries at the midpoints between adjacent cars. Falls back to a uniform layout (after merge_narrow_spots cleaning) if fewer than 2 car centers are available or if the resulting boundary count does not match n_spots+1.
| detections | List of [x1,y1,x2,y2] detection boxes, or None. |
| near_y | Y pixel of the near edge used to filter relevant detections. |
| left_x | Left boundary X pixel. |
| right_x | Right boundary X pixel. |
| n_spots | Expected number of parking spots. |
| fallback | Uniform boundary list used when car-based estimation fails. |
Definition at line 1857 of file find_distance.py.
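The midpoint placement can be sketched as follows (a simplified illustration with a hypothetical name, not the actual implementation; the near_y proximity filter described above is omitted):

```python
def boundaries_from_cars(detections, left_x, right_x, n_spots, fallback):
    # horizontal centres of the detected cars, left to right
    centers = sorted((x1 + x2) / 2 for x1, y1, x2, y2 in (detections or []))
    if len(centers) < 2:
        return fallback  # too few cars to infer anything
    # place a divider at the midpoint between each pair of adjacent cars
    mids = [(a + b) / 2 for a, b in zip(centers, centers[1:])]
    bounds = [float(left_x), *mids, float(right_x)]
    # the real code also rejects a boundary count that does not match n_spots + 1
    return bounds if len(bounds) == n_spots + 1 else fallback
```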
| find_distance.annotate_frame | ( | np.ndarray | frame, |
| list | disp_boxes, | ||
| list | scores, | ||
| list | names, | ||
| str | cam_name ) |
Draw YOLO detection boxes and confidence labels directly onto a frame (in-place).
Iterates over the paired box/score/name lists and draws a filled rectangle label above each bounding box. Boxes shorter than MIN_BOX_H are skipped to ignore noise from distant or partial detections. Modifies the frame array in-place to avoid an extra memory allocation.
| frame | BGR uint8 NumPy array to annotate. Modified in-place. |
| disp_boxes | List of [x1, y1, x2, y2] bounding boxes in display coordinates. |
| scores | Confidence score (float) for each box. |
| names | COCO class name string for each box (e.g. "car", "truck"). |
| cam_name | Camera identifier used to look up the MIN_BOX_H threshold. |
Draws annotations directly onto the frame with no copy, for lower memory overhead.
Definition at line 430 of file find_distance.py.
| list[dict] find_distance.auto_caliberate_rois | ( | str | cam_name, |
| np.ndarray | frame, | ||
| dict | current_params, | ||
| list | detections = None ) |
Automatically recalibrate ROI polygons for a camera using edge detection.
Analyses a snapshot frame using the following pipeline:
Updates CALIB_PARAMS in-place and syncs the CalibWindow trackbars if open. Returns None and prints a warning if insufficient lines are detected.
| cam_name | Camera identifier string, e.g. "Left". |
| frame | BGR display-resolution frame to analyse. |
| current_params | Current CALIB_PARAMS dict for this camera. |
| detections | Optional list of [x1,y1,x2,y2] boxes to exclude car edges from the line analysis. |
Definition at line 649 of file find_distance.py.
| find_distance.auto_calibrate_loop | ( | stop_event | ) |
Background auto-calibration thread — periodically re-runs ROI calibration.
Runs as a daemon Thread pinned to CPU core 3. Sleeps for AUTO_CALIBRATE_INTERVAL seconds between passes (set to 0 to disable). On each pass it iterates over all SPOT_CAMS, skips cameras where the CalibWindow is open (to avoid fighting the user's manual adjustments), and calls auto_caliberate_rois() on the latest snapshot. If calibration succeeds, rebuild_spot_masks() is called to apply the new ROIs immediately.
Skips cameras with more than 3 simultaneous detections, as a heavily occupied lot provides poor lane-marking signal for edge detection.
| stop_event | Multiprocessing Event polled to exit the loop. |
Definition at line 1653 of file find_distance.py.
| list[float] find_distance.blend_boundaries | ( | list[float] | detected, |
| list[float] | manual, | ||
| float | manual_weight = 0.15 ) |
Definition at line 1421 of file find_distance.py.
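This function carries no brief in the source. Given the description of _manual_bounds ("blended into auto-calibrated boundaries with a small weight"), a plausible reading is an element-wise weighted average; this sketch is an assumption, not the documented behaviour:

```python
def blend(detected, manual, manual_weight=0.15):
    # assumed behaviour: fall back to the detected list when the manual
    # boundaries are absent or the counts do not line up
    if not manual or len(manual) != len(detected):
        return detected
    # element-wise weighted average, biased toward the detected positions
    return [(1 - manual_weight) * d + manual_weight * m
            for d, m in zip(detected, manual)]
```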
| find_distance.capture_worker | ( | int | cam_id, |
| int | src, | ||
| str | raw_shm_name, | ||
| raw_lock, | |||
| raw_frame_id, | |||
| frame_ready_event, | |||
| stop_event ) |
Camera capture worker — runs as a separate Process per camera.
Opens the video source (V4L2 device or file), configures resolution and FPS, then loops reading frames and writing them into the shared memory buffer. After each write it increments raw_frame_id and sets frame_ready_event to immediately wake the inference thread without polling.
For live cameras, YUYV format and a buffer size of 1 are set to always deliver the most recent frame. For file sources, the video rewinds at EOF. The process ignores SIGINT so Ctrl+C is handled only by the main process via STOP_EVENT.
Pinned to CPU core (cam_id + 1) by the main process after spawning.
| cam_id | Zero-based camera index. |
| src | V4L2 device index (int) or video file path (str). |
| raw_shm_name | Name of the shared memory block to write frames into. |
| raw_lock | Multiprocessing Lock protecting the shared buffer. |
| raw_frame_id | Shared Value('i') incremented on each new frame. |
| frame_ready_event | Multiprocessing Event set after each frame write. |
| stop_event | Multiprocessing Event polled to trigger shutdown. |
Definition at line 1065 of file find_distance.py.
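The per-frame publish step described above (write under the lock, bump the frame counter, set the event) can be sketched with stdlib primitives; the helper name and single-shot form are illustrative, not the actual capture loop:

```python
import numpy as np
from multiprocessing import Event, Lock, Value
from multiprocessing.shared_memory import SharedMemory

def publish_frame(frame, shm, raw_lock, raw_frame_id, frame_ready_event):
    # map a NumPy view onto the shared block (no copy of the buffer itself)
    buf = np.ndarray(frame.shape, dtype=np.uint8, buffer=shm.buf)
    with raw_lock:
        buf[:] = frame                 # write the frame into shared memory
        raw_frame_id.value += 1        # consumers compare this to last_seen
    frame_ready_event.set()            # wake the inference thread immediately
```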
| find_distance.check_parking_spots | ( | str | cam_name, |
| list | disp_boxes ) |
Determine occupancy state for each parking spot ROI on a given camera.
For every bounding box received from the inference thread, a boolean pixel mask is created and compared against each pre-rasterised spot mask using bitwise AND. A spot is marked 'Car' if the overlap pixel count exceeds INTERSECT_ALLOWANCE multiplied by the total spot mask area. Each car is assigned to the single spot it overlaps the most, preventing one large vehicle from claiming multiple spots.
To avoid flickering when a car is momentarily missed by the detector, a spot only transitions back to 'Empty' after EMPTY_CONFIRM_FRAMES consecutive frames with no detection, tracked via _empty_streak.
| cam_name | Camera identifier string, e.g. "Left" or "Right". |
| disp_boxes | List of [x1, y1, x2, y2] bounding boxes in display pixel coordinates. |
A spot is occupied if a bounding box overlaps at least INTERSECT_ALLOWANCE (15%) of its ROI.
Definition at line 362 of file find_distance.py.
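The mask-AND overlap test can be sketched as below (illustrative helper, not the source implementation; the EMPTY_CONFIRM_FRAMES debounce is omitted):

```python
import numpy as np

def occupancy(disp_boxes, spot_masks, allowance=0.15):
    states = ['Empty'] * len(spot_masks)
    areas = [int(m.sum()) for m in spot_masks]        # total pixels per spot
    for x1, y1, x2, y2 in disp_boxes:
        # rasterise the bounding box as a boolean mask in display coordinates
        box = np.zeros_like(spot_masks[0], dtype=bool)
        box[y1:y2, x1:x2] = True
        overlaps = [int((box & m).sum()) for m in spot_masks]
        # each car claims only the single spot it overlaps the most
        best = int(np.argmax(overlaps))
        if areas[best] and overlaps[best] > allowance * areas[best]:
            states[best] = 'Car'
    return states
```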
| tuple[list[float], int] find_distance.detect_spot_boundaries | ( | np.ndarray | frame, |
| int | near_y, | ||
| int | far_y, | ||
| int | left_x, | ||
| int | right_x, | ||
| int | n_spots, | ||
| list | detections = None, | ||
| str | cam_name = None ) |
Detect parking spot divider X positions from lane markings in a camera frame.
Pipeline:
When CALIB_DEBUG is set, saves an annotated debug frame to _debug_frames.
| frame | BGR display-resolution frame to analyse. |
| near_y | Y pixel of the near edge of the parking area. |
| far_y | Y pixel of the far edge of the parking area. |
| left_x | Left boundary X pixel. |
| right_x | Right boundary X pixel. |
| n_spots | Expected number of parking spots. |
| detections | Optional list of [x1,y1,x2,y2] detection boxes for fallback. |
| cam_name | Camera name used for debug frame storage. |
Definition at line 1748 of file find_distance.py.
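The bullet list for the pipeline was lost in extraction. As an illustrative stand-in only (not the actual pipeline, which the brief says uses lane-marking detection), divider candidates can be read off a column-brightness profile of the parking strip:

```python
import numpy as np

def column_peaks(gray, near_y, far_y, left_x, right_x, thresh=200):
    band = gray[far_y:near_y, left_x:right_x]   # parking-area strip
    profile = band.mean(axis=0)                 # per-column brightness
    xs = np.flatnonzero(profile > thresh) + left_x
    # collapse runs of adjacent bright columns into single divider positions
    runs = np.split(xs, np.flatnonzero(np.diff(xs) > 1) + 1)
    return [float(r.mean()) for r in runs if r.size]
```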
| find_distance.display_loop | ( | stop_event | ) |
Display loop — renders the live camera feed with overlays in a GUI window.
Runs in the main thread (or a dedicated thread) when -T / --test is passed. Pinned to CPU core 5. Creates a single OpenCV window and stitches all camera panels side by side each frame. For each panel it:
After stitching, roi_editor.draw_overlay() is called to paint the ROI editor UI if it is active. An optional CALIB_DEBUG window shows edge detection output.
Keyboard bindings (in addition to ROI editor keys):
| stop_event | Multiprocessing Event polled to exit the loop. |
Displays what the cams see. Press I to toggle the interactive ROI editor.
Definition at line 1290 of file find_distance.py.
| np.ndarray find_distance.draw_spot_overlays | ( | np.ndarray | frame, |
| str | cam_name ) |
Blend semi-transparent ROI polygon overlays onto a camera frame.
For each defined parking spot, a filled polygon is drawn onto a copy of the frame using the occupancy-dependent colour (OCCUPIED_COLOR or EMPTY_COLOR), then blended back onto the original using cv2.addWeighted with SPOT_ALPHA opacity. A spot ID and state label is drawn at the polygon centroid.
| frame | BGR uint8 NumPy array to draw onto. Not modified in-place; a blended copy is returned. |
| cam_name | Camera identifier used to look up ROIS and spot_states. |
Blend ROI polygons into the frame
Definition at line 460 of file find_distance.py.
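The SPOT_ALPHA blend is numerically equivalent to cv2.addWeighted(colored, alpha, frame, 1 - alpha, 0) applied over the polygon pixels; a NumPy-only sketch of that arithmetic (the helper name and mask argument are illustrative):

```python
import numpy as np

def blend_overlay(frame, mask, color, alpha=0.25):
    out = frame.copy()                   # the original frame is not modified
    layer = out[mask].astype(np.float32)
    tint = np.array(color, dtype=np.float32)
    # alpha-blend the tint colour over the masked pixels only
    out[mask] = (alpha * tint + (1.0 - alpha) * layer).astype(np.uint8)
    return out
```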
| list[dict] find_distance.generate_rois | ( | str | cam_id, |
| tuple | vanishing_point, | ||
| int | near_y, | ||
| int | far_y, | ||
| int | left_x, | ||
| int | right_x, | ||
| int | n_spots = 5, | ||
| int | n_rows = 1, | ||
| float | fisheye_distortion = 0.0, | ||
| float | perspective_strength = 1.0, | ||
| str | name = None, | ||
| list | spot_boundaries = None, | ||
| ** | kwargs ) |
Procedurally generate perspective-correct parking spot ROI polygons.
Computes ROI polygons that match real-world lane markings under camera perspective distortion. The algorithm works by interpolating horizontal boundary positions between a near Y row and a far Y row using a vanishing point to model convergence, then optionally applying a radial fisheye correction. Three internal helpers handle the geometry:
horizontal_bounds(y): Returns the left/right X extents at a given Y row, blending between a rectangular layout and full-perspective convergence toward the vanishing point using perspective_strength.
scale_x_to_y(x_at_near, y): Projects a single X position from the near row to any other Y row along the perspective gradient.
fisheye(x, y): Applies barrel/pincushion radial distortion correction. A positive fisheye_distortion value corrects barrel distortion (wide-angle lenses); negative corrects pincushion.
If spot_boundaries is provided as a list of (near_x, far_x) tuples, those explicit positions are used instead of uniform division.
| cam_id | Camera identifier string, used as spot ID prefix. |
| vanishing_point | (x, y) pixel coordinate of the perspective vanishing point. |
| near_y | Y pixel of the near (bottom) edge of the parking area. |
| far_y | Y pixel of the far (top) edge of the parking area. |
| left_x | Left boundary X pixel of the parking area. |
| right_x | Right boundary X pixel of the parking area. |
| n_spots | Number of parking spots per row (default 5). |
| n_rows | Number of rows of spots (default 1). |
| fisheye_distortion | Radial distortion coefficient. 0.0 disables correction. |
| perspective_strength | Blend between rect (0.0) and full-perspective (1.0) layout. |
| name | Optional spot ID prefix override (defaults to cam_id[0]). |
| spot_boundaries | Optional pre-computed boundary list. Length must be n_spots+1. |
Definition at line 526 of file find_distance.py.
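A hedged sketch of the near/far interpolation idea behind horizontal_bounds (fisheye correction and scale_x_to_y are omitted, and the exact blend in the source may differ):

```python
def horizontal_bounds(y, near_y, far_y, left_x, right_x, vp, strength=1.0):
    # t runs from 0 at the near row to 1 at the far row
    t = (near_y - y) / float(near_y - far_y)
    vx = vp[0]
    # full perspective: both edges converge linearly toward the vanishing point x
    persp_l = left_x + t * (vx - left_x)
    persp_r = right_x + t * (vx - right_x)
    # blend between the rectangular layout (strength 0) and full convergence (1)
    l = (1 - strength) * left_x + strength * persp_l
    r = (1 - strength) * right_x + strength * persp_r
    return l, r
```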
| find_distance.get_detections | ( | ) |
Flask REST endpoint — returns current parking spot occupancy states.
Called by the front end or any HTTP client to poll occupancy without needing direct access to the process. Acquires store_lock briefly to get a consistent snapshot of spot_states.
Definition at line 340 of file find_distance.py.
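The brief-lock snapshot pattern the endpoint relies on looks roughly like this (Flask routing omitted; the payload shape is an assumption):

```python
import json
import threading

store_lock = threading.Lock()
spot_states = {'Left': ['Car', 'Empty'], 'Right': ['Empty', 'Empty']}

def get_detections():
    # hold the lock only long enough to copy, so inference is never blocked
    with store_lock:
        snapshot = {cam: list(states) for cam, states in spot_states.items()}
    return json.dumps(snapshot)
```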
| find_distance.inference_loop | ( | bool | need_annotations, |
| frame_ready_event, | |||
| stop_event ) |
YOLO inference thread — batches frames from all cameras and runs detection.
Runs as a daemon Thread pinned to CPU core 4. On startup it performs 5 warm-up inference passes with dummy frames to prime the TensorRT engine and avoid a latency spike on the first real frame.
Each iteration checks whether new frames are available by comparing raw_frame_id against last_seen. If new frames exist they are resized to IMGSZ, padded to MAX_BATCH with blank frames, and passed to the YOLO model in a single batched call. Detections are scaled back to display resolution using sx/sy factors.
Results are processed per camera:
Sleeps briefly and waits on frame_ready_event between inference cycles to avoid busy-waiting while respecting the INFERENCEFPS cap.
| need_annotations | If True, annotated frames are rendered and written to ANN_BUFS. |
| frame_ready_event | Event set by capture workers when a new frame is available. |
| stop_event | Event polled to trigger shutdown. |
Definition at line 1167 of file find_distance.py.
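The pad-to-MAX_BATCH step can be illustrated as follows (hypothetical helper; the real loop also resizes frames from capture resolution and tracks raw_frame_id):

```python
import numpy as np

def pad_batch(frames, max_batch, imgsz=128):
    # pad with blank images so the engine always sees a fixed batch size
    blank = np.zeros((imgsz, imgsz, 3), dtype=np.uint8)
    batch = list(frames)[:max_batch]
    batch += [blank] * (max_batch - len(batch))
    return np.stack(batch)
```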
| list[float] find_distance.merge_narrow_spots | ( | list[float] | boundaries, |
| float | min_width = 80.0 ) |
Remove spot boundaries that would produce slots narrower than min_width pixels.
Iterates through the boundary list and skips any position that would create a slot width below min_width relative to the previous kept boundary. This prevents the ROI generator from producing tiny sliver polygons when line detection finds spurious closely-spaced dividers. The last boundary is always forced to match the original final value to preserve the overall parking area extent.
| boundaries | List of X boundary positions (floats), including left and right edges. |
| min_width | Minimum acceptable slot width in pixels (default 80.0). |
Definition at line 1705 of file find_distance.py.
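A plausible sketch of the merge, assuming a sorted boundary list that includes both outer edges (names are illustrative):

```python
def merge_narrow(boundaries, min_width=80.0):
    kept = [boundaries[0]]
    for b in boundaries[1:]:
        # skip any boundary that would create a slot narrower than min_width
        if b - kept[-1] >= min_width:
            kept.append(b)
    # always preserve the overall parking-area extent
    kept[-1] = boundaries[-1]
    return kept
```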
| find_distance.parse_args | ( | ) |
Parse command-line arguments.
Supports three flags that can be combined freely:
Definition at line 101 of file find_distance.py.
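The flag list itself was lost in extraction. From the usages visible elsewhere on this page (args.test, args.record, args.annotate) and the -T / -r short forms mentioned under display_loop and record_loop, a plausible reconstruction is (the --annotate spelling is an assumption):

```python
import argparse

def parse_args(argv=None):
    p = argparse.ArgumentParser(description='Parking Finder')
    p.add_argument('-T', '--test', action='store_true',
                   help='show the live display window')
    p.add_argument('-r', '--record', action='store_true',
                   help='record camera frames to MP4')
    p.add_argument('--annotate', action='store_true',
                   help='record/display annotated frames')
    return p.parse_args(argv)
```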
| find_distance.rebuild_spot_masks | ( | str | cam_name, |
| list[dict] | new_rois, | ||
| bool | reset_states = False ) |
Rebuild the in-memory ROI and mask tables for a camera after calibration.
Re-rasterises each new ROI polygon into a boolean pixel mask and atomically replaces the global ROIS, SPOT_MASK, and spot_states entries under store_lock. If the number of spots has changed, spot_states and _empty_streak are reset to avoid index mismatches.
| cam_name | Camera identifier string. |
| new_rois | New list of ROI dicts from generate_rois() or auto_caliberate_rois(). |
| reset_states | If True, forces spot_states back to all 'Empty' regardless of whether the spot count changed. |
Definition at line 799 of file find_distance.py.
| find_distance.record_loop | ( | stop_event, | |
| use_anns = False ) |
Recording loop — writes camera frames to timestamped MP4 files.
Runs as a daemon Thread when -r / --record is passed. Pinned to CPU core 5 (shared with the display loop, which is absent in headless record mode). Creates one MP4 file per camera in the recordings/ directory, named with a timestamp prefix and the camera name.
When use_anns is True, annotated frames from ANN_BUFS are written instead of raw frames, burning in bounding boxes and ROI overlays permanently. Note that ANN_BUFS are only populated when need_annotations is True, which requires either -T or -r to be passed at startup.
| stop_event | Multiprocessing Event polled to stop recording and flush files. |
| use_anns | If True, record annotated frames; otherwise record raw frames. |
Writes raw camera frames to the output file (can be edited to write annotated frames).
Definition at line 1595 of file find_distance.py.
| find_distance.set_jetson_clocks | ( | bool | enable | ) |
Lock Jetson hardware clocks at maximum frequency for consistent inference latency.
Stores the current clock state to JETSON_CLOCKS_CONF before locking so it can be restored on shutdown. Should be called at startup (enable=True) and in the atexit handler (enable=False). Prints a coloured warning so the state change is visible in the terminal.
| enable | True to lock clocks at maximum; False to restore original state. |
Definition at line 81 of file find_distance.py.
| np.ndarray find_distance.shm_ndarray | ( | SharedMemory | shm, |
| tuple | shape ) |
Create a NumPy array view backed by a SharedMemory block.
No data is copied — the array directly references the shared memory buffer. All processes sharing the same SharedMemory name will see the same bytes.
| shm | An open SharedMemory object. |
| shape | Desired NumPy shape tuple, e.g. (480, 640, 3). |
Definition at line 314 of file find_distance.py.
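A minimal sketch matching the signature above; because the returned array is a view onto the block, no bytes are copied and writes are visible to every attached process:

```python
import numpy as np
from multiprocessing.shared_memory import SharedMemory

def shm_ndarray(shm, shape):
    # view the shared buffer as a uint8 array of the requested shape
    return np.ndarray(shape, dtype=np.uint8, buffer=shm.buf)
```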
| find_distance.shutdown | ( | ) |
atexit handler — gracefully shuts down all workers and frees shared memory.
Registered with atexit so it runs automatically on normal exit, KeyboardInterrupt, or unhandled exceptions. Only executes in the main process (guarded by _MAIN_PID) to prevent child capture processes from triggering a double-shutdown.
Sets STOP_EVENT, joins all capture processes with a 3-second timeout, unlinks all shared memory blocks, and restores Jetson clock settings.
Definition at line 1893 of file find_distance.py.
protected
Definition at line 1947 of file find_distance.py.
protected
Per camera debug visualisation frames showing detected edges and boundary lines.
Only populated when the CALIB_DEBUG environment variable is set.
Definition at line 300 of file find_distance.py.
protected
Per camera, per spot counter of consecutive inference frames with no detection.
A spot transitions from 'Car' to 'Empty' only after EMPTY_CONFIRM_FRAMES consecutive empty frames, preventing flickering from momentary missed detections.
Definition at line 292 of file find_distance.py.
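The debounce can be sketched as a small state update (hypothetical helper; the real code tracks one streak per spot per camera in _empty_streak):

```python
EMPTY_CONFIRM_FRAMES = 3

def update_state(state, streak, detected):
    if detected:
        return 'Car', 0                     # any hit resets the streak
    streak += 1
    if state == 'Car' and streak < EMPTY_CONFIRM_FRAMES:
        return 'Car', streak                # miss not yet confirmed
    return 'Empty', streak
```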
protected
Per camera cache of (box, score, name) tuples from the last inference pass.
Used by the display loop to draw labels without re-running inference.
Definition at line 287 of file find_distance.py.
protected
Per camera cache of the most recent raw bounding boxes (display coordinates).
Written by the inference thread, read by auto-calibration and the display loop.
Definition at line 283 of file find_distance.py.
protected
PID of the main process.
Used in the atexit handler to avoid running shutdown logic in child processes.
Definition at line 275 of file find_distance.py.
protected
Per camera list of manually set spot boundary X positions from the CalibWindow.
Blended into auto-calibrated boundaries with a small weight to stabilise results.
Definition at line 296 of file find_distance.py.
| find_distance.ann = SharedMemory(create=True, size=DISP_H * DISP_W * 3) |
Definition at line 1954 of file find_distance.py.
| list find_distance.ANN_BUFS = [] |
NumPy arrays mapped onto ANN_SHM_OBJS for zero copy annotated frame access.
Definition at line 258 of file find_distance.py.
| list find_distance.ANN_LOCKS = [] |
Per camera multiprocessing Locks protecting ANN_BUFS entries.
Definition at line 261 of file find_distance.py.
| list find_distance.ANN_SHM_OBJS = [] |
SharedMemory objects holding annotated frames written by the inference thread.
Definition at line 252 of file find_distance.py.
| find_distance.app = Flask(__name__) |
Definition at line 318 of file find_distance.py.
| find_distance.args = parse_args() |
Definition at line 1917 of file find_distance.py.
| int find_distance.AUTO_CALIBRATE_INTERVAL = 0 |
Seconds between automatic ROI recalibration passes.
Set to 0 to disable.
Definition at line 182 of file find_distance.py.
| find_distance.auto_calibrate_t |
Definition at line 1995 of file find_distance.py.
| find_distance.benchmark |
Definition at line 68 of file find_distance.py.
| dict find_distance.CALIB_PARAMS |
Per camera calibration parameters used by generate_rois() and auto_caliberate_rois().
Keys per camera:
Definition at line 210 of file find_distance.py.
| find_distance.calib_window = CalibWindow() |
Definition at line 1579 of file find_distance.py.
| list find_distance.CAM_ORDER = ['Left', 'Right'] |
Ordered list of camera name strings.
Index matches SOURCES and shared memory lists.
Definition at line 162 of file find_distance.py.
| int find_distance.CAP_H = 480 |
Capture frame height in pixels.
Definition at line 117 of file find_distance.py.
| tuple find_distance.CAP_SHAPE = (CAP_H, CAP_W, 3) |
NumPy shape tuple for a raw capture frame (H, W, 3).
Definition at line 126 of file find_distance.py.
| int find_distance.CAP_W = 640 |
Capture frame width in pixels.
Definition at line 114 of file find_distance.py.
| int find_distance.CAPTURE_FPS = 30 |
Target capture framerate for live cameras and recording output.
Definition at line 138 of file find_distance.py.
| list find_distance.CLASSES = [2, 3, 5, 7] |
COCO class IDs to detect.
2=car, 3=motorcycle, 5=bus, 7=truck.
Definition at line 159 of file find_distance.py.
| float find_distance.CONF = 0.25 |
YOLO detection confidence threshold.
Detections below this are discarded
Definition at line 147 of file find_distance.py.
| find_distance.daemon |
Definition at line 2009 of file find_distance.py.
| int find_distance.deadline = time.time() + 15 |
Definition at line 1982 of file find_distance.py.
| int find_distance.DISP_H = 480 |
Display frame height in pixels.
Definition at line 123 of file find_distance.py.
| tuple find_distance.DISP_SHAPE = (DISP_H, DISP_W, 3) |
NumPy shape tuple for a display frame (H, W, 3).
Definition at line 129 of file find_distance.py.
| int find_distance.DISP_W = 640 |
Display frame width in pixels.
Definition at line 120 of file find_distance.py.
| list find_distance.DISPLAY_ORDER = [0, 1] |
Order in which camera panels are stitched horizontally in the display window.
Definition at line 165 of file find_distance.py.
| tuple find_distance.EMPTY_COLOR = (0, 220, 80) |
BGR colour used to fill empty spot overlays (green tint).
Definition at line 230 of file find_distance.py.
| int find_distance.EMPTY_CONFIRM_FRAMES = 3 |
Number of consecutive inference frames a spot must appear empty before its state is changed from 'Car' to 'Empty'.
Reduces false-empty flicker.
Definition at line 304 of file find_distance.py.
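The debounce behaviour can be sketched with a small state helper; `update_state`, `state`, and `empty_streak` are illustrative names, not the actual bookkeeping used by check_parking_spots().

```python
EMPTY_CONFIRM_FRAMES = 3  # value from the source

def update_state(state, empty_streak, detected):
    """Debounced 'Car' -> 'Empty' transition.

    A vehicle detection resets the streak immediately; an empty reading
    only flips the state after EMPTY_CONFIRM_FRAMES consecutive misses.
    """
    if detected:
        return 'Car', 0
    empty_streak += 1
    if empty_streak >= EMPTY_CONFIRM_FRAMES:
        return 'Empty', empty_streak
    return state, empty_streak

state, streak = 'Car', 0
for detected in (False, False, False):
    state, streak = update_state(state, streak, detected)
print(state)  # Empty -- flips on the third consecutive empty frame
```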
| find_distance.enabled |
Definition at line 69 of file find_distance.py.
| find_distance.frame_ready_event = Event() |
Definition at line 1919 of file find_distance.py.
| find_distance.idx = int(re.search(r'\d+', line).group()) |
Definition at line 1936 of file find_distance.py.
| int find_distance.IMGSZ = 128 |
Input image size (square) fed to the YOLO model in pixels.
Definition at line 144 of file find_distance.py.
| list find_distance.INF_IDX = [0, 1] |
Camera indices that are sent to the inference pipeline.
Definition at line 168 of file find_distance.py.
| find_distance.inf_t |
Definition at line 2001 of file find_distance.py.
| int find_distance.INFERENCEFPS = 15 |
Maximum inference framerate.
Caps how often the YOLO model is called.
Definition at line 141 of file find_distance.py.
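One common way to cap the inference rate is to sleep off the remainder of each frame budget; this pacing loop is a generic sketch, not the scheduler actually used in find_distance.py.

```python
import time

INFERENCE_FPS = 15            # INFERENCEFPS in the source
PERIOD = 1.0 / INFERENCE_FPS  # ~66 ms budget per inference call

start = time.monotonic()
last = start
for _ in range(3):
    # ... YOLO inference would run here ...
    elapsed = time.monotonic() - last
    if elapsed < PERIOD:
        time.sleep(PERIOD - elapsed)  # burn the leftover budget
    last = time.monotonic()

total = time.monotonic() - start
print(total >= 2 * PERIOD)  # True -- three paced iterations span at least two periods
```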
| float find_distance.INTERSECT_ALLOWANCE = 0.15 |
Minimum fraction of a spot's mask area that a bounding box must overlap to mark the spot as occupied.
(0.15 = 15%)
Definition at line 179 of file find_distance.py.
| str find_distance.JETSON_CLOCKS_CONF = '/tmp/jetson_clocks_backup.conf' |
Temporary file path used to store the original Jetson clocks state before locking.
Definition at line 110 of file find_distance.py.
| find_distance.line = f.readline() |
Definition at line 1929 of file find_distance.py.
| find_distance.m = np.zeros((DISP_H, DISP_W), dtype=np.uint8) |
Definition at line 222 of file find_distance.py.
| int find_distance.MAX_BATCH = 3 |
Maximum batch size passed to the YOLO model per inference call.
Definition at line 156 of file find_distance.py.
| dict find_distance.MIN_BOX_H = {'Left': 30, 'Right': 30} |
Per-camera minimum bounding box height in pixels.
Boxes shorter than this are ignored to filter out distant vehicles.
Definition at line 175 of file find_distance.py.
| str find_distance.MODEL_PATH = "./models/yolo11n.engine" |
Path to the YOLO model file.
Supports .pt (PyTorch) and .engine (TensorRT).
Definition at line 150 of file find_distance.py.
| find_distance.need_annotation = args.test or args.record |
Definition at line 1923 of file find_distance.py.
| tuple find_distance.OCCUPIED_COLOR = (0, 0, 220) |
BGR colour used to fill occupied spot overlays (red tint).
Definition at line 227 of file find_distance.py.
| find_distance.ok |
Definition at line 1947 of file find_distance.py.
| find_distance.p |
Definition at line 1969 of file find_distance.py.
| find_distance.port = float(re.search(r'\d+\.\d+', line).group()) |
Definition at line 1934 of file find_distance.py.
| find_distance.probe = cv2.VideoCapture(src) |
Definition at line 1946 of file find_distance.py.
| list find_distance.PROCESSES = [] |
List of capture worker Process objects, kept for clean shutdown.
Definition at line 271 of file find_distance.py.
| find_distance.raw = SharedMemory(create=True, size=CAP_H * CAP_W * 3) |
Definition at line 1953 of file find_distance.py.
| list find_distance.RAW_BUFS = [] |
NumPy arrays mapped onto RAW_SHM_OBJS for zero-copy frame access.
Definition at line 255 of file find_distance.py.
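The mapping from SharedMemory blocks to arrays follows the standard buffer-view idiom; this sketch mirrors the shm_ndarray() helper declared in this file, though the actual body may differ.

```python
import numpy as np
from multiprocessing.shared_memory import SharedMemory

CAP_H, CAP_W = 480, 640  # values from the source

def shm_ndarray(shm, shape):
    """View a SharedMemory block as a uint8 NumPy array without copying."""
    return np.ndarray(shape, dtype=np.uint8, buffer=shm.buf)

shm = SharedMemory(create=True, size=CAP_H * CAP_W * 3)
writer_view = shm_ndarray(shm, (CAP_H, CAP_W, 3))
reader_view = shm_ndarray(shm, (CAP_H, CAP_W, 3))
writer_view[0, 0] = (10, 20, 30)               # write through one view...
pixel = tuple(int(v) for v in reader_view[0, 0])
print(pixel)  # (10, 20, 30) -- visible through the other, no copy made
del writer_view, reader_view
shm.close()
shm.unlink()
```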
| list find_distance.RAW_FRAME_ID = [] |
Per-camera shared Value('i') counters incremented each time a new frame is written.
Used by the inference thread to detect stale frames without locking.
Definition at line 268 of file find_distance.py.
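The lock-free staleness check amounts to comparing the shared counter against the last value the inference thread processed; all names in this sketch are illustrative.

```python
from multiprocessing import Value

frame_id = Value('i', 0)  # writer side increments after each new frame
last_seen = 0             # inference side remembers what it already handled

def take_if_fresh():
    """Return True (and advance last_seen) only when a new frame arrived."""
    global last_seen
    current = frame_id.value  # single int read; no lock needed on the reader side
    if current == last_seen:
        return False          # stale: no new frame since the last inference pass
    last_seen = current
    return True

results = [take_if_fresh()]        # no frame written yet -> False
with frame_id.get_lock():
    frame_id.value += 1            # capture worker wrote a frame
results += [take_if_fresh(), take_if_fresh()]
print(results)  # [False, True, False]
```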
| list find_distance.RAW_LOCKS = [] |
Per-camera multiprocessing Locks protecting RAW_BUFS entries.
Definition at line 264 of file find_distance.py.
| list find_distance.RAW_SHM_OBJS = [] |
SharedMemory objects holding raw (unannotated) camera frames.
Definition at line 249 of file find_distance.py.
| find_distance.rec_t = Thread(target=record_loop, args=(STOP_EVENT, args.annotate), daemon=True) |
Definition at line 2014 of file find_distance.py.
| find_distance.roi_editor = ROIEditor() |
Definition at line 1042 of file find_distance.py.
| dict find_distance.ROIS |
Per-camera list of parking spot definitions.
Definition at line 186 of file find_distance.py.
| list find_distance.SOURCES = ['./recordings/left_side_cam.mp4', './recordings/right_side_ccam.mp4'] |
Video sources for each camera.
Can be V4L2 device indices (int) or file paths (str).
Definition at line 135 of file find_distance.py.
| float find_distance.SPOT_ALPHA = 0.25 |
Opacity of the spot overlay blend.
0.0 = invisible, 1.0 = fully opaque.
Definition at line 233 of file find_distance.py.
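The overlay blend corresponds to a standard alpha composite; this NumPy sketch is equivalent to cv2.addWeighted(overlay, SPOT_ALPHA, frame, 1 - SPOT_ALPHA, 0), though draw_spot_overlays() may implement it differently.

```python
import numpy as np

SPOT_ALPHA = 0.25            # value from the source
EMPTY_COLOR = (0, 220, 80)   # BGR green tint for empty spots

def blend_overlay(frame, overlay):
    """Alpha-blend a colour overlay onto a frame (both uint8 BGR)."""
    out = frame * (1.0 - SPOT_ALPHA) + overlay * SPOT_ALPHA
    return out.astype(np.uint8)

frame = np.full((2, 2, 3), 200, np.uint8)
overlay = np.zeros((2, 2, 3), np.uint8)
overlay[:] = EMPTY_COLOR
blended = blend_overlay(frame, overlay)
print(blended[0, 0].tolist())  # [150, 205, 170]
```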
| set find_distance.SPOT_CAMS = {'Left', 'Right'} |
Set of camera names that have parking spot ROIs defined and should be checked for occupancy.
Definition at line 171 of file find_distance.py.
| dict find_distance.SPOT_MASK = {} |
Per-camera list of boolean NumPy arrays (DISP_H x DISP_W) pre-rasterised from ROIS.
True pixels belong to that spot's polygon. Used for fast overlap checks.
Definition at line 195 of file find_distance.py.
| dict find_distance.spot_states |
Per-camera list of occupancy strings for each spot.
Values are 'Empty' or 'Car'. Written by check_parking_spots(), read by the Flask API and display loop.
Definition at line 243 of file find_distance.py.
| find_distance.STOP_EVENT = Event() |
Multiprocessing Event shared with all worker processes and threads.
Set to signal a clean shutdown across the entire system.
Definition at line 279 of file find_distance.py.
| find_distance.store_lock = threading.Lock() |
Threading lock protecting all shared state variables (spot_states, _last_boxes, etc.) from concurrent reads/writes by the inference and display threads.
Definition at line 238 of file find_distance.py.
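A typical access pattern for this lock is to snapshot the shared state rather than hold the lock during serialisation; `read_states` is a hypothetical helper, not a function from the file.

```python
import threading

store_lock = threading.Lock()
spot_states = {'Left': ['Empty', 'Car'], 'Right': ['Empty']}

def read_states():
    """Copy the shared dict under the lock so readers never observe a
    half-updated list while the inference thread is writing."""
    with store_lock:
        return {cam: list(states) for cam, states in spot_states.items()}

snapshot = read_states()
snapshot['Left'][0] = 'Car'     # mutating the copy...
print(spot_states['Left'][0])   # Empty -- the shared state is untouched
```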
| find_distance.target |
Definition at line 2008 of file find_distance.py.
| float find_distance.usb_1 = 2.1 |
Definition at line 1926 of file find_distance.py.
| float find_distance.usb_2 = 2.2 |
Definition at line 1927 of file find_distance.py.