Architecture

Goal

Build a live-capable LED control platform that keeps realtime output deterministic while letting operators change scenes, brightness, tests, and presets without UI jitter leaking into the hot path.

Current Priority

The current delivery order is intentionally software-first:

  1. host-core and shared API
  2. scene, preset, group, parameter, transition, and simulation model
  3. web UI as the primary creative surface
  4. engineering GUI as the technical surface
  5. external show-control adapters such as grandMA
  6. hardware validation and real node activation later

Layer Split

  1. Control layer
    • Shared host API first
    • Creative web UI later
    • Engineering GUI already implemented in crates/infinity_host_ui
    • Monitoring, mapping, diagnostics, and admin
    • Never the timing master for LED output
  2. Realtime engine
    • Owns the monotonic clock (see the sketch after this list)
    • Computes scene state, transitions, and dirty regions
    • Produces transport-ready commands or pixel frames
  3. Transport and node layer
    • Discovery, heartbeat, config sync, sequencing, and recovery
    • Control protocol and realtime protocol stay separate
    • Latest realtime state wins; stale frames may be dropped
  4. ESP32 firmware
    • Receives commands
    • Maintains local buffers
    • Drives three independent outputs per node
    • Handles watchdog and reconnect logic locally
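
A minimal sketch of the control/realtime boundary, assuming a std mpsc channel; the message names are illustrative, not the project's actual command set. The control side only enqueues intents, and the engine drains them between ticks without ever blocking:

```rust
use std::sync::mpsc;
use std::time::{Duration, Instant};

// Illustrative control message; the real command set lives in the shared API.
enum ControlMsg {
    SetBrightness(f32),
    SelectScene(u32),
}

fn main() {
    let (tx, rx) = mpsc::channel::<ControlMsg>();

    // Control layer: may stall or jitter freely, it only enqueues intents.
    std::thread::spawn(move || {
        tx.send(ControlMsg::SetBrightness(0.8)).ok();
        tx.send(ControlMsg::SelectScene(3)).ok();
    });

    // Realtime engine: owns the monotonic clock and never blocks on the UI.
    let start = Instant::now();
    let mut brightness = 1.0_f32;
    for tick in 0..3 {
        // Drain pending intents without blocking the tick.
        while let Ok(msg) = rx.try_recv() {
            match msg {
                ControlMsg::SetBrightness(b) => brightness = b,
                ControlMsg::SelectScene(id) => println!("switch to scene {id}"),
            }
        }
        println!("tick {tick} at {:?}, brightness {brightness}", start.elapsed());
        std::thread::sleep(Duration::from_millis(8)); // ~120 Hz spacing
    }
}
```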

Runtime Model

  • Logic tick target: 120 Hz
  • Frame synthesis target: 60 Hz
  • Network send target: 40-60 Hz, profile dependent
  • Preview target: 10-15 Hz

Preview and telemetry are explicitly degradable. Realtime output is not.
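
A toy fixed-timestep loop showing how these rates could nest off one clock, with preview work as the first thing dropped on overrun. The constants mirror the targets above, but none of this is the project's actual scheduler:

```rust
use std::time::{Duration, Instant};

fn main() {
    // ~120 Hz logic tick; the derived rates hang off the same clock.
    const LOGIC_DT: Duration = Duration::from_micros(8_333);
    let mut next_tick = Instant::now();
    for tick in 0u64..240 {
        // Logic tick (scene state, transitions, dirty regions): always runs.

        if tick % 2 == 0 {
            // Frame synthesis at ~60 Hz.
        }
        if tick % 12 == 0 {
            // Preview at ~10 Hz: allowed to be skipped under load.
        }

        next_tick += LOGIC_DT;
        let now = Instant::now();
        if now < next_tick {
            std::thread::sleep(next_tick - now);
        } else {
            // Overran the budget: resynchronize and shed preview work first,
            // never the logic tick itself.
            next_tick = now;
        }
    }
    println!("ran 240 ticks (~2 s) at a fixed logic timestep");
}
```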

Shared Surface Model

Every surface must talk to the same host API (a sketch follows the list):

  • engineering GUI
  • future creative web UI
  • CLI inspection
  • future grandMA adapter
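
A minimal sketch of that contract, with invented method and preset names. The point is that every surface holds the same trait object; none gets a private backdoor into engine state:

```rust
// Illustrative boundary; the real API is the versioned one in the host core.
pub trait HostApi {
    fn set_brightness(&mut self, value: f32);
    fn activate_preset(&mut self, name: &str) -> bool;
}

// A surface only ever sees the trait, so the GUI, web UI, CLI, and
// show-control adapters all exercise the same contract.
fn operate(surface_name: &str, api: &mut dyn HostApi) {
    api.set_brightness(0.75);
    println!("{surface_name}: preset ok = {}", api.activate_preset("warm_wash"));
}

struct NoopHost;
impl HostApi for NoopHost {
    fn set_brightness(&mut self, _value: f32) {}
    fn activate_preset(&mut self, _name: &str) -> bool { true }
}

fn main() {
    let mut host = NoopHost;
    for surface in ["engineering_gui", "web_ui", "cli"] {
        operate(surface, &mut host);
    }
}
```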

The host core now also carries a runtime show store and persistence layer for:

  • saved presets
  • runtime user groups
  • active scene state
  • creative snapshots and variants

The current software-first implementation uses a simulation-backed host API so looks, presets, parameters, and grouping can be developed before real node activation.
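
A sketch of what the show store and a simulation-backed host could look like. The field names and shapes below are guesses for illustration, not the project's actual schema:

```rust
use std::collections::HashMap;

// Hypothetical shapes for the runtime show store; all names are guesses.
#[derive(Default)]
struct ShowStore {
    presets: HashMap<String, Vec<(String, f32)>>, // preset name -> parameters
    groups: HashMap<String, Vec<u32>>,            // user group -> node ids
    active_scene: Option<String>,
    snapshots: Vec<String>,                       // serialized creative variants
}

// Simulation-backed host: applies the same state changes a real transport
// would, but renders into memory so nothing requires live hardware.
#[derive(Default)]
struct SimulatedHost {
    store: ShowStore,
    sim_pixels: Vec<[u8; 3]>,
}

impl SimulatedHost {
    fn activate_scene(&mut self, name: &str) {
        self.store.active_scene = Some(name.to_string());
        self.sim_pixels = vec![[0, 0, 0]; 106]; // one simulated 106-LED output
    }
}

fn main() {
    let mut host = SimulatedHost::default();
    host.store.presets.insert("warm_wash".into(), vec![("intensity".into(), 0.8)]);
    host.store.groups.insert("ceiling".into(), vec![1, 2, 3]);
    host.activate_scene("opening_look");
    host.store.snapshots.push("variant_a".into());
    println!(
        "scene {:?}: {} presets, {} groups, {} snapshots, {} simulated pixels",
        host.store.active_scene,
        host.store.presets.len(),
        host.store.groups.len(),
        host.store.snapshots.len(),
        host.sim_pixels.len()
    );
}
```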

Modes

Distributed Scene Mode

  • Default operating mode
  • Host sends scene parameters, time basis, seed, palette, and transitions (sketched after this list)
  • Nodes render locally for low bandwidth and better resilience
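
A hypothetical wire shape for such an update, showing why this mode is cheap: a few dozen bytes describe a whole scene instead of per-pixel data. The field names are illustrative, not the project's actual protocol:

```rust
// Hypothetical scene-mode message; fields mirror the list above.
#[derive(Debug)]
struct SceneCommand {
    scene_id: u32,
    time_basis_us: u64,    // shared time reference so nodes stay in phase
    seed: u64,             // deterministic randomness across nodes
    palette: Vec<[u8; 3]>,
    transition_ms: u32,
}

fn main() {
    let cmd = SceneCommand {
        scene_id: 3,
        time_basis_us: 1_000_000,
        seed: 0xDEAD_BEEF,
        palette: vec![[255, 80, 0], [0, 40, 255]],
        transition_ms: 500,
    };
    println!("{cmd:?}");
}
```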

Frame Streaming Mode

  • Used for mapping tests, debugging, and effects that cannot run locally on the nodes
  • Host sends explicit output frames (sketched after this list)
  • Kept logically separate so it does not contaminate the primary scene pipeline
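
A matching sketch for frame streaming; making it a distinct type from the scene command is one way to keep the two pipelines from mixing by accident. Again, the shape is illustrative:

```rust
// Hypothetical frame-streaming message, deliberately a different type from
// the scene command so the two pipelines cannot be confused.
struct FrameCommand {
    node_id: u32,
    output: u8,           // 0 = top, 1 = middle, 2 = bottom
    pixels: Vec<[u8; 3]>, // explicit per-pixel data from the host
}

fn main() {
    let frame = FrameCommand {
        node_id: 7,
        output: 0,
        pixels: vec![[0, 0, 0]; 106],
    };
    println!(
        "frame for node {} output {}: {} pixels",
        frame.node_id, frame.output, frame.pixels.len()
    );
}
```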

Mapping Model

The project configuration separates mapping into three layers:

  1. Hardware mapping
    • Node ID
    • Top, middle, bottom output
    • Physical output label
    • Driver channel reference
    • LED count, direction, color order, enable flag
  2. Layout mapping
    • Optional row and column placement
    • Optional preview transforms only
  3. Group mapping
    • Explicit groups for artistic control and fast operator access

The current example config intentionally keeps layout mapping empty because the old XML is only a spatial reference and the final node-to-room placement must still be confirmed on real hardware.
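
One way the three layers could look as plain Rust config structs, using invented names and a single example node; the actual schema is whatever the project configuration defines:

```rust
// Hypothetical config shapes for the three mapping layers.
struct HardwareOutput {
    label: String,          // physical output label
    driver_channel: String, // driver channel reference, unique within a node
    led_count: u32,
    reversed: bool,         // strip direction
    color_order: String,    // e.g. "GRB"
    enabled: bool,
}

struct NodeMapping {
    node_id: u32,
    top: HardwareOutput,
    middle: HardwareOutput,
    bottom: HardwareOutput,
    layout: Option<(u32, u32)>, // optional row/column, preview transforms only
    groups: Vec<String>,        // explicit operator-facing group names
}

fn main() {
    let output = |label: &str, channel: &str| HardwareOutput {
        label: label.into(),
        driver_channel: channel.into(),
        led_count: 106,
        reversed: false,
        color_order: "GRB".into(),
        enabled: true,
    };
    let node = NodeMapping {
        node_id: 1,
        top: output("T1", "uart4"),
        middle: output("M1", "uart5"),
        bottom: output("B1", "uart6"),
        layout: None, // intentionally empty until placement is confirmed
        groups: vec!["all".into()],
    };
    println!(
        "node {} ({} groups): {}/{}/{} LEDs, layout: {:?}",
        node.node_id,
        node.groups.len(),
        node.top.led_count,
        node.middle.led_count,
        node.bottom.led_count,
        node.layout
    );
}
```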

Validation Gates

The codebase deliberately blocks activation while any of the following remain unresolved (a checking sketch follows the list):

  • UART 6, UART 5, and UART 4 are still marked pending_validation
  • an output's validation state is anything other than validated
  • an output's LED count deviates from 106
  • a node is missing its top, middle, or bottom output
  • driver references are ambiguous or duplicated within a node
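
A minimal sketch of such a gate, assuming simplified output records; the checks mirror the list above, but the shapes are invented:

```rust
// Hypothetical per-output record for gate checking.
struct OutputCfg {
    uart: &'static str,
    validation_state: &'static str, // e.g. "validated" or "pending_validation"
    led_count: u32,
}

// Returns the reasons activation stays blocked; empty means the gate is open.
fn activation_blocked(outputs: &[OutputCfg]) -> Vec<String> {
    let mut reasons = Vec::new();
    if outputs.len() != 3 {
        reasons.push("node must expose top, middle, and bottom outputs".into());
    }
    let mut uarts: Vec<&str> = outputs.iter().map(|o| o.uart).collect();
    uarts.sort();
    uarts.dedup();
    if uarts.len() != outputs.len() {
        reasons.push("duplicate driver references on one node".into());
    }
    for o in outputs {
        if o.validation_state != "validated" {
            reasons.push(format!("{} is {}", o.uart, o.validation_state));
        }
        if o.led_count != 106 {
            reasons.push(format!("{} has {} LEDs, expected 106", o.uart, o.led_count));
        }
    }
    reasons
}

fn main() {
    let outputs = [
        OutputCfg { uart: "uart4", validation_state: "pending_validation", led_count: 106 },
        OutputCfg { uart: "uart5", validation_state: "pending_validation", led_count: 106 },
        OutputCfg { uart: "uart6", validation_state: "pending_validation", led_count: 106 },
    ];
    for reason in activation_blocked(&outputs) {
        println!("blocked: {reason}");
    }
}
```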

Planned Next Steps

  1. Expand creative authoring on top of the now-versioned host API and web UI
  2. Keep the engineering GUI focused on mapping, diagnostics, topology, and admin
  3. Implement transport adapters without coupling them to any single frontend
  4. Add external show-control bridges such as grandMA on the same API boundary and generic adapter interface
  5. Keep hardware activation behind explicit later validation gates