The host side is now aligned on a shared, software-first API. [control.rs](</c:/Users/janni/Documents/RFP/Infinity_Vis _Rust/crates/infinity_host/src/control.rs>) now holds the stable shared model for snapshots, commands, the pattern catalog, presets, groups, parameters, preview, and transitions. On top of it sit the new scene/pattern layer in [scene.rs](</c:/Users/janni/Documents/RFP/Infinity_Vis _Rust/crates/infinity_host/src/scene.rs>) and the simulation-backed host service in [simulation.rs](</c:/Users/janni/Documents/RFP/Infinity_Vis _Rust/crates/infinity_host/src/simulation.rs>).

The new core already covers, purely in software:
- a pattern catalog with `solid_color`, `gradient`, `chase`, `pulse`, `noise`, `walking_pixel`
- preset recall, group targeting, parameter changes, and transitions
- simulated preview data for all 18 outputs
- the same API access for the CLI, the engineering GUI, and later the web UI / grandMA adapter
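As a software-side illustration, a catalog entry could pair a stable pattern id with its parameter defaults. This is a hedged sketch: `PatternDef`, `catalog`, and `find_pattern` are invented names for illustration, not the actual types in `control.rs`.

```rust
// Hypothetical sketch of a pattern-catalog entry; names are illustrative
// assumptions, not the real definitions in control.rs.
#[derive(Debug, Clone, PartialEq)]
struct PatternDef {
    id: &'static str,
    // Parameter name -> default value, e.g. speed or hue.
    param_defaults: Vec<(&'static str, f32)>,
}

// The six patterns listed above, each with plausible default parameters.
fn catalog() -> Vec<PatternDef> {
    vec![
        PatternDef { id: "solid_color", param_defaults: vec![("hue", 0.0)] },
        PatternDef { id: "gradient", param_defaults: vec![("hue", 0.0), ("spread", 1.0)] },
        PatternDef { id: "chase", param_defaults: vec![("speed", 1.0)] },
        PatternDef { id: "pulse", param_defaults: vec![("speed", 0.5)] },
        PatternDef { id: "noise", param_defaults: vec![("density", 0.5)] },
        PatternDef { id: "walking_pixel", param_defaults: vec![("speed", 1.0)] },
    ]
}

// Lookup by stable id, as a surface would do when recalling a pattern.
fn find_pattern<'a>(cat: &'a [PatternDef], id: &str) -> Option<&'a PatternDef> {
    cat.iter().find(|p| p.id == id)
}
```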

In addition, the host CLI now has `snapshot`, a direct JSON view of the shared host state, wired up in [main.rs](</c:/Users/janni/Documents/RFP/Infinity_Vis _Rust/crates/infinity_host/src/main.rs>).

**Surfaces**
The technical local GUI remains and now sits on the new shared API. In [app.rs](</c:/Users/janni/Documents/RFP/Infinity_Vis _Rust/crates/infinity_host_ui/src/app.rs>) it still shows mapping/status/test patterns, now extended with engine/scene/transition status. It deliberately stays engineering-oriented and has not been inflated into the main creative surface.

The example configuration in [project.example.toml](</c:/Users/janni/Documents/RFP/Infinity_Vis _Rust/config/project.example.toml>) is now also more usable as a software playground: more groups, more creative presets, and a better basis for look development without real node activation. The new API orientation is documented in [host_api.md](</c:/Users/janni/Documents/RFP/Infinity_Vis _Rust/docs/host_api.md>) and [architecture.md](</c:/Users/janni/Documents/RFP/Infinity_Vis _Rust/docs/architecture.md>).

**Verification**
`cargo check` and `cargo test -q` pass. In addition, `cargo run -p infinity_host -- snapshot --config config/project.example.toml` runs and returns the shared host snapshot with the catalog, the active scene, the preview, and node and panel status.

The next sensible step is a real API adapter for the upcoming web UI, i.e. HTTP/WebSocket on exactly this host core rather than a frontend-specific parallel architecture.
2026-04-17 11:39:56 +02:00
parent dde35551be
commit 9457666fd6
15 changed files with 6371 additions and 384 deletions


@@ -4,14 +4,25 @@
Build a live-capable LED control platform that keeps realtime output deterministic while letting operators change scenes, brightness, tests, and presets without UI jitter leaking into the hot path.
## Current Priority
The current delivery order is intentionally software-first:
1. host-core and shared API
2. scene, preset, group, parameter, transition, and simulation model
3. web UI as the primary creative surface
4. engineering GUI as the technical surface
5. external show-control adapters such as grandMA
6. hardware validation and real node activation later
## Layer Split
1. Control layer
- Operator workflow
- Presets and topology editing
- Monitoring and diagnostics
- Shared host API first
- Creative web UI later
- Engineering GUI already implemented in `crates/infinity_host_ui`
- Monitoring, mapping, diagnostics, and admin
- Never the timing master for LED output
- First vertical slice is implemented as `crates/infinity_host_ui`
2. Realtime engine
- Owns the monotonic clock
- Computes scene state, transitions, and dirty regions
@@ -35,6 +46,17 @@ Build a live-capable LED control platform that keeps realtime output determinist
Preview and telemetry are explicitly degradable. Realtime output is not.
## Shared Surface Model
Every surface must talk to the same host API:
- engineering GUI
- future creative web UI
- CLI inspection
- future grandMA adapter
The current software-first implementation uses a simulation-backed host API so looks, presets, parameters, and grouping can be developed before real node activation.
## Modes
### Distributed Scene Mode
@@ -79,7 +101,7 @@ The codebase deliberately blocks activation when these remain unresolved:
## Planned Next Steps
1. Expand the new UI slice from mock service to real host transport adapters
2. Implement UDP transport with separate control and realtime sockets
3. Connect firmware driver backends after hardware validation
4. Add deterministic effect registry shared between host planning and firmware capability negotiation
1. Add a network-facing adapter for the shared host API and start the web UI
2. Expand scene authoring and preset editing on top of the existing simulation core
3. Implement transport adapters without coupling them to any single frontend
4. Keep hardware activation behind explicit later validation gates


@@ -11,12 +11,13 @@ Suggested commands:
```powershell
cargo test
cargo run -p infinity_host -- snapshot --config config/project.example.toml
cargo run -p infinity_host_ui
cargo run -p infinity_host -- validate --config config/project.example.toml --mode structural
cargo run -p infinity_host -- plan-boot-scene --config config/project.example.toml --preset-id safe_static_blue
```
The native UI currently runs against the host-core mock service so the operator workflow can be exercised before transport and firmware integration are complete.
The native engineering UI and the CLI snapshot currently run against the simulation-backed host API so looks, presets, grouping, and parameter flow can be exercised before transport and firmware integration are complete.
Before any live activation, run:

docs/host_api.md (new file)

@@ -0,0 +1,80 @@
# Host API
## Purpose
The host API is the stable boundary that all operator surfaces and later external show-control adapters must use.
Current rules:
- no UI is allowed to become the realtime clock
- no frontend-specific assumptions are allowed to leak into scene simulation or transport planning
- future grandMA support must land as an adapter on this API, not as a special-case core path
## Current Implementation
The API lives in:
- `crates/infinity_host/src/control.rs`
- `crates/infinity_host/src/scene.rs`
- `crates/infinity_host/src/simulation.rs`
The engineering GUI already consumes this API through the `HostApiPort` / `HostUiPort` trait boundary.
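The port boundary can be pictured roughly as follows. This is a sketch under assumptions: the stub `HostSnapshot`/`HostCommand` types and the method names are illustrative, not the real trait signatures in the crate.

```rust
// Rough sketch of the port boundary; the actual HostApiPort/HostUiPort
// methods in infinity_host may differ. The types below are stand-in stubs.
#[derive(Debug, Clone, Default)]
struct HostSnapshot {
    master_brightness: f32,
    blackout: bool,
}

enum HostCommand {
    Blackout(bool),
    MasterBrightness(f32),
}

// Every surface (GUI, CLI, future web adapter) only sees this trait.
trait HostApiPort {
    fn snapshot(&self) -> HostSnapshot;
    fn apply(&mut self, cmd: HostCommand);
}

// The simulation-backed service is one implementation behind the port.
struct SimulationHost {
    state: HostSnapshot,
}

impl HostApiPort for SimulationHost {
    fn snapshot(&self) -> HostSnapshot {
        self.state.clone()
    }
    fn apply(&mut self, cmd: HostCommand) {
        match cmd {
            HostCommand::Blackout(on) => self.state.blackout = on,
            HostCommand::MasterBrightness(v) => {
                self.state.master_brightness = v.clamp(0.0, 1.0)
            }
        }
    }
}
```

Because the surfaces depend only on the trait, swapping the simulation for a transport-backed implementation later does not touch any frontend code.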
## Snapshot Model
`HostSnapshot` currently exposes:
- system metadata
- global controls
- engine timing and transition state
- catalog of patterns, presets, and groups
- active scene with parameter values
- simulated preview panels
- node and panel status
- recent event log
This makes it suitable as the shared read model for:
- engineering GUI
- upcoming web UI
- CLI inspection
- later external control bridges
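Taken together, the snapshot shape above might be sketched as a struct like the following; every field name here is an illustrative assumption, not the actual definition in `control.rs`.

```rust
// Illustrative shape of the shared read model; all field names are
// assumptions mirroring the documented snapshot contents.
#[derive(Debug, Clone)]
struct HostSnapshot {
    system: String,              // system metadata
    master_brightness: f32,      // global controls
    blackout: bool,
    engine_time_ms: u64,         // engine timing
    transition_active: bool,     // transition state
    patterns: Vec<String>,       // catalog: patterns, presets, groups
    presets: Vec<String>,
    groups: Vec<String>,
    active_pattern: String,      // active scene with parameter values
    params: Vec<(String, f32)>,
    preview: Vec<Vec<[u8; 3]>>,  // simulated preview panels, RGB per pixel
    nodes_online: usize,         // node and panel status
    events: Vec<String>,         // recent event log
}
```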
## Command Model
`HostCommand` currently supports:
- blackout
- master brightness
- pattern selection
- preset recall
- group selection
- scene parameter changes
- transition duration changes
- per-panel test triggers
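As a sketch, that command set maps naturally onto an enum; the variant names below are assumptions derived from the list above, not the real `HostCommand` definition.

```rust
// Illustrative command enum mirroring the documented command model;
// the actual HostCommand in control.rs may name these differently.
#[derive(Debug, Clone, PartialEq)]
enum HostCommand {
    Blackout(bool),
    MasterBrightness(f32),
    SelectPattern { pattern_id: String },
    RecallPreset { preset_id: String },
    SelectGroup { group_id: String },
    SetParameter { name: String, value: f32 },
    SetTransitionDuration { millis: u64 },
    TriggerPanelTest { panel_index: usize },
}
```

Keeping commands as plain data like this is what lets a future network adapter serialize them without any frontend-specific paths.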
## Simulation Layer
The current `SimulationHostService` is not a throwaway mock. It is the software-first runtime for:
- look exploration
- preset development
- parameter tuning
- future web UI integration
- API contract testing before hardware activation
It simulates:
- active scene state
- pattern rendering previews
- group gating
- transitions
- node connectivity status
- per-panel mapping tests
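For transitions specifically, a simulation can reduce the blend to a simple linear crossfade over the configured duration — a toy sketch of the idea, not the engine's actual blending code:

```rust
// Toy linear crossfade illustrating simulated transitions; the real
// engine blends full scene state, not a single scalar value.
fn crossfade(from: f32, to: f32, elapsed_ms: u64, duration_ms: u64) -> f32 {
    // Past the end of the transition (or with zero duration), we are
    // fully on the target value.
    if duration_ms == 0 || elapsed_ms >= duration_ms {
        return to;
    }
    let t = elapsed_ms as f32 / duration_ms as f32;
    from + (to - from) * t
}
```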
## Near-Term Direction
1. Keep extending this API instead of adding surface-specific data paths.
2. Add a network-facing adapter for the same API when the web UI starts.
3. Keep engineering GUI focused on topology, mapping, diagnostics, and admin.
4. Add grandMA later as an external show-control adapter against this API.