scheng SDK
A Rust SDK for building GPU-accelerated video instruments. Write shader-based processors, mixers, and generators with professional I/O — MIDI, Syphon, NDI, RTMP, webcam, video, and more.
5-Minute Quickstart
Clone a template and get rendering on the GPU immediately.
Graph System
How nodes connect and how execution topology is declared.
I/O Reference
Every input and output protocol with copy-paste code.
What is scheng?
scheng is a Rust engine and SDK for building programmable video instruments — not a fixed VJ application. It gives you the building blocks: a graph execution model, explicit shader contracts, and a modular I/O surface. You wire them into your own executable.
The engine is deliberately minimal: it evaluates frames, it does not own time, schedule execution, or manage devices. These responsibilities belong to the instrument — your application.
Engine responsibilities
Deterministic graph execution · Shader compilation and caching · GPU resource management · Render target allocation · Per-frame parameter binding
Instrument responsibilities
Graph construction · Time and transport · Control input (MIDI, OSC) · Device management · Output routing
5-Minute Quickstart
Use the scheng-gradient template — the simplest possible instrument. One shader, hot-reload, no I/O dependencies.
1. Clone the template

```shell
git clone https://github.com/yourusername/scheng-gradient
cd scheng-gradient
```

2. Place scheng/ next to it

```
projects/
  scheng/             ← SDK workspace
  scheng-gradient/    ← your project (here)
```

3. Run it

```shell
cargo run --release
```

A window opens rendering a GPU shader. You should see an animated gradient.

4. Edit the shader live

Open assets/shaders/main.frag in your editor. Save any change — the window updates instantly, no restart needed. This is hot-reload.

5. Try 4K MSAA

```shell
cargo run --release -- --width 3840 --height 2160 --msaa 4
```

Templates, in increasing order of complexity: scheng-gradient → scheng-processor → scheng-mixer → scheng-video-mixer.
Core Concepts
Six terms define the entire system. Learn these and everything else follows.
- Instrument
- Your application. It constructs the graph, supplies NodeConfigs and FrameCtx, and handles I/O. The engine does not define instruments — instruments define themselves around the engine.
- Graph
- A static, directed acyclic topology describing how nodes connect. The graph defines structure only — no GPU work happens here. It compiles into an immutable execution plan via graph.compile().
- Node
- A single unit of computation. Nodes consume and produce textures. Behavior is determined by the GLSL shader source supplied in NodeConfig, not by the node kind itself.
- NodeConfig
- The per-node configuration surface for each frame. Contains shader source, uniform values, and optional external textures. Owned by the instrument, read by the runtime. Never mutated by the engine.
- FrameCtx
- The immutable per-frame context: width, height, time, frame index. The engine derives the shader uniforms uResolution, uTime, and uFrame from this. The engine never owns or accumulates time.
- ParamStore
- The live parameter hub. MIDI and OSC write targets; the smoother advances values toward targets each tick; shaders read smoothed values via NodeConfig.uniforms. Owned by the instrument, threadsafe.
Architecture
scheng is organized as five isolated layers. Each layer has a single responsibility. No layer reaches backward across a boundary. This isolation guarantees deterministic execution, clear debugging, and long-term SDK stability.
Key architectural principle
Topology is static. Configuration is dynamic. The graph structure (which nodes connect to which) is fixed after compile(). NodeConfig — shader source, uniforms, input textures — changes every frame. This separation is what makes execution deterministic and debuggable.
Given identical Graph + NodeConfig + FrameCtx, execution order and resource bindings are always identical. Shader determinism depends on the shader code itself.
Graph System
The graph is a directed acyclic topology of nodes. It has no GPU concepts: it defines what connects to what. The runtime layer handles how and when.
Build a graph once at startup. Compile it once. Reuse the plan every frame. The only thing that changes per frame is NodeConfig.
```rust
use scheng_graph::{Graph, NodeKind};

// 1. Declare topology
let mut g = Graph::new();
let src = g.add_node(NodeKind::ShaderSource);
let fx  = g.add_node(NodeKind::ShaderPass);
let out = g.add_node(NodeKind::PixelsOut);

// 2. Connect edges
g.connect_named(src, "out", fx, "in").unwrap();
g.connect_named(fx, "out", out, "in").unwrap();

// 3. Compile — validates topology, produces execution plan
let plan = g.compile().unwrap();

// 4. Execute — plan is reused every frame
runtime.execute_frame(&g, &plan, &configs, &ctx, &mut sink)?;
```
Node Kinds
Node kinds describe structural roles, not behavior. Behavior is provided by the GLSL shader source in NodeConfig.frag_shader.
| NodeKind | Role | In ports | Out | Notes |
|---|---|---|---|---|
| ShaderSource | Generator / source | iChannel0–3 (optional) | out | Entry point. No required upstream. |
| ShaderPass | Transformer / effect | in | out | Processes one upstream texture. |
| Crossfade | A/B mixer | a, b | out | Blends two sources via u_tbar. |
| PreviousFrame | Temporal feedback | in | out | Reads last frame's output. Feedback loops. |
| PixelsOut | Terminal / sink trigger | in | — | Triggers all registered output sinks. |
Connections & Ports
Ports are named. connect_named(from_node, "out", to_node, "in") connects them. Port names map to iChannel bindings in the shader:
| Port name(s) | Maps to |
|---|---|
| in · in0 · a · src | iChannel0 |
| in1 · b · src1 | iChannel1 |
| in2 | iChannel2 |
| out | Node's render target |
Compilation
graph.compile() performs cycle detection, port validation, and deterministic topological sorting. It returns an immutable ExecutionPlan that can be safely reused across frames. Structural changes to the graph require recompilation.
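The cycle detection and deterministic topological sort can be pictured as a Kahn-style traversal. This std-only sketch is illustrative, not the actual scheng-graph internals:

```rust
use std::collections::VecDeque;

/// Kahn's algorithm: returns a topological order over `n` nodes,
/// or None if the edge list contains a cycle (which compile() must reject).
fn topo_sort(n: usize, edges: &[(usize, usize)]) -> Option<Vec<usize>> {
    let mut indegree = vec![0usize; n];
    let mut adj: Vec<Vec<usize>> = vec![Vec::new(); n];
    for &(from, to) in edges {
        adj[from].push(to);
        indegree[to] += 1;
    }
    // Start from nodes with no upstream inputs (sources)
    let mut queue: VecDeque<usize> = (0..n).filter(|&i| indegree[i] == 0).collect();
    let mut order = Vec::with_capacity(n);
    while let Some(node) = queue.pop_front() {
        order.push(node);
        for &next in &adj[node] {
            indegree[next] -= 1;
            if indegree[next] == 0 {
                queue.push_back(next);
            }
        }
    }
    // If some node was never emitted, a cycle kept its indegree above zero
    (order.len() == n).then_some(order)
}

fn main() {
    // src(0) → fx(1) → out(2): valid chain
    assert_eq!(topo_sort(3, &[(0, 1), (1, 2)]), Some(vec![0, 1, 2]));
    // a(0) → b(1) → a(0): a raw cycle — feedback requires PreviousFrame instead
    assert!(topo_sort(2, &[(0, 1), (1, 0)]).is_none());
    println!("ok");
}
```

This is also why PreviousFrame exists as a node kind: it breaks what would otherwise be a cycle by reading the previous frame's output rather than the current one.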
NodeConfig
The complete per-node configuration surface for one frame. Owned by the instrument, read by the runtime, never mutated by the engine.
```rust
pub struct NodeConfig {
    pub frag_shader: Option<String>,    // GLSL source, None = builtin gradient
    pub uniforms: HashMap<String, f32>, // u_* values → CustomBlock
    pub output_name: Option<String>,    // named output for external routing
    pub input_textures: [Option<Arc<wgpu::Texture>>; 4], // external textures (webcam etc.)
}
```
Changes to NodeConfig take effect on the next frame. Changing frag_shader triggers recompilation on that frame (lazy, cached per hash).
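The lazy, hash-keyed recompile can be sketched with std's DefaultHasher. The Pipeline and cache types here are stand-ins, not the runtime's actual structures:

```rust
use std::collections::hash_map::DefaultHasher;
use std::collections::HashMap;
use std::hash::{Hash, Hasher};

/// Stand-in for a compiled GPU pipeline.
struct Pipeline { _source_hash: u64 }

struct PipelineCache { compiled: HashMap<u64, Pipeline>, compile_count: usize }

impl PipelineCache {
    fn new() -> Self { Self { compiled: HashMap::new(), compile_count: 0 } }

    /// Hash the shader source; compile only when the hash is unseen.
    fn get_or_compile(&mut self, frag_shader: &str) -> u64 {
        let mut hasher = DefaultHasher::new();
        frag_shader.hash(&mut hasher);
        let key = hasher.finish();
        let Self { compiled, compile_count } = self;
        compiled.entry(key).or_insert_with(|| {
            *compile_count += 1; // the expensive GPU compile happens only here
            Pipeline { _source_hash: key }
        });
        key
    }
}

fn main() {
    let mut cache = PipelineCache::new();
    let v1 = "void main() { fragColor = vec4(v_uv, 0.0, 1.0); }";
    let k1 = cache.get_or_compile(v1);
    let k2 = cache.get_or_compile(v1); // unchanged source: cache hit
    assert_eq!(k1, k2);
    assert_eq!(cache.compile_count, 1);
    cache.get_or_compile("void main() { fragColor = vec4(1.0); }"); // edited file
    assert_eq!(cache.compile_count, 2); // hash mismatch → exactly one recompile
    println!("ok");
}
```

Setting the same frag_shader string every frame is therefore free; only genuinely new source pays the compilation cost.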
Shader Contract
Shaders are written in GLSL 330. The compat layer processes them before GPU compilation: injects the standard uniform block, rewrites iChannel sampler references, and packs custom u_* uniforms into a GPU buffer.
You never write layout qualifiers, binding numbers, or sampler declarations. The contract handles it.
Built-in Uniforms
Always available in every shader, no declaration needed:
| Uniform | Type | Description |
|---|---|---|
| uTime | float | Seconds since instrument start (from FrameCtx.time) |
| uFrame | uint | Monotonic frame counter (from FrameCtx.frame) |
| uResolution | vec2 | Output width and height in pixels |
| iChannel0 | sampler2D | First upstream texture (or external texture) |
| iChannel1 | sampler2D | Second upstream texture |
| iChannel2 | sampler2D | Third upstream texture |
| iChannel3 | sampler2D | Fourth upstream texture |
| v_uv | vec2 (in) | Interpolated UV coordinates [0,1] |
| fragColor | vec4 (out) | Output pixel color — write this |
Minimal shader
```glsl
// assets/shaders/main.frag
void main() {
    vec2 uv = v_uv;
    float t = uTime;
    vec3 col = 0.5 + 0.5 * cos(t + uv.xyx + vec3(0, 2, 4));
    fragColor = vec4(col, 1.0);
}
```
Custom Uniforms
Declare any uniform float u_name; in your shader. The compat layer detects it, strips the declaration, and injects a CustomBlock GLSL uniform block at GPU binding 6. The values come from NodeConfig.uniforms.
```glsl
// In shader — declare custom uniforms freely:
uniform float u_threshold; // solarize threshold
uniform float u_mix;       // wet/dry blend

void main() {
    vec4 src = texture(iChannel0, v_uv);
    vec3 inv = 1.0 - src.rgb;
    vec3 solar = mix(src.rgb, inv, step(u_threshold, src.rgb));
    fragColor = vec4(mix(src.rgb, solar, u_mix), 1.0);
}
```

```rust
// In Rust — supply values via NodeConfig.uniforms:
let mut cfg = NodeConfig::default();
cfg.frag_shader = shader_source.clone();
cfg.uniforms.insert("u_threshold".into(), 0.5);
cfg.uniforms.insert("u_mix".into(), 1.0);
```
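The detection step can be pictured as a scan for `uniform float u_...;` declarations, collected in declaration order — the order CustomBlock packs them. This line-based sketch is illustrative, not the real compat-layer parser:

```rust
/// Extract custom uniform names (u_*) in declaration order —
/// the order the CustomBlock packs them. A sketch, not the real parser.
fn custom_uniforms(glsl: &str) -> Vec<String> {
    glsl.lines()
        .filter_map(|line| {
            // Only `uniform float u_*;` declarations participate in CustomBlock
            let rest = line.trim().strip_prefix("uniform float ")?;
            let name = rest.split(|c: char| c == ';' || c.is_whitespace()).next()?;
            name.starts_with("u_").then(|| name.to_string())
        })
        .collect()
}

fn main() {
    let shader = "uniform float u_threshold; // solarize\nuniform float u_mix;\nvoid main() {}";
    assert_eq!(custom_uniforms(shader), vec!["u_threshold", "u_mix"]);
    println!("ok");
}
```

Because packing follows declaration order, reordering declarations in the shader changes the block layout but not the name-based lookup from NodeConfig.uniforms.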
Hot-Reload
Use AssetWatcher to watch your shader directory. When any file changes, drain the watcher, reload the source, and assign it to NodeConfig.frag_shader. The pipeline recompiles lazily on the next execute_frame().
```rust
use scheng_hotreload::watcher::AssetWatcher;

let mut watcher = AssetWatcher::new("assets").unwrap();

// In tick():
if !watcher.drain().is_empty() {
    self.shader_src = std::fs::read_to_string("assets/shaders/main.frag").ok();
    // No pipeline.clear() needed — hash mismatch triggers recompile automatically
    log::info!("Shader reloaded");
}
```
ParamStore
ParamStore is the central parameter state machine. MIDI and OSC threads write targets. Each tick, step_frame() advances smoothed values toward targets. Shaders read smoothed values through NodeConfig.uniforms.
Defining a schema
```rust
use scheng_param_store::{ParamStore, ParamSchema, schema::ParamDef};

let store = ParamStore::new(ParamSchema {
    version: 1,
    params: vec![
        ParamDef {
            name: "u_tbar".into(),
            ty: "float".into(),
            min: 0.0, max: 1.0, default: 0.0,
            smooth: 0.05,               // 0=instant, higher=slower
            midi_cc: Some(1),           // CC1 → u_tbar
            midi_channel: None,         // None = any channel
            osc_addr: Some("/scheng/tbar".into()),
            node_label: None,
            description: None,
        },
    ],
});
```
Reading values each tick
```rust
// ALWAYS call step_frame() first — this is the smoother
let uniforms = {
    let mut store = param_store.lock().unwrap();
    store.step_frame();        // ← REQUIRED. Without this, MIDI does nothing visible.
    store.all_values().clone() // HashMap<String, f32> — all params
};

// Assign to NodeConfig — params reach shader automatically
cfg.uniforms = uniforms;
```
MIDI Input
Use midir directly — it gives the most control and matches the shadecore pattern. Open in resumed(), store the live connection in your instrument struct.
```rust
use midir::{MidiInput, Ignore};

// In resumed():
let mut midi = MidiInput::new("my-instrument").ok()?;
midi.ignore(Ignore::None);
let ports = midi.ports();
for p in &ports {
    log::info!("MIDI port: '{}'", midi.port_name(p).unwrap_or_default());
}
let port = ports.into_iter().next()?;
let name = midi.port_name(&port).unwrap_or_default();
let store_clone = param_store.clone(); // Arc<Mutex<ParamStore>>, moved into the callback
let conn = midi.connect(&port, "midi-in", move |_ts, msg, _| {
    if msg.len() == 3 && (msg[0] & 0xF0) == 0xB0 {
        // CC message: channel = msg[0] & 0x0F, cc = msg[1], val = msg[2]
        log::info!("[MIDI] CC{} = {}", msg[1], msg[2]);
        store_clone.lock().unwrap().set_by_midi_cc(msg[1], msg[2]).ok();
    }
}, ()).ok()?;
log::info!("MIDI connected: '{}'", name);
```
OSC Input
```rust
use scheng_control_osc_wgpu::OscReceiver;

let osc = OscReceiver::bind("127.0.0.1:9000").unwrap();

// In tick() — drains any pending OSC messages into the store:
osc.poll(&mut *store.lock().unwrap());
```
OSC address format: /scheng/u_param_name, or whatever you set in ParamDef.osc_addr.
Smoothing Reference
| smooth | Behavior | Frames to target | Best for |
|---|---|---|---|
| 0.0 | Instant — target = value immediately | 1 | Buttons, gates, triggers |
| 0.05 | Fast | ~20 | Faders, most CC controls |
| 0.10 | Medium | ~40 | Gradual parameter shifts |
| 0.15 | Slow | ~60 | Ambient color, atmosphere |
| 0.50 | Very slow drift | ~200+ | Auto-evolving values |
Each frame at the target framerate, the smoother applies value += (target - value) * rate — a first-order low-pass toward target. The rate is 1.0 when smooth is 0.0 (instant snap) and shrinks as smooth grows, lengthening the approach.
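The per-tick update can be sketched as a first-order low-pass. The rate mapping here (time constant of roughly 400 × smooth frames, picked to match the table) is illustrative, not the exact ParamStore arithmetic:

```rust
/// One parameter's smoothing state: MIDI/OSC write `target`,
/// step() advances `value`, shaders read `value`.
struct SmoothedParam { value: f32, target: f32, smooth: f32 }

impl SmoothedParam {
    /// First-order low-pass. The rate mapping (≈400 * smooth frame time
    /// constant) matches the table above; the real arithmetic may differ.
    fn step(&mut self) {
        let rate = if self.smooth <= 0.0 {
            1.0 // smooth = 0.0 → snap instantly
        } else {
            (1.0 / (400.0 * self.smooth)).min(1.0)
        };
        self.value += (self.target - self.value) * rate;
    }
}

fn main() {
    // A fader with smooth = 0.05 (≈20-frame time constant)
    let mut fader = SmoothedParam { value: 0.0, target: 1.0, smooth: 0.05 };
    for _ in 0..240 { fader.step(); } // four seconds at 60 fps
    assert!((fader.value - 1.0).abs() < 1e-3); // converged onto the target

    // A trigger with smooth = 0.0 snaps in one step
    let mut gate = SmoothedParam { value: 0.0, target: 1.0, smooth: 0.0 };
    gate.step();
    assert_eq!(gate.value, 1.0);
    println!("ok");
}
```

Note that the value approaches the target asymptotically; "frames to target" in the table is the rough point where the remaining distance stops being visible.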
Webcam Input
```rust
use scheng_input_webcam::Webcam;

// List available cameras
let cams = Webcam::list_cameras();
for (i, name) in cams.iter().enumerate() {
    log::info!("Camera [{}]: {}", i, name);
}

// Open — nokhwa scales to render resolution internally
let cam = Webcam::open(1, width, height, &device, &queue)
    .map(|c| { log::info!("Webcam {}×{}", c.width(), c.height()); c })?;

// Each tick — poll new frame, inject as iChannel0
cam.poll(&queue);
cfg.input_textures[0] = cam.texture_arc();
cfg.frag_shader = Some("void main() {
    fragColor = texture(iChannel0, vec2(v_uv.x, 1.0 - v_uv.y));
}".into());
```
Always sample webcam textures with vec2(v_uv.x, 1.0 - v_uv.y), or the image will appear upside down.
Run with --list-cameras to discover indexes. On macOS, camera index 0 is often OBS Virtual Camera; FaceTime HD is typically index 1.
Video File Input
```rust
use scheng_input_video::VideoDecoder;

let mut vid = VideoDecoder::open("clip.mp4", &device, &queue)?;
log::info!("{}×{} @ {:.1}fps", vid.width(), vid.height(), vid.fps());

// Each tick — upload frame at current time (loops automatically)
vid.upload_frame(time_secs, &queue);
cfg.input_textures[0] = vid.texture_arc();
cfg.frag_shader = Some(passthrough_yflip.into());
```
Supported: any container ffmpeg can decode — mp4, mov, avi, mkv, ProRes, H.264, H.265. BICUBIC scaling. Loops at end of file.
Syphon Receive macOS
```rust
use scheng_input_syphon::SyphonReceiver;

// Deferred init — call from frame 5+ (run loop must be active)
if self.frame >= 5 && !self.syphon_initialized {
    self.syphon_initialized = true;
    let servers = SyphonReceiver::list_servers(mtl_ptr);
    for s in &servers { log::info!("{} from {}", s.name, s.app); }
    self.recv = SyphonReceiver::connect("OBS", mtl_ptr, &device, &queue).ok();
}

// Each tick:
if let Some(ref mut recv) = self.recv {
    recv.poll_with_device(&device, &queue);
    cfg.input_textures[0] = recv.texture_arc();
}
```
Calling list_servers() during resumed() — before the first about_to_wait() fires — yields zero results. Always defer to frame 5 or later.
NDI Receive
```rust
use scheng_input_ndi::NdiReceiver;

let sources = NdiReceiver::find_sources(2000)?;
let mut recv = NdiReceiver::connect(&sources[0].name, &device, &queue)?;

// Each tick:
if recv.poll(&device, &queue) {
    cfg.input_textures[0] = recv.texture_arc();
}
```
RTMP / RTSP Receive
```rust
use scheng_input_rtmp::RtmpReceiver;

let mut recv = RtmpReceiver::open(
    "rtmp://localhost:1935/live/key",
    width, height, &device, &queue
)?;

// Each tick:
recv.poll(&device, &queue);
cfg.input_textures[0] = recv.texture_arc();
```
Any URL ffmpeg can read: rtmp://, rtsp://, srt://, http:// (HLS). The receiver spawns ffmpeg as a subprocess.
Output Sinks
All outputs implement the OutputSink trait. Pass any sink to execute_frame(). Multiple sinks can be active simultaneously — call execute_frame() once per sink per frame.
```rust
// Multiple simultaneous outputs
runtime.execute_frame(&g, &plan, &configs, &ctx, &mut preview)?;
runtime.execute_frame(&g, &plan, &configs, &ctx, &mut syphon_sink)?;
```
Each execute_frame() call re-executes the render graph. For Syphon (zero-copy Metal texture sharing), this is effectively free. For FFmpeg (GPU readback), it adds a per-frame readback cost. Minimize FFmpeg sinks at high framerates.
Preview Window
A PreviewSink blits the render target to the winit window via a WGSL passthrough shader. Resize-aware — updates surface config on WindowEvent::Resized. See any template for the complete implementation pattern.
Syphon Output macOS
```rust
use scheng_output_syphon::SyphonSink;

let mut sink = SyphonSink::new("my-instrument")?;
// Now visible to Resolume, OBS, VDMX as "my-instrument"
runtime.execute_frame(&g, &plan, &configs, &ctx, &mut sink)?;
```
Zero-copy Metal texture sharing on Apple Silicon. The GPU texture is shared directly with the receiving application — no CPU readback, no copy.
NDI Output
```rust
use scheng_output_ndi::{NdiSink, NdiConfig};

let mut sink = NdiSink::new(NdiConfig {
    source_name: "my-instrument".into(),
    framerate_num: 30,
    framerate_den: 1,
    ..Default::default()
})?;
```
FFmpeg Output (RTMP / RTSP / File)
```rust
use scheng_output_ffmpeg::{FfmpegSink, FfmpegConfig, config::OutputTarget};

// RTMP stream → YouTube / Twitch / OBS ingest
let mut sink = FfmpegSink::new(FfmpegConfig {
    width: 1280, height: 720, framerate: 30,
    target: OutputTarget::Rtmp { url: "rtmp://a.rtmp.youtube.com/live2/STREAM_KEY".into() },
    ..Default::default()
})?;

// RTSP (requires mediamtx or similar)
let mut sink = FfmpegSink::new(FfmpegConfig {
    target: OutputTarget::Rtsp { url: "rtsp://localhost:8554/live".into() },
    ..Default::default()
})?;

// File recording — finalizes cleanly on instrument exit
let mut sink = FfmpegSink::new(FfmpegConfig {
    target: OutputTarget::File { path: "output.mp4".into(), overwrite: true },
    ..Default::default()
})?;
```
All FFmpeg output is automatically tagged with bt.709 colorspace metadata. See Colorspace.
Local RTMP/RTSP testing with mediamtx
```shell
brew install mediamtx

# Config: accept any stream key
printf "paths:\n all_others:\n" > /tmp/mtx.yml
mediamtx /tmp/mtx.yml

# Receive in VLC
vlc rtmp://localhost:1935/live/key
```
FrameCtx
The immutable per-frame execution context. The engine derives all temporal shader uniforms from this structure. The engine never owns or accumulates time — all temporal semantics originate here.
```rust
pub struct FrameCtx {
    pub width: u32,        // render width → uResolution.x
    pub height: u32,       // render height → uResolution.y
    pub time: f32,         // seconds → uTime (you control this)
    pub frame: u64,        // counter → uFrame (you increment this)
    pub sample_count: u32, // 1=off, 4=4xMSAA, 8=8xMSAA
}
```
Valid time patterns: monotonic wall-clock, manual scrub, looping domains, discontinuous jumps. The runtime does not interpret or constrain time — it forwards it to shaders verbatim.
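Because the runtime forwards time verbatim, any time domain works. A sketch with a locally defined FrameCtx (mirroring the struct above, sample_count omitted) driving a looping four-second domain:

```rust
/// Mirrors the FrameCtx shown above (sample_count omitted for brevity).
struct FrameCtx { width: u32, height: u32, time: f32, frame: u64 }

/// Build a ctx whose time loops over a fixed period — one valid temporal
/// pattern; monotonic wall-clock or manual scrub work the same way.
fn looping_ctx(frame: u64, fps: f32, period: f32) -> FrameCtx {
    let wall = frame as f32 / fps; // seconds since instrument start
    FrameCtx {
        width: 1280,
        height: 720,
        time: wall % period, // uTime wraps every `period` seconds
        frame,               // uFrame stays monotonic regardless
    }
}

fn main() {
    let a = looping_ctx(0, 60.0, 4.0);
    let b = looping_ctx(240, 60.0, 4.0); // exactly one 4 s period later at 60 fps
    assert_eq!(a.time, b.time);          // uTime wrapped back to 0.0
    assert!(b.frame > a.frame);          // frame counter never wraps
    let _ = (a.width, a.height);
    println!("ok");
}
```

Since the engine never accumulates time, rendering the same FrameCtx sequence twice produces the same uTime/uFrame inputs — the basis for offline replay.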
Changing width or height between frames triggers render target reallocation (GPU memory only, fast). Excessive resolution changes can cause frame drops.
Per-Frame Lifecycle
Every frame proceeds in exactly this order. The hot path avoids allocation, shader recompilation, and blocking I/O.
1. Poll inputs — Webcam.poll(), VideoDecoder.upload_frame(), Syphon.poll_with_device() upload the latest frame pixels to GPU textures from dedicated threads.
2. Step params — store.step_frame() advances the smoother: targets → values. MIDI/OSC threads write targets; this step moves values.
3. Build NodeConfigs — assemble HashMap<NodeId, NodeConfig>: shader source, uniforms from ParamStore, input textures from devices.
4. graph.compile() (if needed) — only required on topology change. For most instruments this is called once at startup and the plan is reused.
5. execute_frame() — one wgpu RenderPass per non-output node, in topological order. All passes are encoded into a single CommandEncoder.
6. queue.submit() — all GPU work is submitted as one batch. Output sinks are presented after submit, never before.
7. Present output sinks — sink.present() is called for each registered sink: preview window blit, Syphon texture share, NDI frame push, FFmpeg pipe write.
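The ordering can be made concrete with a stub tick() that only records each stage. The stage names track the lifecycle above; everything else is scaffolding, not scheng API:

```rust
/// Records each lifecycle stage in the order a tick() must run them.
fn tick(log: &mut Vec<&'static str>) {
    log.push("poll_inputs");       // webcam / video / syphon texture uploads
    log.push("step_params");       // ParamStore::step_frame()
    log.push("build_configs");     // HashMap<NodeId, NodeConfig>
    log.push("compile_if_needed"); // only on topology change
    log.push("execute_frame");     // one RenderPass per node, one encoder
    log.push("submit");            // queue.submit() — single batch
    log.push("present_sinks");     // sink.present(), strictly after submit
}

fn main() {
    let mut log = Vec::new();
    tick(&mut log);
    let pos = |s: &str| log.iter().position(|x| *x == s).unwrap();
    // The two orderings that break instruments when inverted:
    assert!(pos("step_params") < pos("build_configs")); // else uniforms lag a frame
    assert!(pos("submit") < pos("present_sinks"));      // else sinks show black
    println!("ok");
}
```

The two assertions correspond to the two most common lifecycle bugs covered in Troubleshooting: missing step_frame() before config assembly, and presenting before submit.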
Determinism
For identical Graph + NodeConfig + FrameCtx: execution order, resource bindings, and shader inputs are always identical. This makes scheng instruments replayable and debuggable offline.
The graph guarantees structural determinism at compile time. The runtime enforces execution determinism. Shader determinism depends on the shader itself — avoid sampling gl_FragCoord without normalizing, and avoid platform-specific GLSL extensions.
Threading Model
GPU render thread
Single threaded. Owns the wgpu Device and Queue. Runs the winit event loop. Calls execute_frame() and present().
Input/Output threads
MIDI callback, OSC UDP receiver, video decode, RTMP receive, NDI receive all run on dedicated threads. Communicate via Arc<Mutex<ParamStore>> or bounded channels.
The bounded channel pattern (try_send semantics) ensures the GPU render thread never blocks waiting for an input. When a source falls behind, frames are dropped gracefully.
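In std terms, the pattern is a sync_channel whose producer uses try_send and simply discards frames the consumer has not drained. The channel depth and frame type here are illustrative:

```rust
use std::sync::mpsc::{sync_channel, TrySendError};

fn main() {
    // Depth-1 bounded channel: decode thread → render thread (illustrative)
    let (tx, rx) = sync_channel::<Vec<u8>>(1);

    // Producer side: never block — drop the frame when the channel is full
    let mut dropped = 0;
    for i in 0..3u8 {
        match tx.try_send(vec![i; 4]) {
            Ok(()) => {}
            Err(TrySendError::Full(_frame)) => dropped += 1, // graceful drop
            Err(TrySendError::Disconnected(_)) => break,
        }
    }
    assert_eq!(dropped, 2); // one slot, so two of three frames were dropped

    // Consumer side (GPU thread): try_recv never blocks the render loop
    let latest = rx.try_recv().expect("one frame buffered");
    assert_eq!(latest, vec![0u8; 4]); // the frame that made it through
    println!("ok");
}
```

The same non-blocking discipline applies on the ParamStore path: MIDI/OSC callbacks only write targets under a short lock, so the render thread's step_frame() never waits on network or device I/O.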
Rgba16Float Pipeline
All internal render targets use Rgba16Float — 16-bit half-float precision per channel. Half floats resolve thousands of distinct levels across [0,1] instead of 8-bit's 256, eliminating the quantization banding that would otherwise accumulate across multiple shader passes.
| Stage | Format | Why |
|---|---|---|
| Internal render targets | Rgba16Float | 16-bit — zero banding across passes |
| Webcam input | Rgba8Unorm | Camera hardware delivers 8-bit |
| Video input | Rgba8Unorm | ffmpeg decodes to 8-bit RGBA |
| NDI input | Rgba8Unorm | NDI SDK delivers 8-bit |
| Syphon input | Rgba8Unorm | BGRA→RGBA swap in ObjC bridge |
| FFmpeg output | YUV420P | Broadcast standard; f16→u8 in readback |
| Preview display | sRGB surface | Auto-selected by wgpu for display |
bt.709 Colorspace
All FFmpeg output is automatically tagged with Rec.709 metadata. Without these flags, players and broadcast systems default to bt.601 (SD standard), causing visibly shifted colors on HD content.
```
// Automatically applied to all FfmpegSink output:
-colorspace bt709 -color_primaries bt709 -color_trc bt709 -color_range tv
```
MSAA Anti-Aliasing
Set sample_count in FrameCtx. The pipeline is keyed on (shader_hash, sample_count) — switching between values recompiles the pipeline once, then caches.
| sample_count | Quality | Memory cost | Best for |
|---|---|---|---|
| 1 | Off | Baseline | Organic / fluid shaders |
| 4 | 4× MSAA | ~2× | Geometric content, hard edges |
| 8 | 8× MSAA | ~4× | Highest quality, sufficient GPU budget |
4K / Resolution
Resolution is a FrameCtx parameter — change it between frames for dynamic reallocation. Render targets are allocated on demand at the first execute_frame() for a given size.
```shell
cargo run --release -- --width 3840 --height 2160 --msaa 4
```
Crate Map
This is a system ownership map, not an API reference. Core crates change slowly. Edge crates (I/O) evolve independently without breaking core guarantees.
Core
| Crate | Owns | Does NOT own |
|---|---|---|
| scheng-graph | Node instantiation, port definitions, topology validation, compile() | Shader compilation, GPU resources, execution |
| scheng-runtime-wgpu | WgpuRuntime, execute_frame, render targets, pipeline cache, CustomBlock | Topology mutation, control protocols, time |
| scheng-param-store | ParamStore, ParamSchema, smoothing, MIDI/OSC routing tables | Protocol parsing, device management |
| scheng-hotreload | AssetWatcher, file change events | Shader compilation (that's the runtime) |
Inputs
| Crate | Description |
|---|---|
| scheng-input-midi | MidiInput wrapper around midir (prefer using midir directly) |
| scheng-control-osc-wgpu | OscReceiver, UDP socket, address → ParamStore routing |
| scheng-input-webcam | Webcam open/poll, nokhwa, MJPEG/YUYV decode, GPU texture upload |
| scheng-input-video | VideoDecoder, ffmpeg-next, BICUBIC scaling, looping, texture_arc() |
| scheng-input-ndi | NdiReceiver, grafton-ndi, NewTek SDK |
| scheng-input-syphon | SyphonReceiver, ObjC Metal bridge, SyphonServerDirectory (macOS) |
| scheng-input-rtmp | RtmpReceiver, ffmpeg subprocess, bounded channel, any ffmpeg-readable URL |
Outputs
| Crate | Description |
|---|---|
| scheng-output-syphon | SyphonSink, Metal texture sharing, zero-copy (macOS) |
| scheng-output-ndi | NdiSink, NewTek SDK, frame push |
| scheng-output-spout | SpoutSink stub, DX texture sharing (Windows) — coming soon, C++ bridge in progress |
| scheng-output-ffmpeg | FfmpegSink, RTMP/RTSP/file, H.264, bt.709, ffmpeg subprocess |
CLI Flags
| Flag | Default | Description |
|---|---|---|
| --width N | 1280 | Render width in pixels |
| --height N | 720 | Render height in pixels |
| --msaa N | 1 | MSAA sample count — 1, 4, or 8 |
| --webcam N | — | Open camera at index N (0-based) |
| --list-cameras | — | Print available cameras and exit |
| --video path | — | Open video file for looping playback |
| --video-a path | — | Video file for channel A (video-mixer) |
| --video-b path | — | Video file for channel B (video-mixer) |
| --syphon-a name | — | Syphon receive source name, channel A |
| --syphon-b name | — | Syphon receive source name, channel B |
| --stream url | — | RTMP or RTSP output URL |
| --record path | — | Record to file (H.264 mp4) |
Known Constraints & Non-Goals
Understanding what the engine deliberately does not do is as important as understanding what it does. These are intentional architectural constraints — not bugs.
The engine does not own time
No play/pause, no internal frame clock, no transport logic. All temporal semantics are supplied via FrameCtx. This enables replayability and offline rendering.
The engine is protocol-agnostic
No OSC parsing, no MIDI handling, no network sockets in core crates. Control values enter only via NodeConfig.uniforms. Parsing belongs in the instrument.
Device management is external
No webcam enumeration, no capture APIs in the engine. The engine consumes textures (Arc<wgpu::Texture>). It does not manage hardware.
Node kinds must remain minimal
Prefer extending behavior through new shaders or NodeConfig parameters. Avoid adding node kinds unless a new structural execution role is genuinely required.
Testing
```shell
# All workspace tests — headless GPU, no window or display required
cargo test --workspace --exclude scheng-example-instrument

# Single crate
cargo test -p scheng-runtime-wgpu

# With stdout
cargo test -p scheng-runtime-wgpu --test headless -- --nocapture

# Shader library tests
cargo test -p scheng-runtime-wgpu --test shader_library
```
Tests use wgpu's headless Metal/Vulkan backend — no display server needed. For CI without a physical GPU, set WGPU_BACKEND=gl.
| Crate | Test type | What it covers |
|---|---|---|
| scheng-runtime-wgpu | GPU integration | Render, uniforms, feedback, custom u_* params |
| scheng-runtime-wgpu | Shader library | Crossfade, keyer, wipe transitions, solarize |
| scheng-graph | Unit | Graph compile, edge validation, cycle detection |
| scheng-param-store | Unit | Schema, MIDI CC mapping, smoothing arithmetic |
| scheng-output-ffmpeg | Integration | Config validation, ffmpeg args, file recording |
Troubleshooting
Library not loaded: Syphon.framework
rpath is missing. Verify build.rs has the rpath entries and that scheng/vendor/Syphon.framework exists. Never set DYLD_FRAMEWORK_PATH — rpath makes it unnecessary.
MIDI arrives but shader doesn't change
Almost always step_frame() is missing. MIDI writes to targets. The smoother only advances values when step_frame() is called. Both are required every tick.
```rust
store.step_frame(); // ← without this, store.get() always returns the default
```
Custom uniforms are always 0.0
Either (a) the value isn't in NodeConfig.uniforms, or (b) step_frame() isn't called. Check both.
```rust
cfg.uniforms.insert("u_threshold".into(), 0.5); // must exist
```
Webcam is upside down
Missing Y-flip. Always sample webcam textures with inverted V: vec2(v_uv.x, 1.0 - v_uv.y).
Webcam: no compatible format
Camera doesn't support MJPEG at requested resolution. Use --list-cameras to find valid indexes. Try FaceTime HD (usually index 1 on macOS) at 1280×720.
Syphon shows zero sources
Run loop not active yet. Defer SyphonReceiver init to frame 5+. See Syphon Receive.
RTMP: Connection refused
No RTMP server running. Start mediamtx with paths: all_others: config. See FFmpeg Output.
All pixels black
Output sinks must be called after queue.submit(). This is handled automatically by execute_frame() — don't call present() manually before submitting the command encoder.
Shader compile error on startup
Check the log for the GLSL error text. Common causes: GLSL 4.x features not supported by naga (use 3.3 features only), missing void main(), or undeclared variable. The error text includes line numbers relative to the user shader body.
Glossary
Canonical terminology. When these terms appear in SDK docs, they carry these precise meanings.
- Instrument
- A user-defined application built on the scheng engine. Owns graph construction, time, control input, device management, and output routing. The engine does not define instruments.
- Engine
- The collective core crates: scheng-graph, scheng-runtime-wgpu, scheng-param-store. Deterministic, execution-focused. Does not own time, control protocols, or devices.
- Graph
- The static, directed acyclic topology describing how nodes connect. Defines structure only — no GPU work. Compiled once into an ExecutionPlan.
- ExecutionPlan
- An immutable, topologically sorted plan produced by graph.compile(). Safe for repeated per-frame evaluation. Cannot be modified once compiled.
- Node
- A single unit of computation. Consumes and produces textures. Behavior is determined by GLSL shader source — node kind describes structural role only.
- NodeId
- An opaque identifier assigned by the Graph when add_node() is called. Stable for the lifetime of a graph instance. Used as key into NodeConfig maps.
- NodeConfig
- The complete per-node configuration surface for one frame: shader source, uniforms HashMap, input textures. Owned by the instrument, read by the runtime. Replaces the prior NodeProps concept.
- FrameCtx
- The immutable per-frame context: width, height, time, frame, sample_count. All temporal shader uniforms (uTime, uFrame, uResolution) derive from here. The engine never generates or mutates FrameCtx.
- ParamStore
- The live parameter state machine. MIDI/OSC write targets; step_frame() advances smoothed values toward targets; shaders read smoothed values via NodeConfig.uniforms.
- Shader Contract
- The explicit interface between runtime and GLSL programs: what the engine guarantees to inject (standard uniforms, iChannel bindings, CustomBlock) and what the shader must provide (void main(), valid fragColor output).
- CustomBlock
- A GLSL uniform block at GPU binding 6 containing all custom u_* uniform values, packed in declaration order and sourced from NodeConfig.uniforms.
- OutputSink
- A trait implementing present() — called after queue.submit() each frame. Implementations: PreviewSink, SyphonSink, NdiSink, FfmpegSink.
- Hot Path
- The per-frame execution path inside execute_frame(). Must avoid per-frame allocation, shader recompilation, and blocking I/O.
- Lazy Compilation
- Shader programs are compiled only on first use and cached per (shader_hash, sample_count). Recompilation triggers only when shader source changes.
- Topology vs Configuration
- Topology (Graph) is static after compile(). Configuration (NodeConfig) is dynamic and changes every frame. Never conflate them.