Developer Reference

scheng SDK

A Rust SDK for building GPU-accelerated video instruments. Write shader-based processors, mixers, and generators with professional I/O — MIDI, Syphon, NDI, RTMP, webcam, video, and more.

5-Minute Quickstart

Clone a template and get rendering on the GPU immediately.


Graph System

How nodes connect and how execution topology is declared.


I/O Reference

Every input and output protocol with copy-paste code.

What is scheng?

scheng is a Rust engine and SDK for building programmable video instruments — not a fixed VJ application. It gives you the building blocks: a graph execution model, explicit shader contracts, and a modular I/O surface. You wire them into your own executable.

The engine is deliberately minimal: it evaluates frames. It does not own time, schedule execution, or manage devices. Those responsibilities belong to the instrument — your application.

Engine responsibilities

Deterministic graph execution · Shader compilation and caching · GPU resource management · Render target allocation · Per-frame parameter binding

Instrument responsibilities

Graph construction · Time and transport · Control input (MIDI, OSC) · Device management · Output routing


5-Minute Quickstart

Use the scheng-gradient template — the simplest possible instrument. One shader, hot-reload, no I/O dependencies.

  1. Clone the template
    git clone https://github.com/yourusername/scheng-gradient
    cd scheng-gradient
  2. Place scheng/ next to it
    projects/
      scheng/            ← SDK workspace
      scheng-gradient/   ← your project (here)
  3. Run it
    cargo run --release

    A window opens rendering a GPU shader. You should see an animated gradient.

  4. Edit the shader live

    Open assets/shaders/main.frag in your editor. Save any change — the window updates instantly, no restart needed. This is hot-reload.

  5. Try 4K MSAA
    cargo run --release -- --width 3840 --height 2160 --msaa 4
TIP — Start from a template. All four templates are ready-to-run instruments demonstrating complete signal chains. In increasing complexity: scheng-gradient → scheng-processor → scheng-mixer → scheng-video-mixer.

Core Concepts

Six terms define the entire system. Learn these and everything else follows.

Instrument
Your application. It constructs the graph, supplies NodeConfigs and FrameCtx, and handles I/O. The engine does not define instruments — instruments define themselves around the engine.
Graph
A static, directed acyclic topology describing how nodes connect. The graph defines structure only — no GPU work happens here. It compiles into an immutable execution plan via graph.compile().
Node
A single unit of computation. Nodes consume and produce textures. Behavior is determined by the GLSL shader source supplied in NodeConfig, not by the node kind itself.
NodeConfig
The per-node configuration surface for each frame. Contains shader source, uniform values, and optional external textures. Owned by the instrument, read by the runtime. Never mutated by the engine.
FrameCtx
The immutable per-frame context: width, height, time, frame index. The engine derives shader uniforms uResolution, uTime, and uFrame from this. The engine never owns or accumulates time.
ParamStore
The live parameter hub. MIDI and OSC write targets; the smoother advances values toward targets each tick; shaders read smoothed values via NodeConfig.uniforms. Owned by the instrument, threadsafe.

Architecture

scheng is organized as five isolated layers. Each layer has a single responsibility. No layer reaches backward across a boundary. This isolation guarantees deterministic execution, clear debugging, and long-term SDK stability.

01 Graph Layer Declarative signal topology. Defines what connects to what. No GPU work.
02 Runtime Layer Frame planning and deterministic execution. Evaluates compiled plans.
03 Shader Contracts Explicit GPU interface. What a node promises — inputs, outputs, uniforms.
04 Control Inputs MIDI, OSC, keyboard. Updates parameters without touching graph structure.
05 Output Targets Sinks — Syphon, NDI, RTMP, preview. Consume final results, produce nothing upstream.

Key architectural principle

Topology is static. Configuration is dynamic. The graph structure (which nodes connect to which) is fixed after compile(). NodeConfig — shader source, uniforms, input textures — changes every frame. This separation is what makes execution deterministic and debuggable.

For identical Graph + NodeConfig + FrameCtx, execution order and resource bindings are always identical. Shader determinism depends on the shader code itself.

Instrument (graph + config) → Graph Layer (compile() → plan) → Runtime (execute_frame()) → GPU Backend (wgpu, Metal/DX12) → Output Sinks (Syphon · NDI · RTMP)
MIDI · OSC → ParamStore → NodeConfig.uniforms (injected each frame)

Graph System

The graph is a directed acyclic topology of nodes. It has no GPU concept. It defines what connects to what. The runtime layer handles how and when.

Build a graph once at startup. Compile it once. Reuse the plan every frame. The only thing that changes per frame is NodeConfig.

use scheng_graph::{Graph, NodeKind};

// 1. Declare topology
let mut g = Graph::new();
let src  = g.add_node(NodeKind::ShaderSource);
let fx   = g.add_node(NodeKind::ShaderPass);
let out  = g.add_node(NodeKind::PixelsOut);

// 2. Connect edges
g.connect_named(src, "out", fx,  "in").unwrap();
g.connect_named(fx,  "out", out, "in").unwrap();

// 3. Compile — validates topology, produces execution plan
let plan = g.compile().unwrap();

// 4. Execute — plan is reused every frame
runtime.execute_frame(&g, &plan, &configs, &ctx, &mut sink)?;

Node Kinds

Node kinds describe structural roles, not behavior. Behavior is provided by the GLSL shader source in NodeConfig.frag_shader.

| NodeKind      | Role                    | In ports               | Out    | Notes                                      |
|---------------|-------------------------|------------------------|--------|--------------------------------------------|
| ShaderSource  | Generator / source      | iChannel0–3 (optional) | out    | Entry point. No required upstream.         |
| ShaderPass    | Transformer / effect    | in                     | out    | Processes one upstream texture.            |
| Crossfade     | A/B mixer               | a, b                   | out    | Blends two sources via u_tbar.             |
| PreviousFrame | Temporal feedback       | in                     | out    | Reads last frame's output. Feedback loops. |
| PixelsOut     | Terminal / sink trigger | in                     | (none) | Triggers all registered output sinks.      |
DESIGN RULE Avoid introducing new node kinds. Prefer extending behavior through new shader sources or additional NodeConfig parameters. Node kinds should remain minimal — they define structural roles only.

Connections & Ports

Ports are named. connect_named(from_node, "out", to_node, "in") connects them. Port names map to iChannel bindings in the shader:

| Port name(s)       | Maps to              |
|--------------------|----------------------|
| in · in0 · a · src | iChannel0            |
| in1 · b · src1     | iChannel1            |
| in2                | iChannel2            |
| out                | Node's render target |
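The mapping above is small enough to sketch directly. This is illustrative only — channel_for is a hypothetical helper, not an SDK function:

```rust
// Illustrative sketch of the port-name → iChannel mapping table above.
fn channel_for(port: &str) -> Option<usize> {
    match port {
        "in" | "in0" | "a" | "src" => Some(0), // first upstream texture
        "in1" | "b" | "src1"       => Some(1),
        "in2"                      => Some(2),
        _                          => None, // "out" is the render target, not a channel
    }
}

fn main() {
    assert_eq!(channel_for("b"), Some(1));
    assert_eq!(channel_for("out"), None);
}
```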

Compilation

graph.compile() performs cycle detection, port validation, and deterministic topological sorting. It returns an immutable ExecutionPlan that can be safely reused across frames. Structural changes to the graph require recompilation.
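What compile() guarantees can be illustrated with a plain-Rust sketch of cycle detection plus deterministic topological sorting (Kahn's algorithm). This is illustrative, not the SDK's implementation — nodes are bare indices and edges are (from, to) pairs:

```rust
// Returns a topological order, or None when the graph contains a cycle.
fn topo_sort(n: usize, edges: &[(usize, usize)]) -> Option<Vec<usize>> {
    let mut indegree = vec![0usize; n];
    let mut adj = vec![Vec::new(); n];
    for &(from, to) in edges {
        adj[from].push(to);
        indegree[to] += 1;
    }
    // Seed with nodes that have no upstream dependencies.
    let mut ready: Vec<usize> = (0..n).filter(|&i| indegree[i] == 0).collect();
    let mut order = Vec::with_capacity(n);
    while let Some(node) = ready.pop() {
        order.push(node);
        for &next in &adj[node] {
            indegree[next] -= 1;
            if indegree[next] == 0 {
                ready.push(next);
            }
        }
    }
    // If a cycle exists, some nodes never reach indegree 0.
    (order.len() == n).then_some(order)
}

fn main() {
    assert_eq!(topo_sort(3, &[(0, 1), (1, 2)]), Some(vec![0, 1, 2]));
    assert_eq!(topo_sort(2, &[(0, 1), (1, 0)]), None); // cycle → rejected
}
```

Because the iteration order over nodes and edges is fixed, the resulting order is the same on every run — the structural half of the determinism guarantee.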

NodeConfig

The complete per-node configuration surface for one frame. Owned by the instrument, read by the runtime, never mutated by the engine.

pub struct NodeConfig {
    pub frag_shader:    Option<String>,              // GLSL source, None = builtin gradient
    pub uniforms:       HashMap<String, f32>,         // u_* values → CustomBlock
    pub output_name:    Option<String>,              // named output for external routing
    pub input_textures: [Option<Arc<wgpu::Texture>>; 4], // external textures (webcam etc.)
}

Changes to NodeConfig take effect on the next frame. Changing frag_shader triggers recompilation on that frame (lazy, cached per hash).
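The per-hash caching can be illustrated with a simple content hash. This is a std-only sketch, not the runtime's actual hashing:

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

// A content hash like the one that could key the lazy shader cache.
fn shader_hash(src: &str) -> u64 {
    let mut h = DefaultHasher::new();
    src.hash(&mut h);
    h.finish()
}

fn main() {
    let a = shader_hash("void main() { fragColor = vec4(1.0); }");
    let b = shader_hash("void main() { fragColor = vec4(0.0); }");
    // Same source → same hash → cache hit, no recompile.
    assert_eq!(a, shader_hash("void main() { fragColor = vec4(1.0); }"));
    // Edited source → hash mismatch → recompile on the next frame.
    assert_ne!(a, b);
}
```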


Shader Contract

Shaders are written in GLSL 330. The compat layer processes them before GPU compilation: injects the standard uniform block, rewrites iChannel sampler references, and packs custom u_* uniforms into a GPU buffer.

You never write layout qualifiers, binding numbers, or sampler declarations. The contract handles it.

Built-in Uniforms

Always available in every shader, no declaration needed:

| Uniform     | Type       | Description                                         |
|-------------|------------|-----------------------------------------------------|
| uTime       | float      | Seconds since instrument start (from FrameCtx.time) |
| uFrame      | uint       | Monotonic frame counter (from FrameCtx.frame)       |
| uResolution | vec2       | Output width and height in pixels                   |
| iChannel0   | sampler2D  | First upstream texture (or external texture)        |
| iChannel1   | sampler2D  | Second upstream texture                             |
| iChannel2   | sampler2D  | Third upstream texture                              |
| iChannel3   | sampler2D  | Fourth upstream texture                             |
| v_uv        | vec2 (in)  | Interpolated UV coordinates [0,1]                   |
| fragColor   | vec4 (out) | Output pixel color — write this                     |

Minimal shader

// assets/shaders/main.frag
void main() {
    vec2  uv  = v_uv;
    float t   = uTime;
    vec3  col = 0.5 + 0.5 * cos(t + uv.xyx + vec3(0,2,4));
    fragColor = vec4(col, 1.0);
}

Custom Uniforms

Declare any uniform float u_name; in your shader. The compat layer detects it, strips the declaration, and injects a CustomBlock GLSL uniform block at GPU binding 6. The values come from NodeConfig.uniforms.

// In shader — declare custom uniforms freely:
uniform float u_threshold;  // solarize threshold
uniform float u_mix;        // wet/dry blend

void main() {
    vec4 src   = texture(iChannel0, v_uv);
    vec3 inv   = 1.0 - src.rgb;
    vec3 solar = mix(src.rgb, inv, step(u_threshold, src.rgb));
    fragColor  = vec4(mix(src.rgb, solar, u_mix), 1.0);
}
// In Rust — supply values via NodeConfig.uniforms:
let mut cfg = NodeConfig::default();
cfg.frag_shader = Some(shader_source.clone());  // frag_shader is Option<String>
cfg.uniforms.insert("u_threshold".into(), 0.5);
cfg.uniforms.insert("u_mix".into(), 1.0);
GOTCHA — Values packed in declaration order. Custom uniform values are packed into the GPU buffer in the order the compat layer discovers them in the shader source. Always insert values via the name key — don't rely on HashMap ordering.
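The gotcha can be made concrete with a sketch of declaration-order packing. This is illustrative, not the compat layer's code — names come from the shader source in order, and values are looked up by name:

```rust
use std::collections::HashMap;

// Pack u_* values into buffer order as declared in the shader source.
fn pack_custom_uniforms(shader_src: &str, values: &HashMap<String, f32>) -> Vec<f32> {
    shader_src
        .lines()
        .filter_map(|line| {
            line.trim()
                .strip_prefix("uniform float ")
                .and_then(|rest| rest.split(';').next())
                .map(|name| name.trim().to_string())
        })
        .map(|name| values.get(&name).copied().unwrap_or(0.0)) // missing → 0.0
        .collect()
}

fn main() {
    let src = "uniform float u_threshold;\nuniform float u_mix;\nvoid main() {}";
    let mut values = HashMap::new();
    values.insert("u_mix".to_string(), 1.0);
    values.insert("u_threshold".to_string(), 0.5);
    // Buffer order follows the shader, not HashMap insertion order.
    assert_eq!(pack_custom_uniforms(src, &values), vec![0.5, 1.0]);
}
```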

Hot-Reload

Use AssetWatcher to watch your shader directory. When any file changes, drain the watcher, reload the source, and assign it to NodeConfig.frag_shader. The pipeline recompiles lazily on the next execute_frame().

use scheng_hotreload::watcher::AssetWatcher;

let mut watcher = AssetWatcher::new("assets").unwrap();

// In tick():
if !watcher.drain().is_empty() {
    self.shader_src = std::fs::read_to_string("assets/shaders/main.frag").ok();
    // No pipeline.clear() needed — hash mismatch triggers recompile automatically
    log::info!("Shader reloaded");
}

ParamStore

ParamStore is the central parameter state machine. MIDI and OSC threads write targets. Each tick, step_frame() advances smoothed values toward targets. Shaders read smoothed values through NodeConfig.uniforms.

MIDI thread (CC1 = 64) → ParamStore (targets["u_tbar"] = 0.5) → step_frame() → values → NodeConfig (uniforms["u_tbar"]) → GLSL shader (uniform float u_tbar)

Defining a schema

use scheng_param_store::{ParamStore, ParamSchema, schema::ParamDef};

let store = ParamStore::new(ParamSchema {
    version: 1,
    params: vec![
        ParamDef {
            name:         "u_tbar".into(),
            ty:           "float".into(),
            min:          0.0,  max: 1.0,  default: 0.0,
            smooth:       0.05,         // 0=instant, higher=slower
            midi_cc:      Some(1),       // CC1 → u_tbar
            midi_channel: None,           // None = any channel
            osc_addr:     Some("/scheng/tbar".into()),
            node_label:   None,
            description:  None,
        },
    ],
});

Reading values each tick

// ALWAYS call step_frame() first — this is the smoother
let uniforms = {
    let mut store = param_store.lock().unwrap();
    store.step_frame();               // ← REQUIRED. Without this, MIDI does nothing visible.
    store.all_values().clone()        // HashMap<String, f32> — all params
};

// Assign to NodeConfig — params reach shader automatically
cfg.uniforms = uniforms;

MIDI Input

Use midir directly — it gives the most control and matches the shadecore pattern. Open in resumed(), store the live connection in your instrument struct.

use midir::{MidiInput, Ignore};

// In resumed():
let mut midi = MidiInput::new("my-instrument").ok()?;
midi.ignore(Ignore::None);
let ports = midi.ports();
for p in &ports {
    log::info!("MIDI port: '{}'", midi.port_name(p).unwrap_or_default());
}
let port = ports.into_iter().next()?;
let name = midi.port_name(&port).unwrap_or_default();
let store_clone = param_store.clone();  // Arc<Mutex<ParamStore>>, moved into the callback
let conn = midi.connect(&port, "midi-in", move |_ts, msg, _| {
    if msg.len() == 3 && (msg[0] & 0xF0) == 0xB0 {
        // CC message: channel = msg[0] & 0x0F, cc = msg[1], val = msg[2]
        log::info!("[MIDI] CC{} = {}", msg[1], msg[2]);
        store_clone.lock().unwrap().set_by_midi_cc(msg[1], msg[2]).ok();
    }
}, ()).ok()?;
log::info!("MIDI connected: '{}'", name);
macOS — Enable IAC Driver: Audio MIDI Setup → Window → Show MIDI Studio → IAC Driver → tick "Device is online". This enables software MIDI routing between MaxMSP, Ableton, and scheng.
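The CC filter inside the callback above can be factored out as a plain function. This is an illustrative sketch — parse_cc is a hypothetical name, not SDK code:

```rust
// Decode a 3-byte MIDI Control Change message (status 0xB0–0xBF).
// Returns (channel, cc number, value normalized to 0.0..1.0).
fn parse_cc(msg: &[u8]) -> Option<(u8, u8, f32)> {
    if msg.len() == 3 && (msg[0] & 0xF0) == 0xB0 {
        let channel = msg[0] & 0x0F;
        let cc = msg[1];
        let norm = msg[2] as f32 / 127.0; // MIDI data bytes are 0..127
        Some((channel, cc, norm))
    } else {
        None
    }
}

fn main() {
    assert_eq!(parse_cc(&[0xB0, 1, 127]), Some((0, 1, 1.0))); // CC1, channel 1, full
    assert_eq!(parse_cc(&[0x90, 60, 100]), None);             // Note On — not a CC
}
```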

OSC Input

use scheng_control_osc_wgpu::OscReceiver;

let osc = OscReceiver::bind("127.0.0.1:9000").unwrap();

// In tick() — drains any pending OSC messages into the store:
osc.poll(&mut *store.lock().unwrap());

OSC address format: /scheng/u_param_name, or whatever you set in ParamDef.osc_addr.

Smoothing Reference

| smooth | Behavior                             | Frames to target | Best for                  |
|--------|--------------------------------------|------------------|---------------------------|
| 0.0    | Instant — target = value immediately | 1                | Buttons, gates, triggers  |
| 0.05   | Fast                                 | ~20              | Faders, most CC controls  |
| 0.10   | Medium                               | ~40              | Gradual parameter shifts  |
| 0.15   | Slow                                 | ~60              | Ambient color, atmosphere |
| 0.50   | Very slow drift                      | ~200+            | Auto-evolving values      |

The per-frame update is value = value + (target - value) * (1.0 - smooth), applied once per tick at the target framerate.
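As a std-only sketch of that update rule (illustrative, not the ParamStore implementation):

```rust
// One smoother step: move value toward target by a factor of (1.0 - smooth).
fn step(value: f32, target: f32, smooth: f32) -> f32 {
    value + (target - value) * (1.0 - smooth)
}

fn main() {
    // smooth = 0.0 snaps to the target in one frame — buttons and gates.
    assert_eq!(step(0.0, 1.0, 0.0), 1.0);

    // smooth > 0.0 approaches the target exponentially over several frames.
    let mut v = 0.0_f32;
    for _ in 0..3 {
        v = step(v, 1.0, 0.5);
    }
    assert!(v > 0.8 && v < 1.0);
    println!("after 3 frames: {v:.3}");
}
```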


Webcam Input

use scheng_input_webcam::Webcam;

// List available cameras
let cams = Webcam::list_cameras();
for (i, name) in cams.iter().enumerate() {
    log::info!("Camera [{}]: {}", i, name);
}

// Open — nokhwa scales to render resolution internally
let cam = Webcam::open(1, width, height, &device, &queue)
    .map(|c| { log::info!("Webcam {}×{}", c.width(), c.height()); c })?;

// Each tick — poll new frame, inject as iChannel0
cam.poll(&queue);
cfg.input_textures[0] = cam.texture_arc();
cfg.frag_shader = Some("void main() {
    fragColor = texture(iChannel0, vec2(v_uv.x, 1.0 - v_uv.y));
}".into());
Y-FLIP REQUIRED Webcam frames are vertically inverted relative to GPU UV space. Always use vec2(v_uv.x, 1.0 - v_uv.y) when sampling webcam textures, or the image will appear upside down.

Run with --list-cameras to discover indexes. On macOS, camera index 0 is often OBS Virtual Camera; FaceTime HD is typically index 1.

Video File Input

use scheng_input_video::VideoDecoder;

let mut vid = VideoDecoder::open("clip.mp4", &device, &queue)?;
log::info!("{}×{} @ {:.1}fps", vid.width(), vid.height(), vid.fps());

// Each tick — upload frame at current time (loops automatically)
vid.upload_frame(time_secs, &queue);
cfg.input_textures[0] = vid.texture_arc();
cfg.frag_shader = Some(passthrough_yflip.into());

Supported: any container or codec ffmpeg can decode — mp4, mov, avi, mkv, ProRes, H.264, H.265. Frames are scaled with BICUBIC filtering. Playback loops at end of file.

Syphon Receive macOS

use scheng_input_syphon::SyphonReceiver;

// Deferred init — call from frame 5+ (run loop must be active)
if self.frame >= 5 && !self.syphon_initialized {
    self.syphon_initialized = true;
    let servers = SyphonReceiver::list_servers(mtl_ptr);
    for s in &servers { log::info!("{} from {}", s.name, s.app); }
    self.recv = SyphonReceiver::connect("OBS", mtl_ptr, &device, &queue).ok();
}

// Each tick:
if let Some(ref mut recv) = self.recv {
    recv.poll_with_device(&device, &queue);
    cfg.input_textures[0] = recv.texture_arc();
}
DEFERRED INIT REQUIRED The SyphonServerDirectory needs the macOS run loop active to discover sources. Querying it during resumed() — before the first about_to_wait() fires — yields zero results. Always defer to frame 5 or later.

NDI Receive

use scheng_input_ndi::NdiReceiver;

let sources = NdiReceiver::find_sources(2000)?;
let mut recv = NdiReceiver::connect(&sources[0].name, &device, &queue)?;

// Each tick:
if recv.poll(&device, &queue) {
    cfg.input_textures[0] = recv.texture_arc();
}

RTMP / RTSP Receive

use scheng_input_rtmp::RtmpReceiver;

let mut recv = RtmpReceiver::open(
    "rtmp://localhost:1935/live/key",
    width, height, &device, &queue
)?;

// Each tick:
recv.poll(&device, &queue);
cfg.input_textures[0] = recv.texture_arc();

Any URL ffmpeg can read: rtmp://, rtsp://, srt://, http:// (HLS). The receiver spawns ffmpeg as a subprocess.


Output Sinks

All outputs implement the OutputSink trait. Pass any sink to execute_frame(). Multiple sinks can be active simultaneously — call execute_frame() once per sink per frame.

// Multiple simultaneous outputs
runtime.execute_frame(&g, &plan, &configs, &ctx, &mut preview)?;
runtime.execute_frame(&g, &plan, &configs, &ctx, &mut syphon_sink)?;
PERFORMANCE NOTE Each execute_frame() call re-executes the render graph. For Syphon (zero-copy Metal texture sharing), this is effectively free. For FFmpeg (GPU readback), it adds a frame readback cost. Minimize FFmpeg sinks at high framerates.
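The sink pattern can be sketched with a stand-in trait. The real OutputSink lives in the SDK — the trait shape, present() signature, and CountingSink below are assumptions for illustration:

```rust
// Hypothetical stand-in for the OutputSink pattern described above.
trait OutputSink {
    fn present(&mut self); // invoked once per sink, per frame, after queue.submit()
}

struct CountingSink {
    frames: u64,
}

impl OutputSink for CountingSink {
    fn present(&mut self) {
        self.frames += 1; // a real sink would blit, share, or pipe the frame here
    }
}

fn main() {
    // Two sinks active simultaneously — each is presented every frame.
    let mut preview = CountingSink { frames: 0 };
    let mut syphon = CountingSink { frames: 0 };
    for _ in 0..60 {
        preview.present();
        syphon.present();
    }
    assert_eq!(preview.frames, 60);
    assert_eq!(syphon.frames, 60);
}
```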

Preview Window

A PreviewSink blits the render target to the winit window via a WGSL passthrough shader. Resize-aware — updates surface config on WindowEvent::Resized. See any template for the complete implementation pattern.

Syphon Output macOS

use scheng_output_syphon::SyphonSink;

let mut sink = SyphonSink::new("my-instrument")?;
// Now visible to Resolume, OBS, VDMX as "my-instrument"

runtime.execute_frame(&g, &plan, &configs, &ctx, &mut sink)?;

Zero-copy Metal texture sharing on Apple Silicon. The GPU texture is shared directly with the receiving application — no CPU readback, no copy.

NDI Output

use scheng_output_ndi::{NdiSink, NdiConfig};

let mut sink = NdiSink::new(NdiConfig {
    source_name:   "my-instrument".into(),
    framerate_num: 30,
    framerate_den: 1,
    ..Default::default()
})?;

FFmpeg Output (RTMP / RTSP / File)

use scheng_output_ffmpeg::{FfmpegSink, FfmpegConfig, config::OutputTarget};

// RTMP stream → YouTube / Twitch / OBS ingest
let mut sink = FfmpegSink::new(FfmpegConfig {
    width: 1280, height: 720, framerate: 30,
    target: OutputTarget::Rtmp { url: "rtmp://a.rtmp.youtube.com/live2/STREAM_KEY".into() },
    ..Default::default()
})?;

// RTSP (requires mediamtx or similar)
let mut sink = FfmpegSink::new(FfmpegConfig {
    target: OutputTarget::Rtsp { url: "rtsp://localhost:8554/live".into() },
    ..Default::default()
})?;

// File recording — finalizes cleanly on instrument exit
let mut sink = FfmpegSink::new(FfmpegConfig {
    target: OutputTarget::File { path: "output.mp4".into(), overwrite: true },
    ..Default::default()
})?;

All FFmpeg output is automatically tagged with bt.709 colorspace metadata. See Colorspace.

Local RTMP/RTSP testing with mediamtx

brew install mediamtx

# Config: accept any stream key
printf "paths:\n  all_others:\n" > /tmp/mtx.yml
mediamtx /tmp/mtx.yml

# Receive in VLC
vlc rtmp://localhost:1935/live/key

FrameCtx

The immutable per-frame execution context. The engine derives all temporal shader uniforms from this structure. The engine never owns or accumulates time — all temporal semantics originate here.

pub struct FrameCtx {
    pub width:        u32,   // render width → uResolution.x
    pub height:       u32,   // render height → uResolution.y
    pub time:         f32,   // seconds → uTime (you control this)
    pub frame:        u64,   // counter → uFrame (you increment this)
    pub sample_count: u32,   // 1=off, 4=4xMSAA, 8=8xMSAA
}

Valid time patterns: monotonic wall-clock, manual scrub, looping domains, discontinuous jumps. The runtime does not interpret or constrain time — it forwards it to shaders verbatim.
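Since the runtime forwards time verbatim, the instrument can implement any policy. Two illustrative sketches (function names are hypothetical, not from the SDK):

```rust
// Looping domain: time wraps every `period` seconds, e.g. for seamless loops.
fn looping(elapsed_secs: f32, period: f32) -> f32 {
    elapsed_secs % period
}

// Offline-render scrub: derive time from a frame index at a fixed framerate,
// giving a fully deterministic FrameCtx.time regardless of wall-clock.
fn scrub(frame: u64, fps: f32) -> f32 {
    frame as f32 / fps
}

fn main() {
    assert!((looping(7.5, 4.0) - 3.5).abs() < 1e-6);
    assert!((scrub(90, 30.0) - 3.0).abs() < 1e-6);
}
```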

Changing width or height between frames triggers render target reallocation (GPU memory only, fast). Excessive resolution changes can cause frame drops.

Per-Frame Lifecycle

Every frame proceeds in exactly this order. The hot path avoids allocation, shader recompilation, and blocking I/O.

  1. Poll inputs
    Webcam.poll(), VideoDecoder.upload_frame(), Syphon.poll_with_device() — upload latest frame pixels to GPU textures on dedicated threads.
  2. Step params
    store.step_frame() — advance smoother: targets → values. MIDI/OSC threads write targets; this step moves values.
  3. Build NodeConfigs
    Assemble HashMap<NodeId, NodeConfig> — shader source, uniforms from ParamStore, input textures from devices.
  4. graph.compile() (if needed)
    Only required on topology change. For most instruments this is called once at startup and the plan is reused.
  5. execute_frame()
    One wgpu RenderPass per non-output node, in topological order. All passes encoded into a single CommandEncoder.
  6. queue.submit()
    All GPU work submitted as one batch. Output sinks are presented after submit — never before.
  7. Present output sinks
    sink.present() called for each registered sink: preview window blit, Syphon texture share, NDI frame push, FFmpeg pipe write.

Determinism

For identical Graph + NodeConfig + FrameCtx: execution order, resource bindings, and shader inputs are always identical. This makes scheng instruments replayable and debuggable offline.

The graph guarantees structural determinism at compile time. The runtime enforces execution determinism. Shader determinism depends on the shader itself — avoid sampling gl_FragCoord without normalizing, and avoid platform-specific GLSL extensions.

Threading Model

GPU render thread

Single threaded. Owns the wgpu Device and Queue. Runs the winit event loop. Calls execute_frame() and present().

Input/Output threads

MIDI callback, OSC UDP receiver, video decode, RTMP receive, NDI receive all run on dedicated threads. Communicate via Arc<Mutex<ParamStore>> or bounded channels.

The bounded channel pattern (try_send semantics) ensures the GPU render thread never blocks waiting for an input. When a source falls behind, frames are dropped gracefully.
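The try_send pattern is plain std Rust. A minimal sketch:

```rust
use std::sync::mpsc::{sync_channel, TrySendError};

fn main() {
    // Bounded channel, capacity 1: the producer (e.g. a decode thread) never
    // blocks. When the consumer falls behind, new frames are dropped.
    let (tx, rx) = sync_channel::<u64>(1);
    tx.try_send(1).unwrap(); // buffer now full
    match tx.try_send(2) {
        Err(TrySendError::Full(frame)) => println!("dropped frame {frame}"),
        _ => unreachable!("capacity-1 buffer was already full"),
    }
    // The render thread still receives the oldest buffered frame.
    assert_eq!(rx.recv().unwrap(), 1);
}
```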


Rgba16Float Pipeline

All internal render targets use Rgba16Float — 16-bit half-float precision per channel. This gives 65,536 distinct values per channel instead of 256 (8-bit), eliminating quantization banding that would otherwise accumulate across multiple shader passes.

| Stage                   | Format      | Why                                    |
|-------------------------|-------------|----------------------------------------|
| Internal render targets | Rgba16Float | 16-bit — zero banding across passes    |
| Webcam input            | Rgba8Unorm  | Camera hardware delivers 8-bit         |
| Video input             | Rgba8Unorm  | ffmpeg decodes to 8-bit RGBA           |
| NDI input               | Rgba8Unorm  | NDI SDK delivers 8-bit                 |
| Syphon input            | Rgba8Unorm  | BGRA→RGBA swap in ObjC bridge          |
| FFmpeg output           | YUV420P     | Broadcast standard; f16→u8 in readback |
| Preview displays        | RGB surface | Auto-selected by wgpu for display      |

bt.709 Colorspace

All FFmpeg output is automatically tagged with Rec.709 metadata. Without these flags, players and broadcast systems default to bt.601 (SD standard), causing visibly shifted colors on HD content.

// Automatically applied to all FfmpegSink output:
-colorspace bt709 -color_primaries bt709 -color_trc bt709 -color_range tv

MSAA Anti-Aliasing

Set sample_count in FrameCtx. The pipeline is keyed on (shader_hash, sample_count) — switching between values recompiles the pipeline once, then caches.

| sample_count | Quality | Memory cost | Best for                               |
|--------------|---------|-------------|----------------------------------------|
| 1            | Off     | Baseline    | Organic / fluid shaders                |
| 4            | 4× MSAA | ~2×         | Geometric content, hard edges          |
| 8            | 8× MSAA | ~4×         | Highest quality, sufficient GPU budget |
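The lazy, keyed caching described above can be sketched with a HashMap. A String stands in for a compiled pipeline — this is illustrative, not the runtime's code:

```rust
use std::collections::HashMap;

// Pipeline cache keyed on (shader_hash, sample_count).
struct PipelineCache {
    cache: HashMap<(u64, u32), String>,
    compiles: u32, // counts how many pipelines were actually built
}

impl PipelineCache {
    fn get_or_compile(&mut self, shader_hash: u64, sample_count: u32) -> &String {
        let compiles = &mut self.compiles;
        self.cache
            .entry((shader_hash, sample_count))
            .or_insert_with(|| {
                *compiles += 1; // a real cache builds a wgpu pipeline here
                format!("pipeline({shader_hash}, {sample_count}x)")
            })
    }
}

fn main() {
    let mut cache = PipelineCache { cache: HashMap::new(), compiles: 0 };
    cache.get_or_compile(42, 1);
    cache.get_or_compile(42, 1); // hit — no recompile
    cache.get_or_compile(42, 4); // new sample_count → one recompile
    assert_eq!(cache.compiles, 2);
}
```

Switching MSAA back and forth is therefore cheap after the first frame at each setting: both keyed pipelines stay resident in the cache.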

4K / Resolution

Resolution is a FrameCtx parameter — change it between frames for dynamic reallocation. Render targets are allocated on demand at the first execute_frame() for a given size.

cargo run --release -- --width 3840 --height 2160 --msaa 4

Crate Map

This is a system ownership map, not an API reference. Core crates change slowly. Edge crates (I/O) evolve independently without breaking core guarantees.

Core

| Crate               | Owns                                                                    | Does NOT own                                     |
|---------------------|-------------------------------------------------------------------------|--------------------------------------------------|
| scheng-graph        | Node instantiation, port definitions, topology validation, compile()    | Shader compilation, GPU resources, execution     |
| scheng-runtime-wgpu | WgpuRuntime, execute_frame, render targets, pipeline cache, CustomBlock | Topology mutation, control protocols, time       |
| scheng-param-store  | ParamStore, ParamSchema, smoothing, MIDI/OSC routing tables             | Protocol parsing, device management              |
| scheng-hotreload    | AssetWatcher, file change events                                        | Shader compilation (that's the runtime)          |

Inputs

| Crate                   | Description                                                              |
|-------------------------|--------------------------------------------------------------------------|
| scheng-input-midi       | MidiInput wrapper around midir (prefer using midir directly)             |
| scheng-control-osc-wgpu | OscReceiver, UDP socket, address → ParamStore routing                    |
| scheng-input-webcam     | Webcam open/poll, nokhwa, MJPEG/YUYV decode, GPU texture upload          |
| scheng-input-video      | VideoDecoder, ffmpeg-next, BICUBIC scaling, looping, texture_arc()       |
| scheng-input-ndi        | NdiReceiver, grafton-ndi, NewTek SDK                                     |
| scheng-input-syphon     | SyphonReceiver, ObjC Metal bridge, SyphonServerDirectory (macOS)         |
| scheng-input-rtmp       | RtmpReceiver, ffmpeg subprocess, bounded channel, any ffmpeg-readable URL |

Outputs

| Crate                | Description                                                                     |
|----------------------|---------------------------------------------------------------------------------|
| scheng-output-syphon | SyphonSink, Metal texture sharing, zero-copy (macOS)                            |
| scheng-output-ndi    | NdiSink, NewTek SDK, frame push                                                 |
| scheng-output-spout  | SpoutSink stub, DX texture sharing (Windows) — coming soon, C++ bridge in progress |
| scheng-output-ffmpeg | FfmpegSink, RTMP/RTSP/file, H.264, bt.709, ffmpeg subprocess                    |

CLI Flags

| Flag            | Default | Description                             |
|-----------------|---------|-----------------------------------------|
| --width N       | 1280    | Render width in pixels                  |
| --height N      | 720     | Render height in pixels                 |
| --msaa N        | 1       | MSAA sample count — 1, 4, or 8          |
| --webcam N      | (none)  | Open camera at index N (0-based)        |
| --list-cameras  | (none)  | Print available cameras and exit        |
| --video path    | (none)  | Open video file for looping playback    |
| --video-a path  | (none)  | Video file for channel A (video-mixer)  |
| --video-b path  | (none)  | Video file for channel B (video-mixer)  |
| --syphon-a name | (none)  | Syphon receive source name, channel A   |
| --syphon-b name | (none)  | Syphon receive source name, channel B   |
| --stream url    | (none)  | RTMP or RTSP output URL                 |
| --record path   | (none)  | Record to file (H.264 mp4)              |

Known Constraints & Non-Goals

Understanding what the engine deliberately does not do is as important as understanding what it does. These are intentional architectural constraints — not bugs.

The engine does not own time

No play/pause, no internal frame clock, no transport logic. All temporal semantics are supplied via FrameCtx. This enables replayability and offline rendering.

The engine is protocol-agnostic

No OSC parsing, no MIDI handling, no network sockets in core crates. Control values enter only via NodeConfig.uniforms. Parsing belongs in the instrument.

Device management is external

No webcam enumeration, no capture APIs in the engine. The engine consumes textures (Arc<wgpu::Texture>). It does not manage hardware.

Node kinds must remain minimal

Prefer extending behavior through new shaders or NodeConfig parameters. Avoid adding node kinds unless a new structural execution role is genuinely required.

THE PATTERN When ownership is unclear, ask: does this belong in the graph (topology), NodeConfig (per-frame config), runtime (execution), or the instrument layer (everything else)? If it's "everything else," it belongs in the instrument.

Testing

# All workspace tests — headless GPU, no window or display required
cargo test --workspace --exclude scheng-example-instrument

# Single crate
cargo test -p scheng-runtime-wgpu

# With stdout
cargo test -p scheng-runtime-wgpu --test headless -- --nocapture

# Shader library tests
cargo test -p scheng-runtime-wgpu --test shader_library

Tests use wgpu's headless Metal/Vulkan backend — no display server needed. For CI without a physical GPU, set WGPU_BACKEND=gl.

| Crate               | Test type       | What it covers                                 |
|---------------------|-----------------|------------------------------------------------|
| scheng-runtime-wgpu | GPU integration | Render, uniforms, feedback, custom u_* params  |
| scheng-runtime-wgpu | Shader library  | Crossfade, keyer, wipe transitions, solarize   |
| scheng-graph        | Unit            | Graph compile, edge validation, cycle detection |
| scheng-param-store  | Unit            | Schema, MIDI CC mapping, smoothing arithmetic  |
| scheng-output-ffmpeg | Integration    | Config validation, ffmpeg args, file recording |

Troubleshooting

Library not loaded: Syphon.framework

rpath is missing. Verify build.rs has the rpath entries and that scheng/vendor/Syphon.framework exists. Never set DYLD_FRAMEWORK_PATH — rpath makes it unnecessary.

MIDI arrives but shader doesn't change

Almost always step_frame() is missing. MIDI writes to targets. The smoother only advances values when step_frame() is called. Both are required every tick.

store.step_frame();  // ← without this, store.get() always returns the default

Custom uniforms are always 0.0

Either (a) the value isn't in NodeConfig.uniforms, or (b) step_frame() isn't called. Check both.

cfg.uniforms.insert("u_threshold".into(), 0.5);  // must exist

Webcam is upside down

Missing Y-flip. Always sample webcam textures with inverted V: vec2(v_uv.x, 1.0 - v_uv.y).

Webcam: no compatible format

Camera doesn't support MJPEG at requested resolution. Use --list-cameras to find valid indexes. Try FaceTime HD (usually index 1 on macOS) at 1280×720.

Syphon shows zero sources

Run loop not active yet. Defer SyphonReceiver init to frame 5+. See Syphon Receive.

RTMP: Connection refused

No RTMP server running. Start mediamtx with paths: all_others: config. See FFmpeg Output.

All pixels black

Output sinks must be called after queue.submit(). This is handled automatically by execute_frame() — don't call present() manually before submitting the command encoder.

Shader compile error on startup

Check the log for the GLSL error text. Common causes: GLSL 4.x features not supported by naga (use 3.3 features only), missing void main(), or undeclared variable. The error text includes line numbers relative to the user shader body.


Glossary

Canonical terminology. When these terms appear in SDK docs, they carry these precise meanings.

Instrument
A user-defined application built on the scheng engine. Owns graph construction, time, control input, device management, and output routing. The engine does not define instruments.
Engine
The collective core crates: scheng-graph, scheng-runtime-wgpu, scheng-param-store. Deterministic, execution-focused. Does not own time, control protocols, or devices.
Graph
The static, directed acyclic topology describing how nodes connect. Defines structure only — no GPU work. Compiled once into an ExecutionPlan.
ExecutionPlan
An immutable, topologically sorted plan produced by graph.compile(). Safe for repeated per-frame evaluation. Cannot be modified once compiled.
Node
A single unit of computation. Consumes and produces textures. Behavior is determined by GLSL shader source — node kind describes structural role only.
NodeId
An opaque identifier assigned by the Graph when add_node() is called. Stable for the lifetime of a graph instance. Used as key into NodeConfig maps.
NodeConfig
The complete per-node configuration surface for one frame: shader source, uniforms HashMap, input textures. Owned by the instrument, read by the runtime. Replaces the prior NodeProps concept.
FrameCtx
The immutable per-frame context: width, height, time, frame, sample_count. All temporal shader uniforms (uTime, uFrame, uResolution) derive from here. The engine never generates or mutates FrameCtx.
ParamStore
The live parameter state machine. MIDI/OSC write targets; step_frame() advances smoothed values toward targets; shaders read smoothed values via NodeConfig.uniforms.
Shader Contract
The explicit interface between runtime and GLSL programs: what the engine guarantees to inject (standard uniforms, iChannel bindings, CustomBlock) and what the shader must provide (void main(), valid fragColor output).
CustomBlock
A GLSL uniform block at GPU binding 6 containing all custom u_* uniform values, packed in declaration order and sourced from NodeConfig.uniforms.
OutputSink
A trait implementing present() — called after queue.submit() each frame. Implementations: PreviewSink, SyphonSink, NdiSink, FfmpegSink.
Hot Path
The per-frame execution path inside execute_frame(). Must avoid per-frame allocation, shader recompilation, and blocking I/O.
Lazy Compilation
Shader programs are compiled only on first use and cached per (shader_hash, sample_count). Recompilation triggers only when shader source changes.
Topology vs Configuration
Topology (Graph) is static after compile(). Configuration (NodeConfig) is dynamic and changes every frame. Never conflate them.