Business Wire
Published on: Nov 7, 2025
Daydream, the open-source community hub for real-time AI video and world models, just took a major step toward unifying one of the most fragmented corners of AI. The company announced two significant releases: Daydream Scope, a development environment built for real-time AI workflows, and full SDXL support inside the Daydream API and Playground. Together, they push Daydream closer to becoming the open backbone of real-time video generation—a space that has quickly become essential to creative technologists, virtual production teams, and developers racing toward fully interactive AI systems.
Daydream Scope is a local, open-source toolkit designed for anyone building or experimenting with real-time video and world-model pipelines. It gives developers modular interfaces to plug in models for inference, control, and remixing—offering the flexibility needed to explore emerging real-time techniques.
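The "modular interfaces" idea can be pictured as a small stage-based pipeline: each model plugs in behind a common interface and frames flow through the stages in order. The names below are purely illustrative, not Scope's actual plugin API, which this article does not document. A minimal sketch:

```python
from typing import Protocol


class RealtimePipelineStage(Protocol):
    """Hypothetical interface for a pluggable real-time stage.

    Illustrative only: Scope's real plugin contract may differ.
    """

    def process(self, frame: bytes, params: dict) -> bytes: ...


class PassthroughStage:
    """Trivial stage that returns the frame unchanged."""

    def process(self, frame: bytes, params: dict) -> bytes:
        return frame


def run_pipeline(stages: list[RealtimePipelineStage], frame: bytes) -> bytes:
    """Push one frame through every stage in order."""
    for stage in stages:
        frame = stage.process(frame, params={})
    return frame
```

Swapping a model for inference, control, or remixing then amounts to replacing one stage, which is the flexibility the modular design is aiming for.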
Eric Tang, co-founder of Livepeer, the parent company behind Daydream, framed Scope as a foundational shift. “It gives creative technologists and builders an extensible workspace to experiment with real-time AI pipelines,” he said. Whether users are testing generative video models, constructing virtual production systems, or conducting academic research, Scope is designed as a shared sandbox where innovation can spread quickly.
Scope is still in community alpha, but it’s already delivering support for models such as LongLive, StreamDiffusionV2, and Krea Realtime 14B. New models are being added at a steady pace, underscoring the momentum behind open-source real-time video tooling.
Daydream’s SDXL release brings a substantial upgrade in quality and control. Built on StreamDiffusion’s open architecture, the SDXL integration merges multiple research threads into a production-ready stack. The result is high-fidelity, low-latency video generation that creators can tune and manipulate in real time.
Key capabilities include:
IPAdapter Style Control
Standard mode enables dynamic artistic style transfer, while FaceID mode maintains character consistency across frames.
Multi-ControlNet Precision
Support for HED, Depth, Pose, Tile, and Canny ControlNets delivers the spatial and temporal accuracy needed for fine-grained creative control.
TensorRT Acceleration
NVIDIA-optimized inference boosts playback to 15–25 FPS, even with complex model combinations.
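The quoted frame rates translate directly into a per-frame latency budget that every stage of the pipeline must fit inside. A quick back-of-envelope sketch:

```python
def frame_budget_ms(fps: float) -> float:
    """Per-frame latency budget in milliseconds at a given frame rate."""
    return 1000.0 / fps


# At 15 FPS the whole pipeline (ControlNets, IPAdapter, decode) shares
# roughly 66.7 ms per frame; at 25 FPS the budget tightens to 40 ms.
low_end = frame_budget_ms(15)
high_end = frame_budget_ms(25)
```

This is why TensorRT-style inference optimization matters: a few milliseconds saved per model is the difference between the low and high end of that range.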
Users who prefer SD1.5 haven’t been left behind, either: the company paired SD1.5 with accelerated IPAdapters to offer high-framerate style transfer and improved creative consistency.
The open ecosystem approach is paying off early. Developers like DotSimulate—the creator behind StreamDiffusionTD, a popular TouchDesigner tool—are already folding Daydream’s SDXL support into new applications. Others are experimenting with the Daydream API or self-hosting the open-source StreamDiffusion fork to build customized SDXL-based systems.
These early integrations reinforce Daydream’s vision: a real-time AI video stack that connects models, creators, and infrastructure in one coherent ecosystem.
As real-time video generation accelerates and world models inch closer to mainstream adoption, Daydream is establishing itself as one of the rare open platforms offering both speed and transparency. With Scope and SDXL now live, the company is setting the stage for the next wave of real-time AI creativity.