
Lightricks Unveils Open-Source AI That Creates Videos With Sound in Seconds

Lightricks Breaks New Ground With LTX-2 AI Video Generator

In a move that could democratize video creation, Lightricks has open-sourced its cutting-edge LTX-2 system - an AI that produces high-quality videos complete with synchronized audio in mere seconds. This breakthrough challenges conventional approaches by handling sight and sound simultaneously rather than sequentially.

How It Works: Seeing and Hearing Together

The secret lies in LTX-2's dual-stream architecture. While most systems generate visuals first and then add sound, this model mirrors real-world perception by processing both streams concurrently. Its 19 billion parameters are split asymmetrically (14B for video, 5B for audio), devoting more capacity to the denser visual stream.
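The dual-stream idea can be illustrated with a toy numpy sketch. This is not LTX-2's actual code: the latent shapes, the update rule, and the `denoise_step` function are all invented for illustration. The point is only that both streams are updated inside one shared loop, with each stream conditioning on a summary of the other.

```python
import numpy as np

rng = np.random.default_rng(0)

def denoise_step(video, audio):
    """One toy joint-denoising step: each stream is nudged toward
    a summary of the other, keeping audio and video coupled."""
    video_ctx = audio.mean()   # cross-modal conditioning (toy scalar)
    audio_ctx = video.mean()
    video = video * 0.9 + 0.1 * video_ctx
    audio = audio * 0.9 + 0.1 * audio_ctx
    return video, audio

# Separate latents per stream; the video latent is wider, echoing
# the asymmetric capacity split described above.
video_latent = rng.standard_normal((16, 64))  # video stream
audio_latent = rng.standard_normal((16, 8))   # audio stream

for _ in range(10):  # one shared loop, both modalities per step
    video_latent, audio_latent = denoise_step(video_latent, audio_latent)

print(video_latent.shape, audio_latent.shape)
```

A sequential pipeline would instead run the video loop to completion, then fit audio to the finished frames; the shared loop is what lets tightly coupled events stay aligned.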

"Traditional methods create an artificial separation," explains the development team. "Our brains don't process a car crash visually then auditorily - we experience both together instantly."

Blazing Speed Meets Practical Applications

Performance tests reveal staggering efficiency:

  • Generates 720p content at 1.22 seconds per step
  • Operates 18x faster than comparable systems
  • Handles 20-second sequences, surpassing Google's benchmarks

The system particularly shines when depicting cause-and-effect scenarios, like matching glass-breaking sounds precisely with the visual moment of shattering.
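A quick back-of-envelope calculation shows what the per-step figure implies for a full clip. The article only states 1.22 seconds per step and the 18x speedup; the 8-step sampler count below is a hypothetical assumption, not a published detail.

```python
# Back-of-envelope generation time, using the article's per-step
# figure and a HYPOTHETICAL 8-step sampler.
SECONDS_PER_STEP = 1.22   # stated in the article
STEPS = 8                 # assumed, not from the article

ltx2_time = SECONDS_PER_STEP * STEPS        # total sampling time
baseline_time = ltx2_time * 18              # "18x faster" baseline

print(f"LTX-2: {ltx2_time:.2f} s, comparable system: {baseline_time:.2f} s")
# → LTX-2: 9.76 s, comparable system: 175.68 s
```

Under that assumption, a clip that LTX-2 samples in under ten seconds would take a comparable system nearly three minutes.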

Why Open-Source Matters

Founder Zeev Farbman emphasizes accessibility: "Creators should control their tools, not depend on corporate gatekeepers." The decision to release LTX-2 publicly contrasts sharply with competitors' closed ecosystems.

The model does face some limitations:

  • Occasional glitches with rare dialects or multi-speaker dialogue
  • Difficulty maintaining audio-visual sync beyond 20 seconds

But these hurdles seem minor compared to its transformative potential.

The complete framework is now available online, optimized for consumer-grade GPUs - meaning anyone with decent hardware can experiment with professional-grade audiovisual generation.

Key Points:

  • Simultaneous processing of audio and visual streams mimics human perception
  • Open-source model prioritizes creator control over walled gardens
  • Remarkable speed: Generates HD clips faster than competitors
  • Practical applications: Ideal for content creators needing quick, high-quality video production

