
StreamingT2V: A Dynamic and Scalable Technique for Generating Long Videos from Text

General Introduction

StreamingT2V is an open-source project developed by the Picsart AI Research team, focused on generating coherent, dynamic, and extendable long videos from textual descriptions. The technique uses an autoregressive approach that keeps the video temporally consistent, closely aligned with the descriptive text, and high in per-frame image quality. It can generate videos of up to 1200 frames, roughly two minutes long, with the potential to extend to even longer durations. The effectiveness of the technique is not tied to a specific Text2Video base model, so improvements in base models will further enhance the quality of the generated videos.
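Conceptually, the autoregressive scheme generates the video chunk by chunk, with each new chunk conditioned on frames of the previously generated one. The sketch below is a purely conceptual illustration of that loop; generate_chunk is a hypothetical stand-in for the underlying Text2Video model, not an actual StreamingT2V API.

```python
# Conceptual sketch only: generate a long video chunk by chunk, conditioning
# each new chunk on the tail frames of the previous one so that motion and
# appearance stay consistent. `generate_chunk` is a hypothetical stand-in for
# the underlying Text2Video model, not a real StreamingT2V function.
from typing import Callable, List

def generate_long_video(
    prompt: str,
    num_chunks: int,
    generate_chunk: Callable[[str, List], List],
    context_frames: int = 8,
) -> List:
    frames: List = []
    context: List = []                            # frames carried over from the previous chunk
    for _ in range(num_chunks):
        chunk = generate_chunk(prompt, context)   # condition on the previous chunk's tail
        frames.extend(chunk)
        context = chunk[-context_frames:]         # short-term memory for the next chunk
    return frames
```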

StreamingT2V Online Experience


 


Feature List

Generates videos of up to 1200 frames, roughly two minutes in length
Maintains temporal consistency and high per-frame image quality
Produces dynamic video that closely follows the text description
Supports multiple base models (e.g., ModelscopeT2V, AnimateDiff, SVD) to enhance the quality of generated videos
Supports both Text-to-Video and Image-to-Video generation
Provides a Gradio online demo

 

 

Usage Guide

Clone the project repository and install the required environment
Download the model weights and place them in the directory specified by the repository
Run the sample code for text-to-video or image-to-video generation (see the sketch after this list)
View the project page for detailed results and demos
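The snippet below shows one way to drive these steps from Python. The repository URL is the official Picsart AI Research GitHub repository; the script path (t2v_enhanced/inference.py) and the --prompt / --num_frames flags are assumptions about the project's command-line interface, so check the README for the exact commands and the expected weight locations.

```python
# Sketch of the setup and inference steps above; script path and flags are assumptions.
import subprocess

# 1. Clone the project repository (then install dependencies per its requirements file).
subprocess.run(
    ["git", "clone", "https://github.com/Picsart-AI-Research/StreamingT2V.git"],
    check=True,
)

# 2. After placing the downloaded weights where the README specifies, run
#    text-to-video inference (flag names below are illustrative assumptions).
subprocess.run(
    [
        "python", "inference.py",
        "--prompt", "A camel slowly walking across a desert at sunset",
        "--num_frames", "80",
    ],
    cwd="StreamingT2V/t2v_enhanced",
    check=True,
)
```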

 

Inference Times

 

ModelscopeT2V as a base model

 

Number of frames    Faster preview inference time (256×256)     Inference time for final result (720×720)
24 frames           40 seconds                                   165 seconds
56 frames           75 seconds                                   360 seconds
80 frames           110 seconds                                  525 seconds
240 frames          340 seconds                                  1610 seconds (about 27 minutes)
600 frames          860 seconds                                  5128 seconds (about 85 minutes)
1200 frames         1710 seconds (about 28 minutes)              10225 seconds (about 170 minutes)
AnimateDiff as a base model

 

Number of frames    Faster preview inference time (256×256)     Inference time for final result (720×720)
24 frames           50 seconds                                   180 seconds
56 frames           85 seconds                                   370 seconds
80 frames           120 seconds                                  535 seconds
240 frames          350 seconds                                  1620 seconds (about 27 minutes)
600 frames          870 seconds                                  5138 seconds (about 85 minutes)
1200 frames         1720 seconds (about 28 minutes)              10235 seconds (about 170 minutes)
SVD as a base model

 

Number of frames    Faster preview inference time (256×256)     Inference time for final result (720×720)
24 frames           80 seconds                                   210 seconds
56 frames           115 seconds                                  400 seconds
80 frames           150 seconds                                  565 seconds
240 frames          380 seconds                                  1650 seconds (about 27 minutes)
600 frames          900 seconds                                  5168 seconds (about 86 minutes)
1200 frames         1750 seconds (about 29 minutes)              10265 seconds (about 171 minutes)

All measurements were taken on an NVIDIA A100 (80 GB) GPU. When the number of frames exceeds 80, random blending is used, with chunk_size and overlap_size set to 112 and 32, respectively.
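As a rough illustration of what those two settings imply, the sketch below (not the project's actual implementation) lays out overlapping chunks covering a long video: consecutive chunks share overlap_size frames, over which the decoded results can be blended.

```python
# Illustrative layout of overlapping chunks for blending, using the reported
# settings chunk_size=112 and overlap_size=32. This only shows how chunk
# boundaries could be computed; it is not StreamingT2V's actual code.
def chunk_ranges(num_frames: int, chunk_size: int = 112, overlap_size: int = 32):
    """Yield (start, end) frame index ranges of overlapping chunks covering the video."""
    stride = chunk_size - overlap_size  # new frames contributed by each subsequent chunk
    start = 0
    while start < num_frames:
        end = min(start + chunk_size, num_frames)
        yield start, end
        if end == num_frames:
            break
        start += stride

# Example: a 240-frame video is covered by the chunks
# [(0, 112), (80, 192), (160, 240)], each sharing 32 frames with its neighbour.
print(list(chunk_ranges(240)))
```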
