
Story-Flicks: Input topics to automatically generate children's short story videos

Post updated on 2025-03-12 18:47. Some of the content is time-sensitive; if something no longer works, please leave a message!

General Introduction

Story-Flicks is an open-source AI tool focused on helping users quickly generate HD story videos. Users only need to enter a story topic; the system generates the story content with a large language model, combines it with AI-generated images, audio, and subtitles, and outputs a complete video. The project's backend is built on Python and the FastAPI framework, and the frontend is built with React, Ant Design, and Vite. It supports model providers such as OpenAI, AliCloud, and DeepSeek, and users can flexibly choose text and image generation models. Whether you are creating children's stories, short animations, or teaching videos, Story-Flicks can easily meet the need, making it well suited to developers, creators, and educators.


Function List

  • Generate video with one click: Enter a story topic and the system automatically generates a video containing images, text, audio, and subtitles.
  • Multi-model support: Compatible with text and image models from OpenAI, Aliyun, DeepSeek, Ollama, and SiliconFlow.
  • Segment customization: Users can specify the number of story paragraphs, and each paragraph generates a corresponding image.
  • Multilingual output: Supports text and audio generation in multiple languages for users worldwide.
  • Open-source deployment: Offers both manual installation and Docker deployment for easy local operation.
  • Intuitive interface: The front-end page is easy to use and supports parameter selection and video preview.

 

Usage Help

Installation process

Story-Flicks offers two installation methods: manual installation and Docker deployment. The detailed steps below will help users set up the environment smoothly.

1. Manual installation

Step 1: Download the project
Clone the project locally by entering the following command in the terminal:


git clone https://github.com/alecm20/story-flicks.git

Step 2: Configure Model Information
Go to the backend directory and copy the environment configuration file:


cd backend
cp .env.example .env

Open the .env file and configure the text and image generation models. Example:


text_provider="openai" # text generation service provider, can choose openai, aliyun, deepseek, etc.
image_provider="aliyun" # image generation service provider, can choose openai, aliyun, etc.
openai_api_key="Your OpenAI key" # API key for OpenAI.
aliyun_api_key="Your Aliyun key" # Aliyun API key
text_llm_model="gpt-4o" # Text model, e.g. gpt-4o
image_llm_model="flux-dev" # Image model, e.g. flux-dev
  • If you choose OpenAI, gpt-4o is recommended as the text model and dall-e-3 as the image model.
  • If you choose AliCloud, qwen-plus or qwen-max is recommended as the text model and flux-dev as the image model (currently available for a free trial; see the AliCloud documentation for details).
  • Save the file when the configuration is complete; the sketch below shows how such values are typically read.
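As a rough illustration of how a backend might consume these settings, here is a minimal Python sketch using python-dotenv. This is not the project's actual loading code, just a sketch under that assumption:


import os
from dotenv import load_dotenv  # assumes the python-dotenv package is installed

load_dotenv()  # reads the .env file in the current directory into the environment

text_provider = os.getenv("text_provider", "openai")
openai_api_key = os.getenv("openai_api_key")

# Fail early if the chosen provider has no key configured.
if text_provider == "openai" and not openai_api_key:
    raise RuntimeError("openai_api_key must be set when text_provider is 'openai'")
print(f"Using text provider: {text_provider}")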

Step 3: Launch the backend
Go to the backend directory in the terminal, create the virtual environment and install the dependencies:


cd backend
conda create -n story-flicks python=3.10 # Create Python 3.10 environment
conda activate story-flicks # activate environment
pip install -r requirements.txt # Install dependencies
uvicorn main:app --reload # Start the backend service

After successful startup, the terminal will display:


INFO: Uvicorn running on http://127.0.0.1:8000 (Press CTRL+C to quit)
INFO: Application startup complete.

This indicates that the backend service is running at http://127.0.0.1:8000.
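To double-check that the service is reachable, you can fetch FastAPI's auto-generated docs page, which is served at /docs by default. A minimal check using only the Python standard library:


from urllib.request import urlopen

# FastAPI serves interactive API docs at /docs by default;
# an HTTP 200 here means the backend is up and reachable.
with urlopen("http://127.0.0.1:8000/docs") as resp:
    print(resp.status)  # expect: 200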

Step 4: Launch the front end
Go to the front-end directory in a new terminal, install the dependencies and run it:


cd frontend
npm install # Install frontend dependencies
npm run dev # start the frontend service

After successful startup, the terminal displays:


VITE v6.0.7 ready in 199 ms
➜ Local: http://localhost:5173/

Open http://localhost:5173/ in a browser to see the front-end interface.

2. Docker deployment

Step 1: Prepare the environment
Make sure that Docker and Docker Compose are installed locally; if not, download them from the official website.

Step 2: Start the project
Run it in the project root directory:

docker-compose up --build

Docker automatically builds and starts the front-end and back-end services. When it's done, visit http://localhost:5173/ to view the front-end page.

Usage

After installation, users can generate story videos through the front-end interface. The following is the specific operation flow:

1. Access the front-end interface

Enter http://localhost:5173/ in your browser to open the Story-Flicks main page.

2. Set the generation parameters

The interface provides the following options:

  • Text generation model provider: Select openai, aliyun, etc.
  • Image generation model provider: Select openai, aliyun, etc.
  • Text model: Enter the model name, e.g. gpt-4o or qwen-plus.
  • Image model: Enter the model name, e.g. flux-dev or dall-e-3.
  • Video language: Select a language, such as Chinese or English.
  • Voice type: Select an audio style, such as a male or female voice.
  • Story topic: Enter a theme, e.g. "The Adventures of the Rabbit and the Fox".
  • Number of story paragraphs: Enter a number (e.g. 3); each paragraph corresponds to one image.

3. Generate the video

After filling in the parameters, click the "Generate" button. The system generates the video according to these settings; the more paragraphs, the longer it takes. When finished, the video is displayed on the page and can be played back or downloaded.
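If you prefer to script this step rather than click through the UI, the same parameters can in principle be posted to the backend. The route and field names below are hypothetical, chosen to mirror the form fields above; consult the FastAPI docs at http://127.0.0.1:8000/docs for the project's actual API:


import json
from urllib.request import Request, urlopen

# The route and every field name below are illustrative guesses based on the
# form fields above, not the project's documented API.
payload = {
    "text_llm_provider": "openai",
    "image_llm_provider": "aliyun",
    "text_llm_model": "gpt-4o",
    "image_llm_model": "flux-dev",
    "video_language": "en",
    "voice_name": "female",
    "story_prompt": "The Adventures of the Rabbit and the Fox",
    "segments": 3,
}
req = Request(
    "http://127.0.0.1:8000/api/video/generate",  # hypothetical route
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
with urlopen(req) as resp:
    print(json.load(resp))  # response shape depends on the actual API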

Caveats

  • If generation fails, check that the API keys in the .env file are correct, or verify that your network connection is working.
  • When using Ollama, set ollama_api_key="ollama". qwen2.5:14b or larger models are recommended, as smaller models may not perform as well (see the example after this list).
  • For SiliconFlow, only the black-forest-labs/FLUX.1-dev image model has been tested so far, so make sure to select a compatible model.
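For reference, an Ollama setup in .env might look like the following. This is an illustrative sketch: it assumes text_provider accepts "ollama" and that you have already pulled qwen2.5:14b locally:


text_provider="ollama" # use a local Ollama server for text generation
ollama_api_key="ollama" # fixed placeholder value, as noted above
text_llm_model="qwen2.5:14b" # recommended minimum model size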

Feature Highlights

Generate full video with one click

In the interface, enter "The story of the wolf and the rabbit", set the number of paragraphs to 3, and click "Generate". After a few minutes you will get a video with three images, a voiceover, and subtitles. For example, the official demo videos show the stories "The Rabbit and the Fox" and "The Wolf and the Rabbit".

Multi-language support

Want to generate an English video? Set "Video Language" to "English" and the system will generate English text, audio, and subtitles. Switching to other languages is just as easy.

Customized Segmentation

Need a longer story? Set the number of paragraphs to 5 or more. Each paragraph generates a new image and the story expands accordingly.

With these steps, users can easily install and use Story-Flicks to quickly create HD story videos. Whether for personal entertainment or educational use, this tool can help you get creative!
