General Introduction
FlowDown is a lightweight, efficient AI conversation client built with Swift and UIKit, designed to deliver a fast and smooth intelligent conversation experience. The app comes in two editions: the standard version (FlowDown), available for download on the App Store with a built-in free cloud model and support for more advanced features, and the community version (FlowDown Community), which will soon be open-sourced on GitHub and is free for users to access and customize. FlowDown supports all OpenAI-compatible service interfaces, so users can connect their own Large Language Models (LLMs), and it is designed with privacy protection as a core principle to ensure data security. Its standout feature is extremely fast text rendering for a seamless conversation experience.
FlowDown macOS interface
FlowDown iOS interface
Feature List
- Extremely fast text rendering: Conversation text is displayed with virtually no delay, providing a smooth experience.
- Markdown Support: Output in rich text format for easy reading and organization.
- Universal compatibility: All OpenAI-compliant service providers are supported.
- Privacy by Design: User data is not collected by default, conversations stay local.
- Automatic chat titles: Conversation titles are generated intelligently to make management easier.
- Built-in LLM support: The standard version supports both offline LLMs (based on MLX) and vision LLMs.
- Attachment upload: The standard version supports uploading files to interact with AI.
- Web Search: The standard version has a built-in search function to enhance access to information.
Usage Guide
Installation
FlowDown is available in two ways: download the standard version from the App Store, or get the Community Edition source code from GitHub. Here are the detailed installation steps:
Standard Edition (FlowDown)
- System requirements:
- iOS 16.0 or later (iPhone and iPad supported).
- macOS 13.0 or later.
- Download the app:
- Open the App Store and search for "FlowDown".
- Tap "Get" to download and install the app.
- Launch the app:
- Once installation is complete, tap the app icon to open it; the free cloud model is configured automatically on first launch.
Community Edition (FlowDown Community)
- System requirements: Same as the standard version; see GitHub for dependencies.
- Download the source code:
- Visit https://github.com/Lakr233/FlowDown-App.
- Click the "Code" button and select "Download ZIP" or use the Git command:
git clone https://github.com/Lakr233/FlowDown-App.git
- Prepare the environment:
- Make sure Xcode (for Swift development) is installed.
- Open a terminal, go to the project directory and run the dependency installation (if any):
pod install # if using CocoaPods
- Build and run:
- Open the .xcodeproj file with Xcode.
- Select the target device (simulator or physical device) and click the "Run" button to build and run.
Main Feature Walkthrough
Extremely Fast Text Rendering with Markdown Support
- Launch the app: Open FlowDown and go to the main screen.
- Enter a question: Type text into the input box, e.g. "Write me a note in Markdown format".
- Get a reply: Tap Send; the Markdown-formatted text returned by the AI is rendered almost instantly, with headings, lists, and other elements displayed clearly.
- Adjust the display: Long-press the text area to adjust the font size or copy the content.
Customizing LLM Service Access
- Go to Settings:
- Click on the gear icon in the upper right corner to enter the settings page.
- Add a service:
- Select Connect AI Services and enter an OpenAI-compatible API key and endpoint, for example:
API Key: sk-xxxxxxxxxxxxxxxxxxxxxxxxxxx
Endpoint: https://api.openai.com/v1
- Test the connection:
- Click "Test" to confirm the service is available, then save.
- Switch models:
- Select the newly added service in the conversation screen to start using it.
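The steps above work with any provider that speaks the OpenAI chat-completions protocol. As a minimal sketch of what such a request looks like (the model name and key below are placeholders, not values FlowDown ships with):

```python
import json

# Placeholder credentials -- substitute your own provider's values.
API_KEY = "sk-xxxxxxxxxxxxxxxxxxxxxxxxxxx"
ENDPOINT = "https://api.openai.com/v1"

def build_chat_request(model: str, user_message: str) -> dict:
    """Build the JSON body for an OpenAI-compatible /chat/completions call."""
    return {
        "model": model,
        "messages": [
            {"role": "user", "content": user_message},
        ],
    }

body = build_chat_request("gpt-4o-mini", "Write me a note in Markdown format")
# The body would be POSTed to f"{ENDPOINT}/chat/completions"
# with the header {"Authorization": f"Bearer {API_KEY}"}.
print(json.dumps(body, indent=2))
```

Any service that accepts this request shape at its `/chat/completions` path should work when entered as a custom endpoint.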
Automatic Chat Titles
- Start a new conversation:
- Click on the "New dialog" button and enter a question.
- Generate a title:
- The system automatically generates a title based on the input content, such as "Markdown note writing".
- Manage conversations:
- View all conversations in the list on the left and click a title to switch quickly.
Feature Highlights
Extremely fast text rendering
- Experience: After entering a question, the response text is displayed almost instantly, with no waiting for it to load.
- Usage Scenarios: Ideal for users who need real-time interaction, such as taking quick notes on inspiration or handling long conversations.
- Optimization: The rendering priority can be adjusted in the settings to keep things smooth even on lower-end devices.
Privacy by Design
- Local storage: Conversation data is saved on the device by default and is not uploaded to the cloud.
- Service options: Users can manually specify the LLM service to avoid data leakage.
- Privacy policy: The settings page links to a detailed description of how data is handled.
Standard Edition Exclusive Features: Offline LLM and Attachment Uploading
- Offline LLM (based on MLX):
- Select "Local Model" in the settings.
- Download a supported MLX model file (refer to the official documentation for supported models).
- Import the model and use it offline, no network required.
- Attachment upload:
- Click the "+" button next to the dialog box.
- Select a file (supports images, PDFs, etc.), upload it, and then enter a command, such as "Analyze this image".
- The AI returns the results of the analysis, which are displayed in the dialog area.
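For reference, OpenAI-compatible vision endpoints typically receive image attachments as base64-encoded content parts inside a message. A hedged sketch of that request shape (the field names follow the OpenAI chat convention; this illustrates the protocol, not FlowDown's internal code):

```python
import base64
import json

def build_image_message(image_bytes: bytes, prompt: str) -> dict:
    """Pair a text prompt with a base64-encoded image in the
    content-parts format used by OpenAI-compatible vision endpoints."""
    b64 = base64.b64encode(image_bytes).decode("ascii")
    return {
        "role": "user",
        "content": [
            {"type": "text", "text": prompt},
            {
                "type": "image_url",
                "image_url": {"url": f"data:image/png;base64,{b64}"},
            },
        ],
    }

# Example: attach raw PNG bytes alongside the instruction from above.
msg = build_image_message(b"\x89PNG...", "Analyze this image")
print(json.dumps(msg)[:80])
```

The message then goes into the `messages` array of an ordinary chat-completions request, so only the content format differs from a plain text question.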
Web search (standard version)
- Activate search:
- After entering your question in the dialog box, click on the "Search" icon.
- Get results:
- The system returns web page information that is integrated into the AI response.
- Fine-tune the experience:
- You can adjust the search priority or turn off this feature in the settings.
Usage Recommendations
- Network: Keep a stable connection when using cloud-based models.
- Hardware: Offline LLMs require a capable device; at least 8 GB of RAM is recommended.
- Documentation: If you run into problems, visit https://apps.qaq.wiki/docs/flowdown/ for the official guide.