General Introduction
MindSearch is an open-source AI search engine framework released by the Shanghai Artificial Intelligence Laboratory. It aims to simulate the human thought process for collecting and integrating complex information. The tool combines large language models (LLMs) with search engines: through a multi-agent framework it autonomously gathers and integrates information from hundreds of web pages and returns a comprehensive answer in a short time. Users can deploy their own search engine with closed-source LLMs (e.g., GPT, Claude) or open-source LLMs (e.g., the InternLM2.5 series).
The core logic is a multi-agent framework that models the human thought process with two key components: the WebPlanner (planner) and the WebSearcher (executor).
- WebPlanner decomposes the user's question and builds a directed acyclic graph (DAG) to guide the search;
- WebSearcher retrieves and filters valuable information from the Internet and passes it back to WebPlanner;
- WebPlanner then produces the final conclusion (a minimal sketch of this split follows the list).
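The sketch below is a minimal, hypothetical illustration of this planner/searcher split, not the MindSearch implementation: the question decomposition, the node names, and the fake_search helper are made up, and it only shows how sub-questions arranged in a DAG can be answered in dependency order before a final synthesis.

```python
# Hypothetical planner/searcher sketch (not MindSearch's actual code).
from graphlib import TopologicalSorter


def fake_search(query: str) -> str:
    """Stand-in for a WebSearcher call that would hit a real search engine."""
    return f"<summarised web results for: {query!r}>"


# The planner decomposes the user question into sub-questions (nodes) and
# records which sub-questions depend on the answers of others (edges).
question = "Which of the two latest InternLM2.5 models has more downloads?"
dag = {
    "list latest InternLM2.5 models": set(),
    "downloads of model A": {"list latest InternLM2.5 models"},
    "downloads of model B": {"list latest InternLM2.5 models"},
    "final comparison": {"downloads of model A", "downloads of model B"},
}

answers = {}
for node in TopologicalSorter(dag).static_order():
    # Each node is handled by a searcher; upstream answers provide context.
    context = {dep: answers[dep] for dep in dag[node]}
    answers[node] = fake_search(f"{node} | context: {context}")

print(answers["final comparison"])  # the planner's synthesized conclusion
```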
Feature List
- Multi-agent framework: complex information is gathered and integrated by multiple agents working in tandem.
- Multiple LLMs supported: compatible with both closed-source and open-source large language models, so users can pick the model that fits their needs.
- Multiple front-end interfaces: React, Gradio, Streamlit, and other front-ends are provided for convenience.
- Deep knowledge exploration: provides broad and in-depth answers by navigating hundreds of web pages.
- Transparent solution path: exposes the full thought path, search keywords, and other details to increase the credibility and usability of responses.
Usage Guide
Installation
- Install dependencies:
```bash
git clone https://github.com/InternLM/MindSearch
cd MindSearch
pip install -r requirements.txt
```
- Configure environment variables: rename the `.env.example` file to `.env` and fill in the required values.
```bash
mv .env.example .env
# Open the .env file and add your key and model configurations
```
- Launch the MindSearch API: start the FastAPI server (a quick way to query it is sketched after this list).
```bash
python -m mindsearch.app --lang en --model_format internlm_server --search_engine DuckDuckGoSearch
```
Parameter description:
- `--lang`: language of the model; `en` for English, `cn` for Chinese.
- `--model_format`: format of the model; `internlm_server` for the InternLM2.5-7B-chat local server, `gpt4` for GPT-4.
- `--search_engine`: search engine; DuckDuckGo, Bing, Brave, Google, etc. are supported.
- Launch the MindSearch front-end: the following front-end interfaces are available:
- React:
```bash
cd frontend/React
npm install
npm start
```
- Gradio:
```bash
python frontend/mindsearch_gradio.py
```
- Streamlit:
```bash
streamlit run frontend/mindsearch_streamlit.py
```
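Once the backend from the launch step above is running, a quick smoke test from Python can confirm that it responds. This is a hedged sketch, not part of the official docs: the port (8002), the /solve route, and the `inputs` payload field are assumptions based on what the bundled front-ends typically call; check `mindsearch/app.py` and the front-end scripts for the actual values.

```python
# Assumed host/port/route and payload shape; verify against mindsearch/app.py.
import requests

url = "http://localhost:8002/solve"
payload = {"inputs": "What is the weather like in Shanghai today?"}

with requests.post(url, json=payload, stream=True, timeout=300) as resp:
    resp.raise_for_status()
    for chunk in resp.iter_lines(decode_unicode=True):
        if chunk:
            print(chunk)  # each chunk carries part of the agent's streamed output
```

If the route or payload differs in your version, the Gradio and Streamlit front-end scripts are the quickest reference for the exact request format.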
Usage Process
- Ask a question: users enter a query through the front-end interface, and MindSearch collects and integrates information via its multi-agent framework.
- View results: MindSearch displays the thought path, search keywords, and other details alongside the results to increase the credibility and usability of the replies.
- Adjust the search engine: users can change the search engine type to suit their needs, for example by switching to the Brave Search API (a quick environment check follows the snippet):
```python
# Searcher configuration excerpt: switch to the Brave Search API.
# The api_key falls back to the BRAVE_API_KEY environment variable,
# so setting it in .env is usually enough.
BingBrowser(searcher_type='BraveSearch',
            topk=2,
            api_key=os.environ.get('BRAVE_API_KEY', 'YOUR BRAVE API'))
```
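As a small aid, the snippet below only checks that the BRAVE_API_KEY referenced above is actually visible to the process; it is an illustrative convenience, not part of MindSearch, and it makes no call to the Brave API.

```python
# Illustrative check only: verifies BRAVE_API_KEY is set in the environment
# (e.g., via .env or an export) before relying on the fallback above.
import os

if os.environ.get("BRAVE_API_KEY"):
    print("BRAVE_API_KEY is set; the searcher can fall back to it.")
else:
    print("BRAVE_API_KEY is missing; pass api_key explicitly or update .env.")
```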