Open MCP Client: Web-based MCP client to quickly connect to any MCP service.
General Introduction Open MCP Client is an open-source tool whose biggest highlight is a web-based MCP (Model Context Protocol) client, letting users connect to any MCP server and chat without installing anything. It also ...
A prompt for generating a simple product requirements document (PRD)
Prompt You are a senior product manager, and your goal is to create a comprehensive product requirements document (PRD) based on the following instructions. {{PROJECT_DESCRIPTION}} <...
Optimal Text Segment Selection and URL Reranking in DeepSearch/DeepResearch
If you have read Jina's previous classic article, "Design and Implementation of DeepSearch/DeepResearch", you may want to dig deeper into some details that can greatly improve answer quality. This time, we focus on two of them: extracting the optimal text segments from long web pages...
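As a taste of the first detail, here is a minimal sketch of embedding-based snippet selection: chunk a long page, score each chunk against the query by cosine similarity, and keep the top-k. The chunking and the embedding model are left abstract, and the article itself describes a more refined scheme, so treat this as an illustration rather than Jina's implementation.

```python
import numpy as np

def best_snippets(chunks, chunk_embeddings, query_embedding, k=3):
    """Rank text chunks of a long page by cosine similarity to the query.

    A minimal sketch; `chunks` is a list of text segments and the
    embeddings are assumed to come from some embedding model.
    """
    E = np.asarray(chunk_embeddings, dtype=float)
    q = np.asarray(query_embedding, dtype=float)
    scores = E @ q / (np.linalg.norm(E, axis=1) * np.linalg.norm(q) + 1e-9)
    top = np.argsort(scores)[::-1][:k]
    return [(chunks[i], float(scores[i])) for i in top]
```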
VACE: An all-in-one model for video creation and editing (not yet open-sourced)
Comprehensive Introduction VACE is a project developed by Alibaba's Tongyi Visual Intelligence Lab (ali-vilab), focused on video creation and editing. It is an all-in-one tool that integrates multiple capabilities, such as generating videos from references, editing existing video content, making localized modifications, and other...
Take a tour of Gemini 2.0 Flash's native image generation and editing capabilities.
Last December, Gemini 2.0 Flash first showed its native image output capability to select testers. Now developers can try this new feature in every region supported by Google AI Studio. Developers can ...
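For the curious, a short sketch of calling the feature through the google-genai Python SDK follows; the model name `gemini-2.0-flash-exp` and the `response_modalities` setting reflect the experimental image-output variant at the time of writing and may change, so treat them as assumptions.

```python
from google import genai
from google.genai import types

# Assumes the google-genai SDK and an API key from Google AI Studio.
client = genai.Client(api_key="YOUR_API_KEY")
response = client.models.generate_content(
    model="gemini-2.0-flash-exp",  # assumed experimental image-output model
    contents="Generate an image of a paper airplane gliding over a city at dusk.",
    config=types.GenerateContentConfig(response_modalities=["TEXT", "IMAGE"]),
)
for part in response.candidates[0].content.parts:
    if part.inline_data:  # image bytes come back as inline data
        open("out.png", "wb").write(part.inline_data.data)
    elif part.text:
        print(part.text)
```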
Ollama API User's Guide
Introduction Ollama provides a powerful REST API that lets developers interact with large language models with ease. With the Ollama API, users can send requests and receive model-generated responses for tasks such as natural language processing and text generation. This guide will ...
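As a flavor of what the guide covers, here is a minimal call to Ollama's local REST endpoint in Python; `llama3.2` is a placeholder for whatever model you have pulled.

```python
import requests

# Ollama listens on port 11434 by default; /api/generate returns a
# single JSON object when streaming is disabled.
resp = requests.post(
    "http://localhost:11434/api/generate",
    json={"model": "llama3.2", "prompt": "Why is the sky blue?", "stream": False},
    timeout=120,
)
print(resp.json()["response"])
```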
Customizing Ollama to Run on GPUs
Windows The following example shows how to configure Ollama to run on the GPU on a Windows system. By default, Ollama uses the CPU for inference. For faster inference, you can configure Ollama to use...
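One way to pin the server to a specific GPU is the standard `CUDA_VISIBLE_DEVICES` environment variable; the sketch below launches `ollama serve` with it set, though the guide itself may configure this through Windows system settings instead.

```python
import os
import subprocess

env = os.environ.copy()
# Expose only the first CUDA GPU to Ollama; "0" is the device index
# reported by nvidia-smi (an illustrative assumption).
env["CUDA_VISIBLE_DEVICES"] = "0"
subprocess.run(["ollama", "serve"], env=env)
```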
Ollama Custom Model Storage Locations
Taking Windows as an example: the models Ollama pulls are stored on the C drive by default, so pulling several models can quickly fill the drive and squeeze its storage space. This section therefore shows how to customize the model storage location for Ollama on Windows, Linux, and Mac...
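Ollama reads the `OLLAMA_MODELS` environment variable for its model store. A quick illustration follows (the `D:\ollama\models` path is just an example); the guide presumably sets the variable persistently through the system environment settings rather than per-process.

```python
import os
import subprocess

env = os.environ.copy()
# Redirect Ollama's model store off the C drive; the path is illustrative.
env["OLLAMA_MODELS"] = r"D:\ollama\models"
subprocess.run(["ollama", "serve"], env=env)
```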
Ollama Custom Model Import
Introduction This section covers how to use a Modelfile to import custom models, divided into the following parts: importing from GGUF; importing from PyTorch or Safetensors; importing directly from a model; customizing the prompt ...
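For the GGUF case, the workflow boils down to writing a one-line Modelfile and running `ollama create`; the sketch below scripts that in Python, with the file and model names being placeholders.

```python
from pathlib import Path
import subprocess

# Point the Modelfile at a local GGUF file (path is illustrative),
# then register it with Ollama under a custom name.
gguf_path = Path("./llama-3-8b.Q4_K_M.gguf")
modelfile = Path("Modelfile")
modelfile.write_text(f"FROM {gguf_path}\n")
subprocess.run(["ollama", "create", "my-llama3", "-f", str(modelfile)], check=True)
```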
Ollama Installation and Configuration - Docker Edition
Introduction This section covers how to install and configure Ollama in Docker. Docker is a container virtualization technology based on images that can spin up containers in seconds. Each container is a complete runtime environment, enabling isolation between containers...
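The commonly documented way to run Ollama in Docker maps port 11434 and mounts a named volume for models; below is that same setup expressed with the Docker SDK for Python, purely as an illustration of the configuration.

```python
import docker

# Mirrors the commonly documented command:
#   docker run -d -v ollama:/root/.ollama -p 11434:11434 --name ollama ollama/ollama
client = docker.from_env()
client.containers.run(
    "ollama/ollama",
    name="ollama",
    detach=True,
    ports={"11434/tcp": 11434},  # container port -> host port
    volumes={"ollama": {"bind": "/root/.ollama", "mode": "rw"}},
)
```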