
Ollama in LangChain - JavaScript Integration

Post updated on 2025-03-14 00:03. Some of the content is time-sensitive.

Summary

This document describes how to integrate Ollama with LangChain to build powerful AI applications. Ollama is an open-source deployment tool for large language models, and LangChain is a framework for building applications based on language models. By combining the two, we can quickly deploy and use advanced AI models in a local environment.

Note: This document contains the core code snippets and detailed explanations. The complete code can be found in notebook/C5/ollama_langchain_javascript.

 

1. Environment setup

Configuring the Node.js environment

First, make sure you have Node.js installed on your system; you can download and install the latest version from the Node.js website.

Create a project and install dependencies

  1. Switch to the project directory and initialize it:
cd notebook/C5/ollama_langchain_javascript
npm init -y
  2. Install the necessary dependencies:
npm install @langchain/ollama @langchain/core @langchain/community zod
  3. In the package.json file, add "type": "module" to enable ES module support:
{
  "type": "module",
  // ... other configuration
}

 

2. Download the required model and initialize Ollama

Download the llama3.1 model

  1. Go to the official website https://ollama.com/download to download and install Ollama for your platform.
  2. Check out https://ollama.ai/library for all the available models.
  3. Use the ollama pull command to download an available LLM model (for example: ollama pull llama3.1).

The command-line output is shown below:

[Figure: Ollama in LangChain - JavaScript Integration-1]


Model storage location:

  • macOS: ~/.ollama/models/
  • Linux (or WSL): /usr/share/ollama/.ollama/models
  • Windows: C:\Users\Administrator\.ollama\models

Once the download is complete, make sure the Ollama service is running; you can check with:

ollama ps

[Figure: Ollama in LangChain - JavaScript Integration-2]
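If no service is running, you can start it manually (the desktop app normally starts it automatically):

ollama serve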

 

3. Basic usage examples

Simple conversation with Ollama

Run the base_chat.js file; the code is as follows:

import { Ollama } from "@langchain/community/llms/ollama";

const ollama = new Ollama({
  baseUrl: "http://localhost:11434", // make sure the Ollama service is running
  model: "llama3.1", // replace with the model you actually use
});

// Stream the answer and collect it chunk by chunk
const stream = await ollama.stream(`Are you better than GPT4? `);

const chunks = [];
for await (const chunk of stream) {
  chunks.push(chunk);
}
console.log(chunks.join(""));

Run the code:

node base_chat.js

This code does several things:

  1. Import the Ollama class and initialize it, specifying the model and base URL.
  2. Use the stream method to send the question to the model, which lets us receive the response chunk by chunk.
  3. Use a for await...of loop to collect all response chunks.
  4. Join the chunks and print the complete response.
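If streaming is not needed, the invoke method returns the complete answer in a single call; a minimal sketch reusing the ollama object defined above:

// Non-streaming variant: wait for the full response
const answer = await ollama.invoke(`Are you better than GPT4? `);
console.log(answer);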

Multimodal model use

Run the base_multimodal.js file; the code is as follows:

import { Ollama } from "@langchain/community/llms/ollama";
import * as fs from "node:fs/promises";

// Read the image file (replace with the image you want to ask about)
const imageData = await fs.readFile("../../../docs/images/C5-1-4.png");

const model = new Ollama({
  model: "llava",
  baseUrl: "http://127.0.0.1:11434",
}).bind({
  images: [imageData.toString("base64")],
});

const res = await model.invoke("What animal is in the image?");
console.log({ res });

Run the code:

node base_multimodal.js

This code demonstrates how to process image and text inputs using a multimodal model such as llava:

  1. Reads an image file and converts it to base64 encoding.
  2. Initializes the Ollama model and uses the bind method to attach the image data to the model.
  3. Uses the invoke method to send a question about the image.
  4. Prints the model's response.
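The bound model exposes the same interface as before, so streaming also works for image questions; a minimal sketch reusing the model object defined above:

// Stream the description instead of waiting for the complete answer
const imageStream = await model.stream("Describe the image in detail.");
for await (const chunk of imageStream) {
  process.stdout.write(chunk);
}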

Tool calling

Run the base_tool.js file; the code is as follows:

import { tool } from "@langchain/core/tools";
import { ChatOllama } from "@langchain/ollama";
import { z } from "zod";

// Define a simple calculator tool
const simpleCalculatorTool = tool(
  (args) => {
    const { operation, x, y } = args;
    switch (operation) {
      case "add":
        return x + y;
      case "subtract":
        return x - y;
      case "multiply":
        return x * y;
      case "divide":
        if (y !== 0) {
          return x / y;
        } else {
          throw new Error("Cannot divide by zero");
        }
      default:
        throw new Error("Invalid operation");
    }
  },
  {
    name: "simple_calculator",
    description: "Perform simple arithmetic operations",
    schema: z.object({
      operation: z.enum(["add", "subtract", "multiply", "divide"]),
      x: z.number(),
      y: z.number(),
    }),
  }
);

// Define the model
const llm = new ChatOllama({
  model: "llama3.1",
  temperature: 0,
});

// Bind the tool to the model
const llmWithTools = llm.bindTools([simpleCalculatorTool]);

// Ask a question; the model can respond with a tool call
const result = await llmWithTools.invoke(
  "Do you know what 10 million times two is? Please use the 'simple_calculator' tool to calculate it."
);
console.log(result);

Run the code:

node base_tool.js

This code shows how to define and use tools:

  1. Use the tool function to define a simple calculator tool, including its operation logic and parameter schema.
  2. Initialize the ChatOllama model.
  3. Use the bindTools method to bind the tool to the model.
  4. Use the invoke method to send a question that requires computation; the model responds with a call to the appropriate tool.
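Note that the model's response contains the requested tool call rather than the computed result; executing the tool is up to the caller. A minimal sketch, reusing result and simpleCalculatorTool from above:

// Execute the first tool call the model returned, if any
const toolCall = result.tool_calls?.[0];
if (toolCall) {
  const toolOutput = await simpleCalculatorTool.invoke(toolCall.args);
  console.log(toolOutput); // the computed value, e.g. 20000000
}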

 

4. Advanced usage

Customized prompt templates

Customized prompt templates not only improve the efficiency of content generation, but also ensure consistency and relevance of the output. With well-designed templates, we can fully utilize the capabilities of the AI model while maintaining control and guidance over the output content:

import { ChatOllama } from "@langchain/ollama";
import { ChatPromptTemplate, SystemMessagePromptTemplate, HumanMessagePromptTemplate } from "@langchain/core/prompts";

// Initialize the ChatOllama model
const model = new ChatOllama({
  model: "llama3.1",
  temperature: 0.7,
});

const systemMessageContent = `
You are an experienced e-commerce copywriting expert. Your task is to create engaging item descriptions based on the given product information.
Make sure your description is concise, powerful, and highlights the core benefits of the product.
`;

const humanMessageTemplate = `
Please create an engaging product description for the following product:
Product type: {product_type}
Core features: {key_feature}
Target audience: {target_audience}
Price range: {price_range}
Brand positioning: {brand_positioning}
Please provide one description in each of the following three styles, approximately 50 words each:
1. Rational analysis
2. Emotional appeal
3. Storytelling
`;

// Build the prompt and connect it to the model
const prompt = ChatPromptTemplate.fromMessages([
  SystemMessagePromptTemplate.fromTemplate(systemMessageContent),
  HumanMessagePromptTemplate.fromTemplate(humanMessageTemplate),
]);
const chain = prompt.pipe(model);

async function generateProductDescriptions(productInfo) {
  const response = await chain.invoke(productInfo);
  return response.content;
}

// Example usage
const productInfo = {
  product_type: "Smartwatch",
  key_feature: "Heart rate monitoring and sleep analytics",
  target_audience: "Health-conscious young professionals",
  price_range: "$200-$300", // hypothetical value; the original omits this field, but the template requires it
  brand_positioning: "The perfect combination of technology and health",
};

generateProductDescriptions(productInfo)
  .then((result) => console.log(result))
  .catch((error) => console.error("Error:", error));

Run the code:

node advanced_prompt.js

This code shows how to create and use a custom prompt template:

  1. Define system message and human message templates.
  2. Use ChatPromptTemplate.fromMessages to create the complete prompt template.
  3. Use the pipe method to connect the prompt template to the model, creating a processing chain.
  4. Define a function that generates product descriptions by running the input product information through the chain.
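Since the chain is a standard LangChain Runnable, it also supports batch processing; a minimal sketch reusing the chain above (the second product is a hypothetical example):

// Generate descriptions for several products in one batched call
const results = await chain.batch([
  productInfo,
  {
    product_type: "Wireless earbuds", // hypothetical example product
    key_feature: "Active noise cancellation",
    target_audience: "Commuters",
    price_range: "$100-$150",
    brand_positioning: "Quiet, anywhere",
  },
]);
results.forEach((r) => console.log(r.content));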

Custom prompt templates have a wide range of uses in practice, especially when you need to generate content in a specific format or style. Below are some practical application scenarios:

  1. E-commerce Product Description Generation: As shown in this example, it can be used to automatically generate different styles of product descriptions to improve the attractiveness and conversion rate of product pages.
  2. Customer Service Response Templates: You can create response templates for various scenarios, such as handling complaints, providing product information, etc., to ensure the consistency and professionalism of customer service responses.
  3. News Report Generation: Templates can be designed to generate different types of news reports, such as breaking news, in-depth analysis, etc., to help journalists quickly draft their first drafts.
  4. Personalized marketing emails: Generate personalized marketing email content based on customer data and marketing objectives to improve the effectiveness of email marketing.

Advanced JSON Output and Knowledge Graph Generation

In this example, we show how Ollama and LangChain can be used to generate structured JSON output, in particular for creating knowledge graphs. This approach is closely related to Microsoft's open-source GraphRAG project, especially in automated knowledge extraction and triple generation.

import { ChatOllama } from "@langchain/ollama";
import { PromptTemplate } from "@langchain/core/prompts";
import { HumanMessage, SystemMessage } from "@langchain/core/messages";

const systemTemplate = `
You are an expert in the medical field and specialize in creating knowledge graphs. Please format all responses as JSON objects with the following structure:
{
  "node": [
    {"id": "string", "label": "string", "type": "string"}
  ],
  "relationship": [
    {"source": "string", "target": "string", "relationship": "string"}
  ]
}
Ensure that all node ids are unique and that the relationships reference existing node ids.
`;

const humanTemplate = `
Please create a knowledge graph for the medical topic "{topic}". Include the following related concepts: {concepts}.
Provide at least 5 nodes and 5 relationships. Make sure to answer in Chinese.
`;

const systemMessage = new SystemMessage(systemTemplate);
const humanPrompt = PromptTemplate.fromTemplate(humanTemplate);

const llmJsonMode = new ChatOllama({
  baseUrl: "http://localhost:11434", // default value
  model: "llama3.1",
  format: "json", // ask Ollama to return JSON
});

async function generateMedicalKnowledgeGraph(topic, concepts) {
  try {
    const humanMessageContent = await humanPrompt.format({
      topic: topic,
      concepts: concepts.join(", "),
    });
    const humanMessage = new HumanMessage(humanMessageContent);
    const messages = [systemMessage, humanMessage];
    const result = await llmJsonMode.invoke(messages);
    console.log(JSON.stringify(result, null, 2));
    return result;
  } catch (error) {
    console.error("Error generating knowledge graph:", error);
  }
}

// Example usage ("diabetes" is an assumed topic; the original omits this definition)
const topic = "diabetes";
const concepts = ["insulin", "blood glucose", "complications", "dietary management", "exercise therapy"];
generateMedicalKnowledgeGraph(topic, concepts);

Run the code:

node advanced_json.js

This code demonstrates how to generate structured JSON output using Ollama:

  1. Defines a system template that specifies the desired JSON structure.
  2. Creates a human prompt template for knowledge-graph generation requests.
  3. Initializes the ChatOllama model with format: "json" to request JSON output.
  4. Defines a function that generates a medical knowledge graph by combining the system and human messages and invoking the model.
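Because the output is constrained to JSON, it can be consumed programmatically. A minimal sketch of parsing the returned graph, assuming the function above and the structure requested in the system template:

// Await the graph and print its relationships
const kg = await generateMedicalKnowledgeGraph(topic, concepts);
const graph = JSON.parse(kg.content);
for (const { source, target, relationship } of graph.relationship) {
  console.log(`${source} --[${relationship}]--> ${target}`);
}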

The structured JSON output suggests several ways this approach can be put to use:

  1. Automated triple generation: In traditional approaches, building a knowledge graph usually requires extensive manual annotation. Experts must carefully read documents, identify important concepts and their relationships, and then manually create triples (subject-relationship-object). This process is not only time-consuming but also error-prone, especially with large document collections. With our approach, the large language model automatically generates relevant nodes and relationships from given topics and concepts, greatly accelerating knowledge graph construction.
  2. Data augmentation: This approach can be used not only to generate knowledge graphs directly but also to augment data. For example:
    • Extend an existing training dataset by having the model generate new, related triples based on existing ones.
    • Create diverse examples: generating knowledge graphs for different medical conditions adds diversity to the data.
    • Cross-language data generation: by adjusting the prompts, knowledge graphs can be generated in different languages to support multilingual applications.
  3. Improved data quality: Large language models can leverage their broad knowledge base to generate high-quality, coherent knowledge graphs. With well-designed prompts, we can ensure the generated data meets specific quality standards and domain conventions.
  4. Flexibility and scalability: This approach is very flexible and easily adapts to different domains and needs:
    • By modifying the system prompt, we can change the JSON structure of the output to accommodate different knowledge graph formats.
    • The approach extends easily to other domains, such as technology, finance, and education.

 

Conclusion

With these examples, we show how to use Ollama and LangChain to build a variety of AI applications in a JavaScript environment, from simple dialog systems to complex knowledge graph generation. These tools and techniques provide a solid foundation for developing powerful AI applications.

The combination of Ollama and LangChain provides developers with great flexibility and possibilities. You can choose the right models and components according to your specific needs and build an AI system that suits your application scenario.

As the technology continues to evolve, we expect to see more innovative applications emerge. Hopefully, this guide will help you get started on your AI development journey and inspire your creativity to explore the endless possibilities of AI technology.
