Crafting Conversations with LangChain.js and Ollama: A Quickstart Guide
In this blog post, we'll explore how to work with LLMs using Ollama and LangChain.js. We'll start by manually calling the Ollama API with curl locally and handling streaming responses. Then, we'll move on to using LangChain.js in Node.js to communicate with the Ollama API/model.
Understanding LangChain
LangChain is a powerful framework designed for creating applications driven by language models, offering features like context awareness and reasoning capabilities. This article explores LangChain.js, part of the LangChain framework, and demonstrates how to build a simple application using LangChain.js in conjunction with Ollama models.
Components of LangChain
LangChain comprises various components, including:
- LangChain Libraries: Python and JavaScript libraries containing interfaces, integrations for components, a basic runtime for combining these components into chains and agents, and off-the-shelf implementations of chains and agents.
- LangChain Templates: A collection of easily deployable reference architectures for various tasks (Python only).
- LangServe: A library for deploying LangChain chains as a REST API (Python only).
- LangSmith: A developer platform for debugging, testing, evaluating, and monitoring chains built on any language model framework.
Prerequisites
Before we delve into building our application, ensure you have the following prerequisites:
- A Linux-based system with NVIDIA CUDA support
- Node.js ≥ 18.16.0
- 16 GB RAM
Part 1: Manually Calling the Ollama API with Streaming Responses
Setting Up the Environment
First, ensure you have Ollama installed on your machine. Ollama allows you to run large language models (LLMs) locally, providing tools and APIs to interact with these models without sending data to third-party services. You can download and install Ollama from the official Ollama website.
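At the time of writing, the official Linux install is a one-line script (check the Ollama website for the current instructions for your platform):

```bash
curl -fsSL https://ollama.com/install.sh | sh
```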
Installing Llama 3 Locally
Before proceeding, make sure you have Llama 3 installed locally. You can do this by running the following command:
```bash
ollama pull llama3
```
This command downloads the Llama 3 model, making it available for local use.
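You can confirm the download succeeded by listing the models available locally:

```bash
ollama list
```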
Making a Manual API Call
To manually call the Ollama API using curl, follow these steps:
- Start the Ollama server: Ensure the Ollama server is running. You can start it with:

```bash
ollama serve
```

- Send a request using curl: Use curl to send a request to the Ollama API. Here's an example command that sends a prompt to the model and receives a response:

```bash
curl http://localhost:11434/api/generate -d '{ "model": "llama3", "prompt": "Why is the sky blue?" }'
```
Handling Streaming Responses
The /api/generate endpoint actually streams by default; setting "stream": true simply makes this explicit (use "stream": false if you want a single JSON response instead). With streaming on, the connection stays open and data arrives as it is generated by the model:
```bash
curl http://localhost:11434/api/generate -d '{
  "model": "llama3",
  "prompt": "Why is the sky blue?",
  "stream": true
}'
```
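Each line of the streamed output is a self-contained JSON object, and the final object reports "done": true along with timing statistics. The shape looks roughly like this (the values are illustrative, and the final object is trimmed here):

```json
{"model":"llama3","created_at":"2024-01-01T00:00:00Z","response":"The","done":false}
{"model":"llama3","created_at":"2024-01-01T00:00:01Z","response":" sky","done":false}
{"model":"llama3","created_at":"2024-01-01T00:00:02Z","response":"","done":true,"total_duration":1532000000}
```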
Part 2: Using LangChain.js to Communicate with the Ollama API/Model
LangChain.js is a JavaScript library for building applications with LLMs, providing a seamless way to interact with the Ollama API.
Setting Up LangChain.js
First, install LangChain.js in your Node.js project:
```bash
mkdir langchain-quickstart
cd langchain-quickstart
npm init -y
npm install @langchain/community --save-dev
npm install langchain --save-dev
npm install dotenv
```
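The examples that follow use ES module import syntax and top-level await, so the package.json generated by npm init -y needs to declare the project as an ES module. Add this field to it:

```json
{
  "type": "module"
}
```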
Using LangChain.js to Call the Ollama API
Here’s an example of how to use LangChain.js in a Node.js application to communicate with the Ollama API:
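Create an index.js file like the following (a minimal sketch, assuming the Ollama server from Part 1 is running on localhost:11434 and the llama3 model has been pulled):

```js
// index.js
import { Ollama } from '@langchain/community/llms/ollama'

// Point the client at the local Ollama server and select the model pulled earlier.
const ollama = new Ollama({
  baseUrl: 'http://localhost:11434',
  model: 'llama3',
})

// Send a single prompt and print the complete response once generation finishes.
const response = await ollama.invoke('Why is the sky blue?')
console.log(response)
```

Run it with node index.js, and the model's answer is printed to the console.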
Using LangChain.js with Prompt Templates
LangChain.js allows you to create prompt templates for more structured interactions. Here’s an example:
- Update the index.js file to use a prompt template:
```js
import { Ollama } from '@langchain/community/llms/ollama'
import { PromptTemplate } from '@langchain/core/prompts'

// Connect to the local Ollama server. tinyllama is a small, fast model;
// pull it first with: ollama pull tinyllama
const ollama = new Ollama({
  baseUrl: 'http://localhost:11434',
  model: 'tinyllama',
})

// A reusable template; {subject} is filled in when the chain is invoked.
const promptTemplate = PromptTemplate.fromTemplate('Write a lullaby on this subject: {subject}')

// Pipe the template's output into the model to form a simple chain.
const chain = promptTemplate.pipe(ollama)

const response = await chain.invoke({
  subject: 'Butterflies',
})
console.log(response)
```
- Run the Node.js application:
```bash
node index.js
```
This script generates a poetic response based on the provided subject, showcasing the capabilities of LangChain.js and Ollama.
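LangChain.js chains also support streaming, mirroring the "stream": true behavior from Part 1. Here's a minimal sketch using the same chain as above (stream() is part of the standard runnable interface):

```js
// Stream the response chunk by chunk instead of waiting for the full text.
const stream = await chain.stream({ subject: 'Butterflies' })
for await (const chunk of stream) {
  process.stdout.write(chunk)
}
```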
Conclusion and Next Steps
Congratulations! You've successfully built a simple application using LangChain.js and Ollama. Feel free to experiment further by modifying the prompt template and exploring the diverse capabilities of the language model. In the next part of this tutorial, we'll explore creating a chat template, initiating conversations with the LLM, and incorporating memory into the chain.
For more information and detailed documentation, visit the Ollama website and the LangChain.js GitHub repository. Happy coding!