# @dakera-ai/langchain

Persistent semantic memory and server-side vector search for LangChain.js. Full TypeScript types — no local embedding model required.

| Class | Implements |
| --- | --- |
| `DakeraMemory` | `BaseMemory` |
| `DakeraVectorStore` | `VectorStore` |
## Run the Dakera server

```bash
docker run -d \
  --name dakera \
  -p 3300:3300 \
  -e DAKERA_ROOT_API_KEY=dk-mykey \
  ghcr.io/dakera-ai/dakera:latest

curl http://localhost:3300/health
# → {"status":"ok"}
```
## Install

```bash
npm install @dakera-ai/langchain @dakera-ai/dakera @langchain/core
```

Requirements: Node.js ≥ 20 and a running Dakera server.
## Quick start

```ts
import { DakeraMemory } from "@dakera-ai/langchain";
import { ConversationChain } from "langchain/chains";
import { ChatOpenAI } from "@langchain/openai";

const memory = new DakeraMemory({
  apiUrl: "http://localhost:3300",
  apiKey: "dk-mykey",
  agentId: "my-agent",
});

const chain = new ConversationChain({
  llm: new ChatOpenAI({ model: "gpt-4o" }),
  memory,
});

// Memory persists across sessions and restarts
const response = await chain.call({ input: "My project is called NeuralBridge." });
console.log(response.response);
```
## DakeraMemory

Persistent conversation memory for LangChain.js chains. Stores and recalls conversation history using Dakera's hybrid search (BM25 + vector).
```ts
import { DakeraMemory } from "@dakera-ai/langchain";
import { ConversationChain } from "langchain/chains";
import { ChatOpenAI } from "@langchain/openai";

const memory = new DakeraMemory({
  apiUrl: "http://localhost:3300",
  apiKey: process.env.DAKERA_API_KEY!,
  agentId: "my-agent",
  recallK: 5,      // how many past memories to surface per turn
  importance: 0.7, // importance score for stored memories
});

const chain = new ConversationChain({
  llm: new ChatOpenAI({ model: "gpt-4o" }),
  memory,
});

// First session
await chain.call({ input: "My name is Alice and I'm building a chatbot." });

// Later session — memory persists across restarts
const { response } = await chain.call({ input: "What was I building?" });
console.log(response); // "You mentioned you were building a chatbot."
```
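Because `DakeraMemory` implements LangChain's `BaseMemory` interface, you can also drive it directly, outside a chain, via the standard `loadMemoryVariables`/`saveContext` methods. A minimal sketch continuing the example above (the exact shape of the returned memory variables is an assumption):

```ts
// Recall memories relevant to a new input (standard BaseMemory API)
const vars = await memory.loadMemoryVariables({ input: "What was I building?" });
console.log(vars); // assumption: { history: "..." } containing recalled context

// Persist a new exchange explicitly
await memory.saveContext(
  { input: "I renamed the project to NeuralBridge." },
  { output: "Got it, the project is now NeuralBridge." },
);
```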
### Options

| Option | Default | Description |
| --- | --- | --- |
| `apiUrl` | `http://localhost:3300` | Base URL of the Dakera server |
| `apiKey` | `""` | API key for the Dakera server |
| `agentId` | (none) | ID of the agent whose memories are stored and recalled |
| `recallK` | `5` | How many past memories to surface per turn |
| `importance` | `0.7` | Importance score assigned to stored memories |
| `minImportance` | `0.0` | Minimum importance a memory needs to be recalled |
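For reference, a configuration that sets every option might look like the sketch below. The agent ID is a placeholder, and the recall-threshold behavior of `minImportance` described in the comment is an assumption based on the option names.

```ts
import { DakeraMemory } from "@dakera-ai/langchain";

const memory = new DakeraMemory({
  apiUrl: "http://localhost:3300",
  apiKey: process.env.DAKERA_API_KEY!,
  agentId: "support-agent", // placeholder agent ID
  recallK: 8,               // surface up to 8 past memories per turn
  importance: 0.9,          // store new memories with high importance
  minImportance: 0.5,       // assumption: recall only memories scored >= 0.5
});
```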
## DakeraVectorStore

Server-side embedded vector store for RAG. Compatible with `VectorStore` from `@langchain/core`. Dakera handles all embeddings — no OpenAI embeddings API needed.
```ts
import { DakeraVectorStore } from "@dakera-ai/langchain";

const vectorStore = new DakeraVectorStore({
  apiUrl: "http://localhost:3300",
  apiKey: process.env.DAKERA_API_KEY!,
  namespace: "my-docs",
});

// Index documents (server handles embedding)
await vectorStore.addDocuments([
  { pageContent: "Dakera is a self-hosted memory server.", metadata: {} },
  { pageContent: "It scores 87.6% on the LoCoMo benchmark.", metadata: {} },
]);

// Similarity search
const results = await vectorStore.similaritySearch("benchmark score", 3);
console.log(results);
```
Use the store as a retriever in a RAG chain:

```ts
import { DakeraVectorStore } from "@dakera-ai/langchain";
import { RetrievalQAChain } from "langchain/chains";
import { ChatOpenAI } from "@langchain/openai";

const vectorStore = new DakeraVectorStore({
  apiUrl: "http://localhost:3300",
  apiKey: process.env.DAKERA_API_KEY!,
  namespace: "product-docs",
});

const chain = RetrievalQAChain.fromLLM(
  new ChatOpenAI({ model: "gpt-4o" }),
  vectorStore.asRetriever({ k: 4 }),
);

const { text } = await chain.call({ query: "How does memory decay work?" });
console.log(text);
```
### Options

| Option | Description |
| --- | --- |
| `namespace` | Namespace that isolates this store's documents on the server |
| `embeddingModel` | Server-side embedding model used to index and query documents |

`apiUrl` and `apiKey` behave as in `DakeraMemory`.
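If you need to pin the server-side model, a sketch follows. The model name below is hypothetical; check which models your Dakera server actually provides.

```ts
import { DakeraVectorStore } from "@dakera-ai/langchain";

const vectorStore = new DakeraVectorStore({
  apiUrl: "http://localhost:3300",
  apiKey: process.env.DAKERA_API_KEY!,
  namespace: "support-kb",
  embeddingModel: "bge-small-en", // hypothetical model name; the server must support it
});
```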
## See also

- Python LangChain integration
- Long-term memory for crews
- Memory store and vector index
- Memory for multi-agent teams