# crewai-dakera

Persistent, semantically recalled memory for CrewAI agents. Your crews remember everything across sessions and restarts. Dakera handles embedding, storage, and retrieval server-side.
Dakera is a self-hosted memory server. Spin it up with Docker:
```bash
docker run -d \
  --name dakera \
  -p 3300:3300 \
  -e DAKERA_ROOT_API_KEY=dk-mykey \
  ghcr.io/dakera-ai/dakera:latest

# Verify it's running
curl http://localhost:3300/health
```
```bash
# Core + integration
pip install crewai-dakera

# With CrewAI (if not already installed)
pip install "crewai-dakera[crewai]"
```
Requirements: Python ≥ 3.10 and a running Dakera server.
```python
from crewai import Crew, Agent, Task
from crewai.memory import LongTermMemory
from crewai_dakera import DakeraStorage

storage = DakeraStorage(
    api_url="http://localhost:3300",
    api_key="dk-mykey",
    agent_id="my-crew",
)

crew = Crew(
    agents=[...],
    tasks=[...],
    memory=True,
    long_term_memory=LongTermMemory(storage=storage),
)

result = crew.kickoff(inputs={"topic": "AI trends"})
```
Your crew now persists everything it learns across runs.
| Parameter | Default | Description |
|---|---|---|
| `api_url` | `http://localhost:3300` | URL of the Dakera server |
| `api_key` | `""` | API key; must match the server's `DAKERA_ROOT_API_KEY` |
| `agent_id` | (required) | Identifier that namespaces this crew's memories |
| `min_importance` | `0.0` | Minimum importance score a memory needs to be recalled |
| `top_k` | `5` | Maximum number of memories returned per search |
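The two tuning parameters interact: `min_importance` gates which memories are eligible, and `top_k` caps how many survive. A minimal sketch of that interaction in plain Python (illustrative only; the real filtering happens server-side, and the scores here are made up):

```python
# Illustrative only: how min_importance and top_k plausibly combine
# when filtering retrieval results. Not the server's actual code.

def filter_memories(scored, min_importance=0.0, top_k=5):
    """scored: list of (memory_text, importance) pairs."""
    # Drop memories below the importance threshold...
    kept = [m for m in scored if m[1] >= min_importance]
    # ...then keep only the top_k highest-scoring survivors.
    kept.sort(key=lambda m: m[1], reverse=True)
    return kept[:top_k]

memories = [
    ("quantum error correction milestone", 0.9),
    ("lunch preferences of the crew", 0.1),
    ("new qubit fabrication process", 0.7),
]

# With the defaults, all three memories fit under top_k=5
print(filter_memories(memories))
# Raising min_importance drops the low-value memory
print(filter_memories(memories, min_importance=0.5))
```

Raising `min_importance` trades recall breadth for relevance; raising `top_k` does the opposite.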
```python
import os

from crewai_dakera import DakeraStorage

storage = DakeraStorage(
    api_url=os.environ["DAKERA_URL"],
    api_key=os.environ["DAKERA_API_KEY"],
    agent_id="research-crew",
)
```
```python
from crewai import Agent, Task, Crew, Process
from crewai.memory import LongTermMemory
from crewai_dakera import DakeraStorage

dakera = DakeraStorage(
    api_url="http://localhost:3300",
    api_key="dk-mykey",
    agent_id="research-crew",
)

researcher = Agent(
    role="Senior Researcher",
    goal="Uncover groundbreaking insights in {topic}",
    backstory="An expert researcher with decades of experience.",
    verbose=True,
)

writer = Agent(
    role="Content Writer",
    goal="Craft compelling reports based on research findings",
    backstory="A skilled writer who turns complex ideas into clear prose.",
    verbose=True,
)

research_task = Task(
    description="Research the latest developments in {topic}",
    expected_output="A detailed research report",
    agent=researcher,
)

write_task = Task(
    description="Write a blog post based on the research",
    expected_output="A polished 500-word article",
    agent=writer,
)

crew = Crew(
    agents=[researcher, writer],
    tasks=[research_task, write_task],
    process=Process.sequential,
    memory=True,
    long_term_memory=LongTermMemory(storage=dakera),
    verbose=True,
)

# First run — learns and stores findings
result = crew.kickoff(inputs={"topic": "quantum computing"})
print(result.raw)

# Second run — recalls prior research automatically
result = crew.kickoff(inputs={"topic": "quantum computing advances"})
print(result.raw)
```
- `DakeraStorage.save()` — sends new memories to the Dakera server, which handles embedding and persistence.
- `DakeraStorage.search()` — retrieves semantically relevant memories for the current task.
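Retrieval is semantic rather than keyword-based: the server embeds the query and ranks stored memories by vector similarity. A toy sketch of that ranking using hand-made 3-d vectors (the real embeddings come from Dakera's server-side model; nothing here is the library's actual code):

```python
import math

# Toy semantic search: rank stored "memories" by cosine similarity
# to a query vector. The tiny hand-made vectors stand in for real
# embeddings, which Dakera computes server-side.

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

store = {
    "qubit coherence results": [0.9, 0.1, 0.0],
    "marketing copy draft": [0.0, 0.2, 0.9],
    "error-correction benchmark": [0.8, 0.3, 0.1],
}

def toy_search(query_vec, top_k=5):
    ranked = sorted(store, key=lambda k: cosine(store[k], query_vec), reverse=True)
    return ranked[:top_k]

# A research-flavored query vector ranks the research memories first
print(toy_search([1.0, 0.2, 0.0], top_k=2))
```

This is why a second `kickoff` with a differently worded topic still surfaces the first run's findings: matching happens in embedding space, not on literal strings.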
- Memory and vector store for chains
- Memory store and vector index
- Memory for multi-agent teams
- TypeScript integration