In the rapidly evolving world of AI tooling and agentic workflows, one protocol is reshaping how developers build, scale, and share AI-native applications: the Model Context Protocol (MCP). If you’ve been building AI agents, you know the pain of integration hell: every new tool means custom code, brittle connections, and a monolithic architecture that becomes harder to maintain with each addition.
MCP promises to change all that. Think of it as the USB-C of AI development, finally solving the integration nightmare that has plagued agentic AI systems. At Neo4j, we see MCP not just as a standard but as a catalyst that is bridging agents, APIs, memory and structured data in ways that streamline development and unlock true composability.
What Is MCP and Why Should Developers Care?
At its core, MCP is an open protocol that standardises how large language models (LLMs) access context. It provides a clean, modular architecture for connecting LLMs to tools, APIs, databases, and memory, removing the friction of bespoke integrations.
Building AI agents has always been about more than just the language model. Any business system needs to integrate with internal and external tools, but traditional approaches create a web of one-to-one integrations that quickly become unmanageable. Picture this: you’re building an AI assistant that needs to access your database, call external APIs, manage infrastructure and, on top of it all, maintain conversation memory.
Without MCP, each integration requires custom code, service-specific authentication handling and ongoing maintenance as services evolve. The result? Poor tool reusability, offerings that can’t scale and, ultimately, lower-quality agents for everyone.
MCP has seen rapid adoption across major cloud providers and tooling ecosystems. Since its launch in November 2024, major players like OpenAI, Google, AWS and Microsoft have adopted the protocol. From IDEs like VS Code, Cursor and JetBrains to AI agent frameworks like LangGraph, CrewAI, AWS’s Strands Agents, and Google’s Agent Development Kit (ADK), MCP is set to become a foundational layer for AI-native workflows.
The Architecture: Clients, Servers, and Tools
MCP follows a clear client-server pattern that replaces brittle, monolithic integrations:
- MCP Server: Hosts tools (functions or APIs), resources (structured data), and prompts (instruction templates). Think of it as a hub that exposes modular capabilities, connecting to your data sources and APIs.
- MCP Host: Runs the agent and model (like Claude Desktop, VS Code, or your custom application). It spawns an MCP client for each server, forming a composable runtime with your language model.
- Agent Loop: The model queries the server to determine which tools are available, then plans actions, uses tools and reasons over memory. The LLM selects from the tools, resources, and prompts the MCP servers expose to fulfil whatever task the user poses.
This modular architecture enables secure, scalable and reusable tooling. It’s composable, facilitates rapid developer uptake, and lets users pick the best client, LLM and MCP servers for their needs, whether they’re building a chatbot, coding assistant, or autonomous agent.
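The client-server exchange above can be sketched at the wire level: under the MCP specification, tool discovery and invocation are JSON-RPC 2.0 calls. The method names follow the spec; the tool being called here is just an illustrative example.

```python
# Sketch of the JSON-RPC 2.0 messages exchanged in the agent loop:
# the client first lists available tools, then invokes one.
import json

# Step 1: the client asks the server which tools it exposes.
list_request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/list",
}

# Step 2: the model picks a tool and the client invokes it.
call_request = {
    "jsonrpc": "2.0",
    "id": 2,
    "method": "tools/call",
    "params": {
        "name": "get_neo4j_schema",  # example tool name
        "arguments": {},
    },
}

# Messages travel as JSON over STDIO (local) or HTTP (remote servers).
wire = json.dumps(call_request)
print(wire)
```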
Neo4j + MCP: Bringing GraphRAG and Memory to AI Agents
One of the most powerful applications of MCP is agentic memory with knowledge graphs. Traditional AI assistants lose context between sessions, but what if your AI could remember not just facts, but the relationships between them? That’s where knowledge graphs shine, and MCP makes this integration effortless.
Rather than storing flat text blobs or unstructured histories, Neo4j enables persistent, explainable memory as a knowledge graph. As you interact with an agent, as an agent fulfils tasks, and as you learn new things in a conversation, you want to maintain these memories over multiple conversations in a way that’s well-structured, explainable, and available.
Memory is just one example of how data structured as a knowledge graph can support AI applications. With GraphRAG, you make connected, relevant context for domains like transport, biotech, retail and cybersecurity available through tools.
Meet the Neo4j MCP Servers
This is how you build persistent agent memory with knowledge graphs using two MCP servers working in harmony:
- MCP Neo4j Memory Server: Stores information from conversations as a knowledge graph: concrete memory entities with observations, connected by relevant relationships, extracting structured knowledge from conversations or web content
- MCP Neo4j Cypher Server: Provides generic database access for schema retrieval and Cypher query execution to further analyse and enhance the stored graph data.
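As a sketch of a local setup, both servers can be launched via uvx against the same Neo4j instance. The URI and credentials below are placeholders; in practice the MCP host usually spawns these processes for you based on its configuration.

```shell
# Point both servers at the same Neo4j instance (placeholder values).
export NEO4J_URI="neo4j+s://<your-instance>.databases.neo4j.io"
export NEO4J_USERNAME="neo4j"
export NEO4J_PASSWORD="<password>"

# Each server runs as a separate STDIO process:
uvx mcp-neo4j-memory    # knowledge-graph memory entities
uvx mcp-neo4j-cypher    # schema retrieval and Cypher execution
```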
The magic happens when multiple team members contribute to a shared knowledge base. As one developer extracts information from Anthropic’s Claude 4 announcement, storing entities like “Claude Sonnet 4,” “Anthropic,” and “GitHub Copilot” as graph nodes with meaningful relationships, a second team member adds information about Microsoft Build announcements.
The system doesn’t just store isolated facts, it connects the dots, linking Microsoft’s MCP steering committee participation to Anthropic’s original announcement, creating a web of AI industry insights that grows smarter with each contribution. This isn’t just a logging tool; it’s collaborative knowledge engineering.
Persistent, Shared Intelligence
Using a shared Neo4j AuraDB database, team members can collectively build an ever-growing knowledge base. Unlike ephemeral in-session memory in tools like ChatGPT, graph memory sticks and grows. Developers can recall facts across sessions and share memory across individuals or teams via a shared graph.
The result? An AI assistant that could generate a comprehensive news feed from this connected knowledge, demonstrating how MCP enables not just data retrieval, but intelligent synthesis of information across sources and conversations.
From Memory to Action: Integrating Neo4j with Agent SDKs
MCP’s real power emerges when integrated with agent development frameworks. Google’s Agent Development Kit (ADK) lets you connect MCP tools seamlessly: a simple 20-line agent gains access to Neo4j’s graph database, showcasing how the protocol reduces friction in agent development:
import os

from google.adk.agents import Agent
from google.adk.tools.mcp_tool.mcp_toolset import MCPToolset, StdioServerParameters

root_agent = Agent(
    name="memory_analytics_agent",
    model="gemini-2.5-flash",
    description="""
        Agent that accesses a database of conversational and fact memories,
        stored as memory entities with observations and relationships in a
        knowledge graph, and analyses those memories with plain Neo4j Cypher
        to surface insights.
    """,
    instruction="""
        You are an assistant with access to a graph database of connected
        memory entities. Generate and execute analytical Cypher queries
        based on the schema to read memory entities and their relationships,
        and generate sensible summaries when asked.
    """,
    tools=[MCPToolset(
        # Spawn the Neo4j Cypher MCP server as a local STDIO subprocess
        connection_params=StdioServerParameters(
            command="uvx",
            args=["mcp-neo4j-cypher"],
            # Pass through only the credentials the server needs
            env={k: os.environ[k] for k in
                 ["NEO4J_URI", "NEO4J_USERNAME", "NEO4J_PASSWORD"]},
        ),
        # Restrict the agent to read-only tools
        tool_filter=["get_neo4j_schema", "read_neo4j_cypher"],
    )],
)
This highlights a key advantage of MCP: minimal setup, maximum utility. With local or hosted servers, developers can expose existing APIs and data stores to their agents using FastMCP, a lightweight Python SDK for MCP server development (with counterparts in TypeScript, Go, and Java). This pattern extends across the ecosystem, with LangGraph, CrewAI, and other frameworks integrating MCP support, allowing developers to mix and match tools from different providers without vendor lock-in.
MCP in Production: Beyond the Hype
Despite its rapid rise and tens of thousands of MCP servers being built, MCP is still early and key challenges remain for production deployments:
- Security remains the biggest challenge. Access control, OAuth flows, credential management, and safeguarding against LLM injection vulnerabilities are still evolving. The June 2025 specification update extended OAuth2 support while separating concerns between the identity provider and the resource server, but developers must still implement proper access controls. More generally, AI applications need a clearer separation between data and executable instructions to prevent injection risks.
- Discovery and registries need maturation. Discovering and trusting third-party MCP tools needs better package management (recently started with DXT packages for Claude Desktop), versioning, and quality control. With thousands of available MCP servers, choosing trustworthy implementations becomes critical. Stick to servers from trusted vendors, open-source projects you’ve audited, or build your own.
- Infrastructure architecture is shifting. Most MCP servers today run locally via STDIO, but MCP clients often maintain stateful connections. Future architectures may shift toward stateless, scalable designs as the ecosystem moves to hosted HTTP services.
- Quality control is crucial. Few of the existing MCP servers are production-grade, and proper monitoring, request limiting, and scalable deployments are essential for public services.
Developer Experience: From Prototype to Production
Building MCP servers is surprisingly straightforward with modern SDKs. Testing is equally developer-friendly: Anthropic’s MCP Inspector provides a web interface for connecting to and testing servers locally, validating against the protocol specification and reporting violations.
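For instance, the Inspector can be pointed at a local STDIO server with a single command (assuming Node.js and uv are installed; the server shown here is Neo4j's Cypher server):

```shell
# Launch the MCP Inspector web UI against a local STDIO server.
# Requires Node.js (for npx) and uv (for uvx).
npx @modelcontextprotocol/inspector uvx mcp-neo4j-cypher
```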
The roadmap for MCP addresses current limitations with ambitious goals:
- Remote connectivity with proper OAuth 2.0 security and service discovery
- Developer resources including reference implementations and streamlined feature proposals
- Deployment infrastructure with standardised packaging, server registries, and sandboxing
- Agent capabilities supporting hierarchical systems, interactive and long-running workflows, and real-time streaming
- Ecosystem expansion pursuing formal standardisation across AI providers
Why This Matters: The Mental Model Shift
MCP isn’t just a protocol; it represents a fundamental mental model shift. As we move from prompt hacking to system engineering, the ability to treat tools, memory, and data as modular, composable components becomes essential.
It is the foundation for a new generation of AI applications. With knowledge graphs providing the intelligent memory and data layers, the future of AI agents looks more connected and capable than ever.
With Neo4j, developers can now give agents long-term, structured memory and empower them to reason over complex, connected business knowledge graphs. Whether you’re building assistants, copilots, or autonomous agents, graphs plus MCP open new possibilities for intelligent, contextual systems that can seamlessly integrate with the complex, interconnected systems that power modern businesses.
Getting Started Today
Ready to experiment with MCP and Neo4j? Start with Neo4j’s open-source MCP servers on GitHub. The memory server demonstrates persistent agent knowledge, while the Cypher server provides flexible graph database access. Other servers, like the modelling and infrastructure ones, offer additional capabilities.
For broader MCP exploration, check out the thousands of available servers on registries like Smithery, Pipedream, mcp.so, or the official MCP servers repository. Remember to prioritise security and stick to trusted sources as you build your agent ecosystem.
Neo4j continues innovating in this space, building sophisticated MCP servers that demonstrate the protocol’s potential for knowledge graph integration. As the ecosystem matures, we’re likely to see MCP become the standard protocol for AI agent tool integration.
Here are some resources to dive deeper:
- Model Context Protocol (MCP) Integrations for Neo4j
- Neo4j MCP Servers
- GraphAcademy: GraphRAG Courses
- GraphAcademy: MCP Course
- Google MCP Toolbox
Michael Hunger, Head of Product Innovation & Developer Strategy, Neo4j
Michael Hunger has been passionate about software development for over 30 years. For the past 12 years, he’s played a pivotal role at Neo4j, the open source graph database, where he has led product innovation and developer relations. As a key steward of the Neo4j community and ecosystem, he thrives on collaborating with users, contributors, and graph-driven projects.
A lifelong learner and active developer, Michael enjoys exploring programming languages, contributing to open source, and writing books and articles on software. He’s a seasoned speaker at global tech conferences and an organiser behind several of them, achievements that earned him a place in the Java Champions program.