Building a CLI Client For Model Context Protocol Servers
Going beyond Claude Desktop
The Model Context Protocol (MCP) keeps gaining traction in the AI space, and since the launch of the Neon MCP Server (~2 weeks ago), the community has built dozens of these servers across a wide spectrum of domains. However, the Claude Desktop app has established itself as the default MCP Client, with most servers providing integration instructions exclusively for it.
But MCP is not coupled to Claude Desktop: it can be used with any LLM client that supports the protocol. With that in mind, we’ve decided to build an MCP CLI client that demonstrates this. As a bonus, it also makes testing MCP servers much faster.
How to build an MCP client
All MCP Clients are built on the same core principles and follow the same protocol. For tool usage (our use case), these are the main concepts that need to be implemented (a short sketch follows the list):
MCP Server Connection: The first step is to connect to the MCP Server, so that the client can discover and use the tools available on it.
Tool Listing: We need to fetch the available tools from the MCP Server. This lets the LLM know which tools it can use during the interaction.
Tool Usage: Once the LLM has decided which tool to use, we need to call its handler on the MCP Server.
LLM Integration: This is a multi-step process that connects the LLM to the available tools:
Send the initial prompt to the LLM
Wait for the LLM to respond with a tool use
Call the tool handler on the MCP Server
Inject the tool result into the LLM’s context
Send the next prompt to the LLM
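Here’s a minimal sketch of the first three concepts, assuming the official TypeScript MCP SDK (`@modelcontextprotocol/sdk`) and a server reachable over SSE. The client name, server URL argument, and the tool name and arguments are placeholders:

```typescript
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { SSEClientTransport } from "@modelcontextprotocol/sdk/client/sse.js";

// MCP Server Connection: identify ourselves and connect over SSE.
const client = new Client(
  { name: "mcp-cli-client", version: "1.0.0" },
  { capabilities: {} }
);
await client.connect(new SSEClientTransport(new URL(process.argv[2])));

// Tool Listing: fetch the tools the server exposes.
const { tools } = await client.listTools();

// Tool Usage: call a tool's handler on the server by name.
// "example_tool" and its arguments are placeholders.
const result = await client.callTool({
  name: "example_tool",
  arguments: { some_arg: "value" },
});
```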
And since we are using tool use via the Anthropic API, it’s much simpler to rely on their official SDK.
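As a sketch of what that looks like: MCP tool definitions map almost one-to-one onto the tool format expected by Anthropic’s Messages API (MCP’s `inputSchema` becomes `input_schema`). The model name below is simply one that is current at the time of writing:

```typescript
import Anthropic from "@anthropic-ai/sdk";

const anthropic = new Anthropic(); // reads ANTHROPIC_API_KEY from the environment

// Convert the MCP tool list (from `listTools()` above) into Anthropic's format.
const anthropicTools = tools.map((tool) => ({
  name: tool.name,
  description: tool.description,
  input_schema: tool.inputSchema,
}));

const response = await anthropic.messages.create({
  model: "claude-3-5-sonnet-20241022",
  max_tokens: 1024,
  messages: [{ role: "user", content: "What can you do?" }],
  tools: anthropicTools,
});
```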
Building the CLI Client
Once we have all the core pieces in place, all we need to do is build a cool CLI client to interact with the MCP Server.
1. LLM Handling – Handle the LLM messages and tool usage.
It’s important that we persist the messages across interactions, so that we can inject each tool result into the LLM’s context.
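A sketch of this step, building on the `client`, `anthropic`, and `anthropicTools` from the snippets above. Note that it follows up on at most one tool call, which is exactly the caveat discussed later:

```typescript
// Persist messages across turns so tool results stay in the LLM's context.
const messages: Anthropic.MessageParam[] = [];

async function processQuery(query: string): Promise<string> {
  messages.push({ role: "user", content: query });

  let response = await anthropic.messages.create({
    model: "claude-3-5-sonnet-20241022",
    max_tokens: 1024,
    messages,
    tools: anthropicTools,
  });
  messages.push({ role: "assistant", content: response.content });

  // If the model requested a tool, call its handler on the MCP Server and
  // inject the result back into the conversation as a tool_result block.
  const toolUse = response.content.find((block) => block.type === "tool_use");
  if (toolUse?.type === "tool_use") {
    const result = await client.callTool({
      name: toolUse.name,
      arguments: toolUse.input as Record<string, unknown>,
    });
    messages.push({
      role: "user",
      content: [
        {
          type: "tool_result",
          tool_use_id: toolUse.id,
          content: JSON.stringify(result.content),
        },
      ],
    });
    // Send the next request so the model can use the tool result.
    response = await anthropic.messages.create({
      model: "claude-3-5-sonnet-20241022",
      max_tokens: 1024,
      messages,
      tools: anthropicTools,
    });
    messages.push({ role: "assistant", content: response.content });
  }

  return response.content
    .filter((block) => block.type === "text")
    .map((block) => block.text)
    .join("\n");
}
```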
2. Chat Loop – Create a chat loop that sends messages to the LLM and handles the responses.
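For example, a minimal loop using Node’s built-in readline (the `quit` command is just our own convention):

```typescript
import readline from "node:readline/promises";

async function chatLoop() {
  const rl = readline.createInterface({
    input: process.stdin,
    output: process.stdout,
  });
  console.log("MCP client started. Type your queries, or 'quit' to exit.");

  try {
    while (true) {
      const query = (await rl.question("> ")).trim();
      if (query.toLowerCase() === "quit") break;
      console.log(await processQuery(query));
    }
  } finally {
    rl.close();
  }
}
```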
3. Entry Point – Set up a main entry point for the client that will initialize the MCP Client, fetch the tools, and start the chat loop.
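Tying it together, a sketch of the entry point that reorganizes the earlier top-level snippets into a function (the CLI takes the server URL as its first argument; the usage string is just an example):

```typescript
async function main() {
  const serverUrl = process.argv[2];
  if (!serverUrl) {
    console.error("Usage: node client.js <mcp-server-url>");
    process.exit(1);
  }

  // Initialize the MCP Client and fetch the tools (as sketched earlier),
  // then hand control to the chat loop.
  await client.connect(new SSEClientTransport(new URL(serverUrl)));
  const { tools } = await client.listTools();
  console.log(`Connected. ${tools.length} tool(s) available.`);

  try {
    await chatLoop();
  } finally {
    await client.close();
  }
}

main();
```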
4. Run – Start the client.
Now that we have built an all-purpose MCP Client, we can run it by passing the MCP Server URL and whatever other arguments it needs.
Improvements
There are two main caveats with this simple implementation:
Streaming: This client doesn’t support streaming, so responses will feel slower from the user’s perspective.
Multiple Tool Calls: This client doesn’t follow up on multiple tool calls; it always stops after the first one.
Luckily, both of these issues have been solved in the MCP Client CLI that we built at Neon.
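For reference, one way to address the second caveat (not necessarily how the Neon CLI does it) is to keep looping while the model’s `stop_reason` is `tool_use`; a rough sketch, reusing the names from the earlier snippets:

```typescript
// Keep satisfying tool calls until the model stops asking for tools.
let response = await anthropic.messages.create({
  model: "claude-3-5-sonnet-20241022",
  max_tokens: 1024,
  messages,
  tools: anthropicTools,
});

while (response.stop_reason === "tool_use") {
  messages.push({ role: "assistant", content: response.content });

  // Collect every tool_result for this turn into a single user message.
  const toolResults: Anthropic.ToolResultBlockParam[] = [];
  for (const block of response.content) {
    if (block.type === "tool_use") {
      const result = await client.callTool({
        name: block.name,
        arguments: block.input as Record<string, unknown>,
      });
      toolResults.push({
        type: "tool_result",
        tool_use_id: block.id,
        content: JSON.stringify(result.content),
      });
    }
  }
  messages.push({ role: "user", content: toolResults });

  response = await anthropic.messages.create({
    model: "claude-3-5-sonnet-20241022",
    max_tokens: 1024,
    messages,
    tools: anthropicTools,
  });
}
```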
Try it
Use this tool with any MCP Server to see how it works, or use it as a base to build your own MCP Client. You can check out our GitHub repository and share your feedback on our Discord server!
Neon is a serverless Postgres platform that helps teams ship faster via instant provisioning, autoscaling, and database branching. We have a Free Plan – you can get started without a credit card.