Extremely Simple Explanation: How MCP Works
All you need to know about MCP
Everyone talks about AI agents these days.
But very few product teams talk about how these agents actually connect to company data.
To understand why the Model Context Protocol, or MCP, is a breakthrough, we first need to look at the main problem in AI development today.
The Integration Bottleneck
Right now, building an AI product requires writing custom code to connect to every single external tool.
If you want your AI to read a map, you write an API integration. If you want it to read your database, you write another one.
This limits how fast product teams can ship.
Developers spend all their time writing and maintaining this custom code instead of building core features.
Every time an external API changes, your team has to fix the connection.
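To make the bottleneck concrete, here is a minimal sketch of what bespoke integrations look like. Every name and endpoint below is illustrative, not a real API; the point is that each tool needs its own hand-written wrapper.

```python
import json
import urllib.request

# Custom wrapper #1: a hypothetical maps API with its own URL shape and auth.
# The endpoint below is a placeholder, not a real service.
def fetch_map_location(query: str) -> dict:
    url = f"https://maps.example.com/search?q={query}"
    with urllib.request.urlopen(url) as resp:  # a real call would also need an API key
        return json.load(resp)

# Custom wrapper #2: a different client library, a different error model.
def query_database(sql: str) -> list:
    ...  # connect, execute, map rows into the shape the AI expects

# Every new tool means another wrapper like these, and every upstream
# API change means coming back to maintain this file.
```

This is the custom glue code MCP is designed to replace with one standard interface.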
How MCP Changes the Equation
MCP solves this bottleneck.
It is a new open-source standard that unifies how AI agents connect to data sources.
It creates a standard interface between the AI application and external servers. The ecosystem relies on three main parts.
First is the host which is the application the user interacts with like a chat interface or a code assistant.
Second is the protocol itself which acts as the standard transport layer in the middle.
Third are the servers built by data providers like Google or Yahoo.
A single host can connect to many servers at the same time.
When developers build an MCP server, it exposes three main capabilities to the client.
First are tools.
These are executable functions like fetching a website or searching a map.
The server provides standard descriptions so the language model knows exactly how to use them.
Second are resources.
These are knowledge bases or plain files on a drive that the AI can read to pull context.
Third are prompts.
These are pre-written templates provided by the server developers, making it easier to interact optimally with specific APIs.
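The three capability types can be pictured as the structured listing a server hands back to the client. This plain-Python sketch mirrors the shape of such a listing; the tool, resource, and prompt names are illustrative, and the field names are simplified rather than the exact wire format.

```python
# A hypothetical capability listing an MCP server might expose.
server_capabilities = {
    # Tools: executable functions with standard descriptions and input schemas,
    # so the model knows exactly how to call them.
    "tools": [
        {
            "name": "search_map",
            "description": "Search a map for a named place.",
            "inputSchema": {
                "type": "object",
                "properties": {"query": {"type": "string"}},
                "required": ["query"],
            },
        }
    ],
    # Resources: knowledge bases or files the AI can read to pull context.
    "resources": [
        {
            "uri": "file:///docs/trail-guide.md",
            "description": "A knowledge-base file the AI can read for context.",
        }
    ],
    # Prompts: pre-written templates for interacting optimally with this server.
    "prompts": [
        {
            "name": "plan_trip",
            "description": "A template for trip-planning queries.",
        }
    ],
}

print(sorted(server_capabilities))  # → ['prompts', 'resources', 'tools']
```

The model reads these descriptions at connection time and uses them to decide how and when to call each capability.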
The End to End Workflow
So how does this actually work when a user makes a request?
The process follows a highly structured workflow. It starts with initialisation.
When the app opens, it connects to its configured servers and asks them to list their capabilities. The servers return detailed descriptions of everything they can do.
Then comes the user query.
Imagine you ask your AI assistant for details about a hiking trip.
The host application takes your question and bundles it with the descriptions of all available tools.
It sends this combined information to the language model. The model uses its intelligence to evaluate the request.
It reads the tool descriptions and determines which specific tool to call. It is smart enough to extract the required parameters.
It maps the location to the exact longitude and latitude required by the map tool. The host then calls the appropriate server to execute the operation. The server fetches the data from the API or database and returns the response in a standardised format.
Finally, the model reads this retrieved information and generates the comprehensive answer for the user.
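The workflow above can be sketched as a simple host loop. Every function here is a simplified stand-in I am assuming for illustration, not the actual protocol API: `choose_tool` plays the role of the language model's decision, and `call_server` plays the role of the MCP server executing the operation.

```python
def choose_tool(question: str, tools: tuple) -> tuple:
    """Stand-in for the language model: read the tool descriptions,
    pick a tool, and extract the required parameters from the question."""
    if "hiking" in question and "search_map" in tools:
        return "search_map", {"query": "hiking trails"}
    return None, {}

def call_server(tool: str, params: dict) -> dict:
    """Stand-in for the MCP server fetching data from an API or database
    and returning it in a standardised format."""
    if tool == "search_map":
        return {"results": [f"Trail near {params['query']}"]}
    raise KeyError(tool)

def answer(question: str, tools: tuple = ("search_map",)) -> str:
    # 1. The host bundles the question with the tool descriptions;
    #    the model evaluates the request and chooses a tool.
    tool, params = choose_tool(question, tools)
    # 2. The host calls the chosen server to execute the operation.
    data = call_server(tool, params) if tool else {}
    # 3. The model reads the retrieved data and generates the final answer.
    return f"Based on {data}, here is your plan." if data else "No tool needed."

print(answer("Plan my hiking trip"))
```

The key design point is the separation of roles: the model only decides, the host only routes, and the server only executes.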
How to Set Up an MCP Server?
Setting up an MCP server is straightforward for developers. You do not need to write complex API wrappers anymore.
If you are using Claude Desktop, you only need to edit one configuration file. On a Mac, you navigate to your Library folder → Application Support → Claude and open the claude_desktop_config.json file.
On Windows, you look in your AppData folder to find the same file.
You open this file and add a block of text specifying your mcpServers.
Inside this block you tell Claude the specific command to run your local server. You might point it to a local file system or a database connector.
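As an illustration, a minimal claude_desktop_config.json entry might look like the following. The server name, command, and directory path are placeholders for whatever connector you run locally; the filesystem server shown is one of the commonly published reference servers.

```json
{
  "mcpServers": {
    "filesystem": {
      "command": "npx",
      "args": [
        "-y",
        "@modelcontextprotocol/server-filesystem",
        "/Users/you/Documents"
      ]
    }
  }
}
```

Each key under mcpServers names one server, and the command tells Claude how to launch it as a local process.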
Once you save the file and restart Claude Desktop, it automatically connects. You will see a small plug icon showing the connection is live. If you use a code editor like Cursor, the process is even easier.
You just open Cursor settings and look for the MCP section. You click to add a new server and paste in the command. The editor handles everything else.
This simple setup is exactly why MCP is gaining traction so quickly.
What Are Some of the Challenges?
Implementing MCP is not going to be easy. There are some very real problems product managers have to solve.
Giving an AI standard access to databases means you need strict security and permissions in place. You have to ensure the AI cannot access sensitive user data it is not supposed to see.
The second challenge is reliability. Standardising responses from hundreds of different APIs requires robust error handling. If an external server fails, the AI needs to fail gracefully instead of breaking the user experience.
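Graceful degradation can be as simple as wrapping every server call, as in this sketch. The function names are illustrative assumptions, and the server stub here always fails so the fallback path is visible.

```python
def call_mcp_server(tool: str, params: dict) -> dict:
    """Stand-in for a real server call; this stub always fails,
    simulating an unreachable external server."""
    raise ConnectionError("external server unreachable")

def safe_tool_call(tool: str, params: dict) -> dict:
    """Wrap the call so a failing server degrades the answer,
    instead of breaking the user experience."""
    try:
        return {"ok": True, "data": call_mcp_server(tool, params)}
    except Exception as exc:
        # Return a structured error the model can explain to the user.
        return {"ok": False, "error": str(exc)}

result = safe_tool_call("search_map", {"query": "trails"})
print(result["ok"])  # → False
```

With a structured error in hand, the model can tell the user the tool was unavailable rather than surfacing a raw stack trace.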
MCP has actually lowered the barrier to entry for building complex AI products.
For product managers, lower effort means faster shipping. Faster shipping brings us back to solving core user problems instead of managing infrastructure.
If this framing changed how you think about AI product design, and if you want the full structured path — from AI foundations to RAG, Evals, AI strategy, and interview prep — built specifically for PMs, that is what our course covers.
Highest rated AI PM course · 4.9/5 · 500+ enrollments → See testimonials and course details 60% OFF for a limited time — Code: NYE26
About Author
Shailesh Sharma. I help PMs and business leaders excel in Product, Strategy, and AI using First Principles Thinking. For more, check out my AI Product Management Course, PM Interview Mastery Course, Cracking Strategy, and other Resources.