
LangChain Team on MCP: A New Direction for AI Agent Tool Extension?

Recently, the Model Context Protocol (MCP) has generated a lot of interest in the AI space. The technology aims to solve a central problem: how do you let users extend the tooling capabilities of an underlying Agent that they do not control? LangChain CEO Harrison Chase and LangGraph lead Nuno Campos discussed the practical utility of MCP in depth.



 

The Core Value of MCP: Extending Tools for Uncontrollable Agents

Harrison Chase sees the value of MCP in its ability to give users a way to add tools to Agents they don't directly control. Citing Claude Desktop, Cursor, and Windsurf as examples, he pointed out that users cannot modify the underlying Agent logic in these products, and the tools available to the Agent are limited to a few built-in ones.

However, users may want to add tools to these Agents to meet more individual needs — for example, integrating a specific code analysis tool into the code editor Cursor, or adding a custom knowledge-base access tool to Claude Desktop. Doing so requires a common protocol that lets the Agent recognize and invoke external tools, and MCP was created to solve exactly this problem.
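To make the "common protocol" idea concrete: MCP is built on JSON-RPC, where the host Agent first asks a server what tools it offers (`tools/list`) and then invokes one by name (`tools/call`). The sketch below is a simplified, transport-free illustration of that request/response shape — the `word_count` tool and the dispatcher are hypothetical, not part of any official SDK.

```python
# Hypothetical tool registry for illustration: one "word_count" tool,
# advertised with a description and a JSON Schema for its arguments.
TOOLS = [
    {
        "name": "word_count",
        "description": "Count the words in a piece of text.",
        "inputSchema": {
            "type": "object",
            "properties": {"text": {"type": "string"}},
            "required": ["text"],
        },
    }
]

def handle_request(request: dict) -> dict:
    """Dispatch an MCP-style JSON-RPC request (transport layer omitted)."""
    if request["method"] == "tools/list":
        result = {"tools": TOOLS}
    elif request["method"] == "tools/call":
        name = request["params"]["name"]
        args = request["params"]["arguments"]
        if name != "word_count":
            return {"jsonrpc": "2.0", "id": request["id"],
                    "error": {"code": -32602, "message": f"unknown tool {name}"}}
        result = {"content": [{"type": "text",
                               "text": str(len(args["text"].split()))}]}
    else:
        return {"jsonrpc": "2.0", "id": request["id"],
                "error": {"code": -32601, "message": "method not found"}}
    return {"jsonrpc": "2.0", "id": request["id"], "result": result}

# The host Agent discovers the available tools, then invokes one:
listing = handle_request({"jsonrpc": "2.0", "id": 1, "method": "tools/list"})
call = handle_request({"jsonrpc": "2.0", "id": 2, "method": "tools/call",
                       "params": {"name": "word_count",
                                  "arguments": {"text": "hello MCP world"}}})
```

Because discovery and invocation go through the same generic protocol, the host application never needs to know about `word_count` at build time — which is precisely the property Harrison values.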

Harrison further noted that MCP is also important for non-developers building Agents. As Agent building becomes more popular, more and more domain experts want to be involved in the Agent creation process. These experts may not have deep programming skills, but they do have extensive domain knowledge and specific tooling needs, and MCP lowers the barriers to Agent construction by allowing them to configure the tools they need without having to modify the Agent's core logic.

Harrison recognized the potential value of the MCP in filling an important gap in the existing Agent tool ecosystem. In the rapidly evolving world of Agent technology, there is a growing demand for personalized Agent functionality. If MCP can effectively reduce the complexity of tool integration, it will undoubtedly accelerate the popularization of Agent technology and give rise to more innovative application scenarios. Especially for non-developers, a more friendly way to extend tools will greatly release their creativity and promote the democratization of AI applications.

 

Utility Challenge: Agent Customization and Tool Integration

Nuno Campos questioned the usefulness of MCP. He argues that an Agent's design needs to be closely integrated with the tools it uses: simply adding tools to an Agent without adjusting its system prompts, or even its architecture, rarely achieves the desired results.

Nuno admits that MCP might work if users simply want to replace the Web search tools built into apps like Windsurf. But that's not the most valuable use case for MCP, he argues. The really compelling use case would be for users to inject a "magic tool" that would give app developers like Cursor new capabilities that they hadn't even envisioned. In practice, however, this is very unlikely to happen.

Nuno emphasized that in most production environments, in order to ensure that an Agent is able to effectively utilize the tools, the Agent's system messages, and even the overall architecture, must be finely tuned to match the available toolset.

Nuno's view is more technically pragmatic. He points out that tool integration is not simply "plug-and-play", and that the performance of an Agent depends largely on how well it works with the tool. This, in fact, points to a common challenge in the development of current AI agent technology: how to strike a balance between high flexibility in tool scaling and optimizing agent performance. Nuno's concern is not an idle one, as many developers have experienced the importance of prompt engineering when working with large language models, and the profound impact that system architecture can have on the final result.

 

Trade-offs between reliability and user expectations

Harrison concedes that an Agent built on MCP-integrated tools may not reach 99% reliability. However, he believes that even with somewhat lower reliability, it can still have practical value. He points out that while tool descriptions and instructions are important, the following points should not be overlooked:

  1. MCP contains tool definitions, and good MCP servers can provide better tool descriptions than users can write themselves.
  2. The MCP allows for the inclusion of prompts that can be used by the user to instruct the Agent on how to use the tool.
  3. As the capabilities of the underlying models continue to improve, the out-of-the-box performance of tool-invoking Agents will get better and better.

Harrison believes that while you can't build a product as complete as Cursor with MCP integrations and generic tool call agents alone, MCP can still be valuable in certain scenarios, such as building internal or personal agents.

In response to Nuno's skepticism, Harrison was more optimistic. He recognizes that MCP may not be perfect in all scenarios, but he emphasizes a pragmatic "good enough" principle: in the early stages of a technology, striving for perfection often limits innovation. Harrison's view is consistent with iterative development — release a usable version quickly, then improve it in practice. His confidence in model improvement also reflects a general consensus in the AI field: continued gains in model capability will keep expanding the boundaries of Agent applications.

 

Synchronization of model capabilities with user expectations

Nuno countered that LangGraph's tool invocation benchmarks show that even with an Agent whose architecture and prompts are tailored to a specific toolset, the current model only has a success rate of about 50% when invoking the correct tool. A personal Agent that doesn't work correctly half the time is of dubious utility.

Nuno recognizes that modeling capabilities will continue to increase, but so will user expectations. He quotes Jeff Bezos: "Customers are always dissatisfied with the status quo, and their expectations are never-ending." If developers have mastered the entire tech stack, including UI, hints, architecture, and tools, they may be able to meet users' growing expectations. Otherwise, the future looks bleak.

Nuno went further with the data, pointing out the limitations of the current model in terms of tool invocation. The success rate of 50% is certainly a worrying figure, especially in a production environment where efficiency and reliability are sought. At the same time, Nuno also raised the bar in terms of user expectations. Technological advances must not only improve capabilities, but also keep pace with growing user expectations. This actually sets a higher standard for MCP and all AI Agent technologies: not only should they work, but they should work well, and they should be able to continue to meet the growing needs of users.

 

The Long Tail Effect and the Zapier Analogy

Harrison remains confident that the model's capabilities will improve. He believes that whatever the current success rate of Agents, it will only continue to improve in the future. He emphasizes that the value of an MCP should not be evaluated by comparing it to a well-polished Agent. The real value of an MCP is its ability to enable a large number of long-tail connections and integrations.

Harrison compares MCP to Zapier, which connects applications such as email, Google Sheets, and Slack, letting users create a myriad of workflows without developing a sophisticated Agent for each one. With MCP, users can create their own version of Zapier, enabling all kinds of personalized integrations.

Harrison cleverly shifted the positioning of MCP from a "high-performance general-purpose Agent tooling platform" to a "connector for long-tail scenarios". The Zapier analogy is apt: the potential of MCP is not to replace existing Agent solutions, but to deliver value across a wider range of personalized, long-tail requirements. This shift in framing lowers the maturity bar for MCP and makes it easier to find applications in the short term. The long-tail theory has been repeatedly validated on the Internet, and if MCP can capture long-tail demand, it too may succeed.

 

Differences from the LangChain tool

Nuno pointed out that LangChain already has a library of 500 tools, but they are not used very often in production environments. These tools are all implemented according to the same protocol, compatible with any model, and are freely interchangeable. He questioned what the advantage of MCP was. Is it simply that MCP takes a "unique form" that requires the user to run a large number of servers on a local terminal and is only compatible with desktop applications? This was not an advantage in his view. He believes that Zapier may be the upper limit of MCP's potential.

The difference between the LangChain tools and MCP, according to Harrison, is that the LangChain tools are primarily geared toward Agent developers, whereas MCP is primarily geared toward users who cannot develop Agents themselves. The goal of MCP is to give users a way to add tools to Agents over which they have no control, and it enables non-developers to add tools to the Agents they use, whereas the LangChain tools are more focused on developers. Non-developers far outnumber developers, and that is the potential market for MCP.

Harrison also recognizes the shortcomings of the MCP in its current form. But he believes that MCP will continue to improve. He envisions a future where MCP applications can be installed with a single click, do not need to run a server on the local terminal, and can be accessed through web applications. That's where MCP is headed.

Nuno questioned the need for MCP from the perspective of LangChain's own tooling ecosystem. His question is straightforward: if LangChain already provides a large number of tools that are underutilized, how can MCP solve this problem? Harrison responded by differentiating the user base, arguing that MCP targets a different set of users than LangChain's tools. This strategy of differentiating between user groups helps to more accurately target the MCP market and avoid direct competition with the existing ecosystem of tools. The group of "non-developers" is indeed very large, and if MCP can effectively serve this group of users, the market potential is still considerable.

 

The Future of MCP: Analogies to Custom GPTs and Plugins

Nuno summarizes Harrison's point that MCP needs to become more like OpenAI's Custom GPTs to justify the current hype. However, Custom GPTs are not as popular as they could be. He asked rhetorically, what are Custom GPTs missing that MCP has?

Harrison sees MCP as more like the Plugins that OpenAI used to put out but ultimately failed. He admits that his experience with Plugins is fuzzy, but he thinks:

  • The MCP ecosystem is already much larger than the Plugins ecosystem.
  • The capacity of the model has been significantly improved to better utilize these tools.

Nuno is skeptical about the size of the MCP ecosystem. He found only 893 MCP servers in a random directory. He thinks Harrison may be judging the size of the ecosystem simply by the number of tweets on the Twitter timeline that mention MCP.

Nuno believes that if MCP is to move beyond being a footnote in the history of AI development, the following improvements must be made:

  • Reducing complexity: Why does the tooling protocol need to handle both prompts and LLM completion?
  • Simplifying implementation: Why does a protocol for serving tools need bidirectional communication? Nuno believes that receiving server logs is not a sufficient reason.
  • Support for server deployment: Stateless protocols are key, and best practices for online scaling should not be forgotten just because you are building an LLM application. Once server deployments are supported, other issues such as authentication come into play.
  • Making up for quality losses: Inserting randomized tools into Agents that know nothing about them will inevitably result in a loss of quality, and ways need to be found to compensate.

Harrison acknowledged that Nuno's skepticism had some merit, and threw the question back to the Twitter community, launching a poll asking whether people thought MCP was a flash in the pan or the standard of the future.

In summary, Model Context Protocol (MCP) is an emerging technology that attempts to break new ground in the scalability of agent tools. Although MCP still faces many challenges, its potential value and future direction are still worth paying attention to.

 

Point of View

Model Context Protocol (MCP) is unlikely to be the standard of the future. Personally, I am pessimistic about the future of MCP.

The problem that MCP is trying to solve makes sense, but it may not be very effective in practice. Harrison Chase's idea that MCP will help users extend Agent tools is well intentioned, but users may not want it. Users may prefer to just use a well-developed product rather than adding tools themselves.

Nuno Campos has a point. He pointed out that tools and agents need to work well together to be effective. The MCP protocol may not take this into account enough, and simply connecting tools may not be enough to make effective use of agents. Current big models still have limitations in terms of tool invocation, and it is too optimistic to expect MCP to build an efficient tool platform.

MCP is also complicated to implement. Local running servers and desktop-only applications are not good for user experience. AI applications tend to be cloud-based and lightweight, and if MCP is not improved, it will be difficult for users to accept it.

The lack of success of OpenAI's Plugins and Custom GPTs has shown that it is not an easy path for tools to expand their platforms. MCP is trying to surpass it, but I'm afraid it won't be able to do so, and will be forgotten as quickly as Plugins.

Therefore, MCP may only be a short-lived phenomenon in the development of AI, and is unlikely to become mainstream in the future. Although it has experimental value, Harrison Chase's goal is difficult to realize. In contrast, it may be more practical and effective to enhance the capability of the big model itself, or to build more vertical agent applications.

All in all, MCP is unlikely to be a success, and is probably just a hype. I am very skeptical about the future of MCP. Exploring MCP is useful, but its ultimate success is unlikely.

Source: Chief AI Sharing Circle, "LangChain Team on MCP: A New Direction for AI Agent Tool Extension?" May not be reproduced without permission.
