
MCP server: The connecting fabric between AI, data and tools

by Patrick Kraus-Füreder

How the Model Context Protocol securely connects AI systems with files, APIs and technical systems

AI systems are increasingly becoming active players: they not only read data, but also trigger workflows, process tickets, analyze measured values and interact directly with technical systems. For such applications to function securely, traceably and across systems, a standard is needed that reliably connects models with files, APIs, repositories or edge devices. This is precisely the role of the Model Context Protocol (MCP) – an open protocol that cleanly decouples AI clients and enterprise systems and connects them via clearly defined tools, resources and policies. The following article shows why MCP is becoming relevant for modern industrial, engineering and IT environments, how MCP servers work and which architecture and security principles are crucial in practice.

Contents

  • Why MCP is relevant now
  • What MCP is – and what an MCP server does
  • Architecture – from the client host to the server topology
  • Security, governance & quality – more than “just a tool call”
  • Industrial Grade: Edge, latency & data sovereignty
  • Interoperability & ecosystem
  • Typical fields of application (industry)
  • Performance and operational aspects
  • From the idea to the system: process model
  • Does MCP make sense for us?
  • Sources (selection)
  • Author

In short, modern AI applications need more than just a strong model. They need reliable, secure and standardized access to data sources, tools and workflows – on-prem, in the cloud and increasingly at the edge. This is precisely where the Model Context Protocol (MCP) comes in: an open standard through which AI clients (such as Claude, Copilot environments or your own agents) interact with external systems in a controlled manner. MCP servers are the counterparts on the system side: they expose data, tools and resources through clearly defined, auditable interfaces. You can think of MCP as a “USB-C for AI apps” – standardized connector, many devices.

Why MCP is relevant now

Three developments have overlapped in the last 18-24 months that now make MCP strategically relevant. Firstly, the official specification and stable reference stacks provide a consolidated basis for the first time. Secondly, companies are increasingly demanding AI assistants that can not only process texts, but also carry out specific actions – such as creating tickets, changing repositories or automatically filing reports. Thirdly, edge deployments have reached a level of maturity that offers stable latencies and powerful hardware directly on the machine or line. This makes MCP a realistic option, both technically and economically, and not just an interesting demo. At the same time, the first ecosystem components are professionalizing operations: publicly accessible documentation and specifications, official example and product servers such as the GitHub MCP Server, and the first registries for discovering compatible servers reduce integration effort and make tool selection more dependable.

What MCP is – and what an MCP server does

MCP standardizes the connection between a client – i.e. an AI application or an agent – and a server that acts as an adapter to the existing system. An MCP server bundles executable functions (tools) with defined interfaces, provides structured read access to internal data sources (resources) and can also offer curated prompt templates (prompts) to keep interactions consistent. The technical basis is a clearly specified protocol at JSON-RPC level, supplemented by schemas and transport channels for local execution via stdio and for remote connection via HTTP, WebSocket or SSE. For companies, this means a reproducible interaction framework, clean traceability of results and a clear location where access and security rules can be enforced.
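To make this concrete, here is a minimal sketch of such an adapter-style server using the official Python SDK (the mcp package); the server name, the ticket tool and the resource URI are illustrative assumptions, not prescribed by the protocol.

```python
# Minimal adapter-style MCP server using the official Python SDK ("mcp" package).
# Server name, tool and resource are illustrative assumptions.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("ticket-adapter")  # hypothetical adapter for a ticket system

@mcp.tool()
def get_ticket_status(ticket_id: str) -> str:
    """Return the current status of a ticket (placeholder for a real backend call)."""
    # A real adapter would call the ticket system's API here and map the response.
    return f"Ticket {ticket_id}: open"

@mcp.resource("tickets://{ticket_id}")
def ticket_resource(ticket_id: str) -> str:
    """Expose the ticket record as a readable MCP resource."""
    return f"Ticket {ticket_id}: status=open, assignee=unassigned"

if __name__ == "__main__":
    mcp.run()  # stdio transport by default; suitable for desktop clients
```

The tool carries a typed signature and a docstring, which the client sees as part of the tool definition; this is what makes calls reproducible and auditable.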

Architecture – from the client host to the server topology

A typical MCP setup consists of an AI-enabled application – such as an IDE plugin, a chat frontend or a desktop app – that acts as the MCP client and triggers all tool calls. Examples of this are Claude Desktop or KOALA. Behind this are one or more MCP servers, each of which encapsulates a functional domain, such as GitHub, a file repository or an internal MES, and provides clearly named tools and resources there. Communication takes place either locally via stdio, which is particularly suitable for desktop and developer workflows, or network-based via HTTP, WebSocket or SSE if central or edge servers are connected. Clear schema and version management ensures that tool signatures, parameter types and error returns remain stable and can be evolved without disruption. On this basis, three architectural patterns for MCP infrastructures have proven themselves in practice; they closely follow common microservice and API design principles.

  • Adapter server (adapter pattern / ports-and-adapters architecture): connects an existing system or API to the MCP protocol; reduces integration effort and separates logic from interface. (Sources: Medium, Wikipedia)
  • Proxy / aggregator (API gateway and aggregator pattern): combines several backends or servers under a common interface; enables request routing and data aggregation. (Sources: microservices.io, Medium)
  • Domain server (domain service and bounded context principle, DDD): organizes tools by business domain instead of source system; promotes semantic coherence and reuse. (Source: Wikipedia)
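To complement the architecture picture, the following is a small client-side sketch, based on the official Python SDK, that connects to a local adapter server over stdio; the script path and tool name are assumptions carried over from the server sketch above.

```python
# Client-side sketch based on the official Python SDK; the server script path
# ("ticket_adapter.py") and the tool name are assumptions from the server sketch.
import asyncio

from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

async def main() -> None:
    server = StdioServerParameters(command="python", args=["ticket_adapter.py"])
    async with stdio_client(server) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()            # protocol handshake
            tools = await session.list_tools()    # discover tool signatures
            print([tool.name for tool in tools.tools])
            result = await session.call_tool("get_ticket_status",
                                             arguments={"ticket_id": "INC-1234"})
            print(result)

asyncio.run(main())
```

The same session pattern applies to network transports; only the transport setup changes, not the tool discovery and call flow.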

Security, governance & quality – more than “just a tool call”

With MCP, the operational reach of an agent grows, which significantly increases the requirements for security, governance and quality. Precise control of authentication and authorization is key, so that only the functions that are actually needed in a given environment are enabled, and so that they cover exactly the permitted actions. Consistent validation of all inputs and outputs is just as important: strict schemas, clean parameterization and fail-closed behaviour prevent agents from exploiting or propagating unclear states or misinterpretations. Productive systems also need complete auditability with seamless logs, clear audit trails and alerts in the event of policy violations. In addition, robust testing and hardening procedures must be established – including negative tests against prompt-induced incorrect actions, possible manipulation of tool definitions or the misuse of generic functions. Initial security analyses show that MCP introduces its own attack surface; a security-by-design approach is therefore essential.
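What fail-closed behaviour can look like in practice is sketched below as plain Python, independent of any specific MCP SDK; the allowlist, the length bound and the tool name are purely illustrative.

```python
# Generic fail-closed guard around tool execution; not part of the MCP spec.
# Allowlist, length bound and tool registry are illustrative assumptions.
from typing import Any, Callable

ALLOWED_TOOLS = {"get_ticket_status"}   # explicit allowlist per deployment
MAX_ARG_LENGTH = 256                    # crude input bound for the example

def guarded_call(name: str, args: dict[str, Any],
                 registry: dict[str, Callable[..., Any]]) -> Any:
    """Execute a tool only if it is allowlisted and its arguments pass validation."""
    if name not in ALLOWED_TOOLS or name not in registry:
        raise PermissionError(f"Tool '{name}' is not permitted in this environment")
    for key, value in args.items():
        if not isinstance(value, str) or len(value) > MAX_ARG_LENGTH:
            raise ValueError(f"Argument '{key}' failed validation")  # fail closed
    # An audit log entry would be written here, before and after execution.
    return registry[name](**args)
```

The point is the default: anything not explicitly permitted and validated is rejected, and every decision leaves a trace in the audit log.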

Industrial Grade: Edge, latency & data sovereignty

Many industrial workflows require low latency and high availability – ideal for edge MCP servers close to the machine (e.g. on the line), combined with centralized services for registry, observability and policy. The advantage: data does not leave the zone, decisions remain fast and integration into OT/IT networks remains controllable. Recent integration paths in common AI clients also make the remote-server model more practical where an edge deployment is not required.

Interoperability & ecosystem

The official MCP portal and the GitHub organization modelcontextprotocol bundle the specification, SDKs and documentation of the Model Context Protocol. Growing lists of example and product servers facilitate selection and integration. A prominent reference example is the GitHub MCP Server, which shows how infrastructure platforms can implement native MCP bridges. In parallel, community initiatives such as “Awesome MCP” lists and public registries are emerging that structure discovery, compatibility and update management. This shortens proof-of-concept phases and reduces vendor lock-in.

Typical fields of application (industry)

In the industrial sector, MCP is particularly strong where structured, auditable and repeatable interactions between AI agents and existing systems are required. A large block concerns knowledge access: technical manuals, test plans or change logs can be connected via standardized resource and search tools; as soon as more specialized requirements arise, a classic RAG system is usually no longer sufficient, and a dedicated MCP tool becomes the logical next step. In quality assurance and QA pipelines, agents can retrieve measurement data, start image or signal analyses, store versioned results and escalate automatically in the event of deviations. In engineering, MCP servers enable controlled access to repositories, the creation of issues and PRs or the checking of build artefacts – including role and rights protection directly in the interaction path. Operations and IT service management also benefit: runbook steps, ticket updates, CMDB queries or telemetry lookups can be executed and documented uniformly instead of relying on heterogeneous integration scripts. Many of these patterns already appear in documentation, product announcements and sample repositories, which reduces the implementation risk and provides clear templates for your own domains.
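As an illustration of the QA pattern, the sketch below exposes a simple measurement check as an MCP tool via the official Python SDK; the tolerance value, the parameters and the escalation comment are assumptions, not taken from a real line.

```python
# Illustrative QA tool on an MCP server (official Python SDK); the tolerance
# and the escalation step are assumptions for the sketch.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("qa-pipeline")

TOLERANCE_MM = 0.05  # assumed tolerance for this example

@mcp.tool()
def check_measurement(part_id: str, measured_mm: float, nominal_mm: float) -> str:
    """Compare a measured value against its nominal value and flag deviations."""
    deviation = abs(measured_mm - nominal_mm)
    if deviation > TOLERANCE_MM:
        # In production this would store a versioned result and escalate,
        # e.g. by calling a ticket tool on another MCP server.
        return f"Part {part_id}: deviation {deviation:.3f} mm exceeds tolerance, escalate"
    return f"Part {part_id}: within tolerance ({deviation:.3f} mm)"

if __name__ == "__main__":
    mcp.run()
```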

Performance and operational aspects

The performance of an MCP system is essentially determined by three factors: pre- and post-processing at CPU level, the latency of the connected backends and networks, and – if models are executed within the server or complex transformations are carried out – the local GPU or NPU load. For productive environments, it has been shown that a consistently asynchronous architecture with clearly defined timeouts and circuit breakers per tool is crucial. Equally important is an observability strategy that records latencies, error rates and tool usage patterns as standard and correlates them with user or session data using structured logs. Stability over longer operating phases is achieved through clean schema versioning and compatible migration paths, so that client updates do not unexpectedly block running processes. In addition, resilience mechanisms such as fallback variants of a tool – for example read-only instead of write – or dry-run modes for sensitive operations increase operational security. The official documentation and technical contributions from platforms such as GitHub or Anthropic already provide a reliable basis for this.
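A per-tool timeout with a simple circuit breaker can be sketched in a few lines of asyncio; this is generic Python rather than an MCP SDK feature, and the timeout, failure count and cooldown are illustrative values.

```python
# Per-tool timeout and minimal circuit breaker (generic asyncio, not an MCP
# SDK feature); timeout, failure count and cooldown are illustrative.
import asyncio
import time
from typing import Any, Awaitable, Callable

class ToolBreaker:
    """Fail fast when a tool keeps timing out or erroring."""

    def __init__(self, timeout_s: float = 5.0, max_failures: int = 3,
                 cooldown_s: float = 30.0) -> None:
        self.timeout_s = timeout_s
        self.max_failures = max_failures
        self.cooldown_s = cooldown_s
        self.failures = 0
        self.opened_at: float | None = None

    async def call(self, tool: Callable[..., Awaitable[Any]], **kwargs: Any) -> Any:
        # Refuse calls while the circuit is open and the cooldown has not elapsed.
        if self.opened_at is not None and time.monotonic() - self.opened_at < self.cooldown_s:
            raise RuntimeError("Circuit open: tool temporarily disabled")
        try:
            result = await asyncio.wait_for(tool(**kwargs), timeout=self.timeout_s)
        except Exception:
            self.failures += 1              # covers timeouts and backend errors
            if self.failures >= self.max_failures:
                self.opened_at = time.monotonic()
            raise
        self.failures = 0                   # a success closes the circuit again
        self.opened_at = None
        return result
```

Wrapping each tool in such a breaker keeps one slow backend from dragging down the whole agent session, and the failure counters feed naturally into the observability strategy described above.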

Sources (selection)

    Author

    DI Dr. Patrick Kraus-Füreder, BSc

    AI Product Manager
