Microsoft Unveils Agent Governance Toolkit to Secure AI Tool Calls in .NET
Breaking News: Microsoft Releases Agent Governance Toolkit for .NET
Microsoft has released the Agent Governance Toolkit (AGT) for .NET, a new library designed to enforce security policies on AI agent tool calls made through the Model Context Protocol (MCP). The toolkit, now available as an MIT-licensed NuGet package, provides a governance layer that inspects tool definitions, inputs, and outputs before they reach execution or an LLM.

AGT addresses a critical gap in MCP implementations: while the MCP specification recommends user confirmation and input validation, most SDKs do not enforce these behaviors by default. AGT fills that void by acting as a centralized enforcement point for policy checks, input inspection, and response validation across all agents built on .NET.
Key Components of the Toolkit
The toolkit includes four primary components, each targeting a specific aspect of tool governance:
- McpGateway — a governed pipeline that evaluates every tool call before execution.
- McpSecurityScanner — detects suspicious tool definitions, such as prompt injection or typo-squatted tool names, before exposing them to an LLM.
- McpResponseSanitizer — removes prompt-injection patterns, credentials, and exfiltration URLs from tool output.
- GovernanceKernel — wires all components together using YAML-based policy, audit events, and OpenTelemetry integration.
According to the AGT documentation, the package targets .NET 8.0+, depends solely on YamlDotNet, and requires no external services for basic operation.
Background
The Model Context Protocol (MCP) is an open standard that enables AI agents to connect to real-world tools—reading files, calling APIs, querying databases. However, the MCP specification notes that clients should prompt for user confirmation on sensitive operations, show tool inputs, and validate results. Most SDKs delegate this responsibility to the host application, leaving a security gap.
Prompt injection and tool typo-squatting are emerging threats. For example, an attacker could register an MCP server with a tool named read_flie (note the typo) whose description contains hidden instructions to exfiltrate data. AGT's security scanner can flag such threats with a risk score.

“The Agent Governance Toolkit gives developers a consistent place to apply policy checks, input inspection, and response validation across every agent they build,” said a Microsoft spokesperson in a statement. “It’s designed to be the enforcement point that MCP clients are missing.”
What This Means
For .NET developers building AI agents, AGT provides immediate security guardrails without requiring changes to existing MCP servers. The toolkit can be integrated into existing projects via a single NuGet command: dotnet add package Microsoft.AgentGovernance.
By making governance explicit and configurable via YAML, AGT allows teams to define policies that match their risk tolerance. The inclusion of OpenTelemetry support enables auditing and monitoring of all tool interactions, which is critical for compliance in regulated industries.
Early adopters report that the security scanner successfully detects common attack patterns. One developer noted, “We saw a tool definition with a typo and an embedded system prompt—the scanner caught it and gave us a risk score of 85/100. Without AGT, that tool would have gone straight to the LLM.”
The open-source release of AGT under an MIT license signals Microsoft’s commitment to transparent governance in the agent ecosystem. As AI agents become more autonomous, tools like AGT will likely become standard components of secure .NET agent architectures.
For full documentation and sample workflows, see the Agent Governance Toolkit GitHub repository.
Related Articles
- Meta Unveils AI-Driven Configuration Safety System to Prevent Rollout Failures at Scale
- From COM to Stack Overflow: The Unchanging Pace and Sudden Shifts in Programming
- 5 Essential Enhancements in the Python VS Code November 2025 Release
- NVIDIA Unveils Nemotron 3 Nano Omni: All-in-One AI Agent Model Slashes Costs, Boosts Speed by 9x
- Exploring Python 3.15.0 Alpha 4: New Features and Developer Insights
- Kubernetes 1.36’s Declarative Validation Goes GA: A New Era for API Reliability
- Meta Reveals Configuration Safety Blueprint to Prevent AI-Driven Deployment Disasters
- Scaling Multi-Agent AI Systems: Lessons from Intuit on Coordination and Reliability