Training dataset: [`brijeshvadi/mcp-tool-calling-benchmark`](https://huggingface.co/datasets/brijeshvadi/mcp-tool-calling-benchmark)
A fine-tuned text classification model that detects and categorizes MCP (Model Context Protocol) tool-calling errors in AI assistant responses.
This model classifies AI assistant tool-calling behavior as `CORRECT` or as one of five error categories identified during QA testing of Grok's MCP connector integrations:
| Label | Description | Training Samples |
|---|---|---|
| `CORRECT` | Tool invoked correctly with proper parameters | 2,847 |
| `TOOL_BYPASS` | Model answered from training data instead of invoking the tool | 1,203 |
| `FALSE_SUCCESS` | Model claimed success but the tool was never called | 892 |
| `HALLUCINATION` | Model fabricated tool response data | 756 |
| `BROKEN_CHAIN` | Multi-step workflow failed mid-chain | 441 |
| `STALE_DATA` | Tool called but returned outdated cached results | 312 |
The classifier is a fine-tune of `distilbert-base-uncased`. Basic usage with the `transformers` pipeline:

```python
from transformers import pipeline

classifier = pipeline("text-classification", model="brijeshvadi/mcp-error-classifier")

result = classifier("Grok responded with project details but never called the Supabase list_projects tool")
print(result)
# Output: [{'label': 'TOOL_BYPASS', 'score': 0.94}]
```
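In an automated QA harness, a natural pattern is to flag any response the classifier labels as something other than `CORRECT` with sufficient confidence. A minimal sketch of that gating logic, assuming invented transcript summaries and an arbitrary 0.80 score threshold (neither comes from the model card):

```python
from transformers import pipeline

classifier = pipeline("text-classification", model="brijeshvadi/mcp-error-classifier")

# Hypothetical transcript summaries collected from a test run.
transcripts = [
    "Grok called list_projects and returned the live project list",
    "Grok said the file was written, but no filesystem tool call appears in the trace",
]

THRESHOLD = 0.80  # illustrative cutoff; tune against a held-out set

# The pipeline accepts a batch and returns one top prediction per input.
for text, pred in zip(transcripts, classifier(transcripts)):
    if pred["label"] != "CORRECT" and pred["score"] >= THRESHOLD:
        print(f"FLAG [{pred['label']} @ {pred['score']:.2f}]: {text}")
```

Predictions below the threshold fall through unflagged, so in practice those would go to manual review rather than being treated as passes.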
If you use this classifier in your work, please cite:

```bibtex
@misc{mcp-error-classifier-2026,
  author    = {Brijesh Vadi},
  title     = {MCP Error Classifier: Detecting Tool-Calling Failures in AI Assistants},
  year      = {2026},
  publisher = {Hugging Face},
}
```