Integration Guide
MCP Server + ToolCenter
Give your AI agent 15 web tools in one install. Search, scrape, screenshot, audit, diff: all through a single Model Context Protocol server. Works with Claude Desktop, Cursor, Claude Code, and any other MCP client.
What is MCP?
The Model Context Protocol (MCP) is an open standard that lets AI agents securely access tools and data sources through a unified interface. Instead of wiring five separate APIs every time you build an agent, you install one MCP server and get instant, typed access to its tools.
ToolCenter's MCP server exposes 15 curated web tools, the ones agents actually need, with outputs optimized for LLM consumption. A 2 MB HTML page becomes ~4 KB of clean Markdown. Your agent spends tokens on reasoning, not parsing.
Why ToolCenter MCP?
One Install, 15 Tools
No more stitching together 5 APIs. Search, scrape, screenshot, SEO audit, DNS, SSL: all behind a single config block.
LLM-Optimized Outputs
Every tool pipes through an adapter: Mozilla Readability + Turndown for clean Markdown, grouped-by-severity reports for audits. 500× fewer tokens than raw HTML.
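ToolCenter's actual adapter uses Readability and Turndown; as a rough stdlib-only illustration of why stripping markup saves tokens (not the production pipeline), here is a minimal HTML-to-text pass in Python:

```python
from html.parser import HTMLParser

class TextExtractor(HTMLParser):
    """Collect visible text, skipping <script> and <style> contents."""
    SKIP = {"script", "style"}

    def __init__(self):
        super().__init__()
        self.parts = []
        self._skip_depth = 0

    def handle_starttag(self, tag, attrs):
        if tag in self.SKIP:
            self._skip_depth += 1

    def handle_endtag(self, tag):
        if tag in self.SKIP and self._skip_depth:
            self._skip_depth -= 1

    def handle_data(self, data):
        # Keep only non-whitespace text outside script/style blocks.
        if not self._skip_depth and data.strip():
            self.parts.append(data.strip())

def html_to_text(html: str) -> str:
    parser = TextExtractor()
    parser.feed(html)
    return "\n".join(parser.parts)

page = ("<html><head><style>body{color:red}</style></head>"
        "<body><h1>Pricing</h1><script>track()</script><p>$10/mo</p></body></html>")
text = html_to_text(page)
print(text)  # visible text only: the markup, scripts, and styles are gone
```

On real pages, most of the byte count is markup, scripts, and styles, which is where the token savings come from; the real adapter additionally isolates the main article content and emits structured Markdown.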
Open Source, Self-Hostable
MIT licensed on GitHub. Point at our managed API or your own ToolCenter instance via TOOLCENTER_BASE_URL.
Install in 60 seconds
Pick your MCP client below.
Claude Desktop
Edit ~/Library/Application Support/Claude/claude_desktop_config.json (macOS) or %APPDATA%\Claude\claude_desktop_config.json (Windows):
{
  "mcpServers": {
    "toolcenter": {
      "command": "npx",
      "args": ["-y", "toolcenter-mcp"],
      "env": {
        "TOOLCENTER_API_KEY": "tc_..."
      }
    }
  }
}
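If you manage this file from a script or dotfiles repo, the same block can be generated and round-tripped with a few lines of Python (the API key is a placeholder, exactly as above):

```python
import json

# Build the Claude Desktop MCP entry for ToolCenter.
config = {
    "mcpServers": {
        "toolcenter": {
            "command": "npx",
            "args": ["-y", "toolcenter-mcp"],
            "env": {"TOOLCENTER_API_KEY": "tc_..."},  # placeholder key
        }
    }
}

# Serialize with indentation, as the config file is stored on disk.
print(json.dumps(config, indent=2))
```

Merging into an existing `claude_desktop_config.json` (rather than overwriting it) is left to you; only the `mcpServers` object shown here is required for ToolCenter.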
Cursor
Settings → MCP → Add server:
{
  "toolcenter": {
    "command": "npx",
    "args": ["-y", "toolcenter-mcp"],
    "env": {
      "TOOLCENTER_API_KEY": "tc_..."
    }
  }
}
Claude Code (CLI)
One-liner that adds the server to your user config:
claude mcp add toolcenter \
-e TOOLCENTER_API_KEY=tc_... \
-- npx -y toolcenter-mcp
Other MCP clients
Any stdio-speaking MCP client works; pass the env var and command:
TOOLCENTER_API_KEY=tc_... \
npx -y toolcenter-mcp
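Under the hood, every MCP client does the same thing: launch that command and exchange JSON-RPC 2.0 messages with it over stdin/stdout. As a sketch, the first message a client writes is the initialize handshake; the protocol version string below is one published spec revision, and your client may send a later one:

```python
import json

def jsonrpc(msg_id, method, params):
    """Frame an MCP message as a JSON-RPC 2.0 request."""
    return {"jsonrpc": "2.0", "id": msg_id, "method": method, "params": params}

# First message on the wire: the initialize handshake.
init = jsonrpc(1, "initialize", {
    "protocolVersion": "2024-11-05",  # spec revision date; newer clients may differ
    "capabilities": {},
    "clientInfo": {"name": "my-client", "version": "0.1.0"},  # illustrative names
})

# Messages are newline-delimited JSON written to the server's stdin.
line = json.dumps(init) + "\n"
print(line, end="")
```

After the server responds and the client sends the `notifications/initialized` notification, a `tools/list` request returns the 15 tool schemas.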
The 15 tools
web_search
Search the web (aggregated engines, time filters)
scrape_url
Fetch a page as clean Markdown (Readability + Turndown)
screenshot
Capture any URL: full page, device emulation
get_metadata
Title, description, Open Graph, Twitter Card
url_to_pdf
Render URL to PDF: size, orientation, margins
website_diff
Compare two URLs: added/removed/modified
analyze_seo
Scored SEO report: issues grouped by severity
analyze_accessibility
WCAG audit: violations by impact
check_broken_links
Scan a page for 4xx/5xx/timeout links
detect_tech_stack
Wappalyzer-style framework fingerprinting
preview_link
Slack-style unfurl card for a URL
dns_lookup
A, AAAA, MX, NS, TXT, CNAME, SOA, SRV, CAA
whois_lookup
Registrar, creation/expiry, ownership
check_ssl
TLS cert: issuer, validity, days to expiry
check_status
HTTP ping: status, response time, redirects
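Each tool above is invoked through MCP's standard `tools/call` request, no per-tool SDK needed. A sketch of what a `scrape_url` call looks like on the wire; the argument name `url` is an assumption for illustration, so check the schema returned by `tools/list` for the real parameter names:

```python
import json

# tools/list enumerates the available tools; tools/call invokes one by name.
call = {
    "jsonrpc": "2.0",
    "id": 2,
    "method": "tools/call",
    "params": {
        "name": "scrape_url",
        # Argument names are illustrative; the tool's inputSchema is authoritative.
        "arguments": {"url": "https://example.com/pricing"},
    },
}
print(json.dumps(call))
```

The response carries the tool output (here, the page as Markdown) in a `content` array, ready for the model to read directly.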
Build a competitor research agent in 5 minutes
Ask Claude (or any MCP-enabled client):
"Research Linear's top 3 competitors. For each, give me their pricing page content, tech stack, SEO score, and a screenshot of their homepage. Summarize in a table."
The agent automatically chains these MCP tools:
1. web_search → find the competitors
2. scrape_url ×3 → read each pricing page as Markdown
3. detect_tech_stack ×3 → fingerprint their stack
4. analyze_seo ×3 → score each site
5. screenshot ×3 → capture their homepages
No glue code. No SDK setup. No 5 separate API keys.
Start building smarter agents
Free tier includes 100 calls/month. Upgrade anytime.