Building SEO Tools with MCP: Model Context Protocol for Developers
How the Model Context Protocol enables AI-powered SEO tools. Architecture, tool design, and real-world examples from the Indxel MCP server.
You pushed a Next.js app redesign on Friday at 4:00 PM. Vercel built it in 45 seconds, the unit tests passed, and it went live. Monday morning, organic traffic dropped 40%. The culprit: 23 pages lost their `<meta name="description">` tags because a shared `SEOHead` component dropped a prop during the refactor. You aren't an SEO specialist, but you hold the pager when the marketing team looks at Google Search Console. Catching these regressions manually wastes engineering hours. Building custom AI scripts to check them is fragile. This is where the Model Context Protocol (MCP) changes how you build and interact with SEO tooling directly inside your editor.
What is the Model Context Protocol (MCP)?
MCP is an open standard that allows LLMs to interact with local data sources and external tools through a unified client-server architecture. It replaces hardcoded, model-specific API integrations with a universal JSON-RPC protocol, typically carried over standard input/output (stdio) for local servers.
Historically, giving an LLM access to external tools meant writing specific wrapper code for OpenAI's function calling, Anthropic's tool use, or Gemini's API. If you wanted Claude to audit a webpage, you had to build a custom middleware server, define the OpenAPI schemas, handle the HTTP requests, parse the tool calls, and manage the execution loop.
MCP standardizes this. AI clients (like Claude Desktop or Cursor) act as MCP Clients. They connect to local MCP Servers (like the Indxel CLI), which expose Resources, Prompts, and Tools. When Claude needs to audit a URL, it queries the MCP server, executes the local tool, and reads the standard JSON output.
MCP communicates over standard input/output (stdio) by default. This means the MCP server runs entirely locally on your machine, using your local file system, network access, and environment variables. No data is routed through third-party proxy servers.
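To make the transport concrete, here is a sketch of the JSON-RPC message an MCP client writes to the server's stdin to invoke a tool. The `tools/call` method and `params` shape follow the MCP specification; the tool name and arguments reference the Indxel server described later in this article:

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "seo-audit-url",
    "arguments": { "url": "https://example.com", "device": "mobile" }
  }
}
```

The server replies on stdout with a matching `id` and a `result` object containing the tool's output. Neither message ever leaves your machine.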
Why use MCP instead of custom API integrations?
MCP decouples tool logic from the LLM client, allowing you to write one local TypeScript function and instantly expose it to any compliant AI agent without writing model-specific wrapper code.
If you build a custom OpenAI wrapper, you maintain the API schema, handle the HTTP requests, and manage the execution loop. If you switch to Anthropic next month, you rewrite the wrapper. MCP eliminates this overhead. The LLM client handles the execution loop natively. You just provide the server definition.
| Feature | Custom API Integration | Model Context Protocol (MCP) |
|---|---|---|
| Client Support | Specific to one model (e.g., OpenAI only) | Universal (Claude Desktop, Cursor, Zed, Windsurf) |
| Integration Effort | Requires custom middleware and execution loops | Zero middleware. Handled natively by the AI client |
| Execution Context | Runs on remote servers via HTTP | Runs locally via standard input/output (stdio) |
| Schema Definition | Proprietary JSON formats per vendor | Standardized JSON Schema mapping |
| Security | Requires exposing internal APIs to the internet | Kept entirely local behind your firewall |
How does the Indxel MCP server expose SEO tools?
The Indxel MCP server maps our core Node.js SDK functions to 11 discrete tools, allowing AI clients to run audits, crawl sites, and validate sitemaps directly from your file system.
When you connect the Indxel MCP server, your AI client immediately understands how to execute these specific operations. We intentionally designed these tools to return structured data rather than human-readable text. The LLM handles the interpretation; the tool strictly handles the deterministic validation.
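To make "structured data rather than human-readable text" concrete, here is a minimal sketch of such a result shape. The interfaces and field names are modeled on the audit payload shown later in this article, not the actual `@indxel/core` type definitions:

```typescript
// Hypothetical result types -- modeled on the seo-audit-url output,
// not the actual @indxel/core definitions.
interface AuditError {
  ruleId: string;                              // machine-readable rule identifier
  message: string;                             // human-readable detail
  severity: "critical" | "warning" | "info";
}

interface AuditResult {
  score: number;
  criticalErrors: AuditError[];
  passedChecks: number;
}

// A deterministic consumer (or the LLM) filters by explicit keys
// instead of parsing prose.
function criticalRuleIds(result: AuditResult): string[] {
  return result.criticalErrors.map((e) => e.ruleId);
}

const sample: AuditResult = {
  score: 72,
  criticalErrors: [
    {
      ruleId: "meta-description-missing",
      message: "Missing meta description.",
      severity: "critical",
    },
  ],
  passedChecks: 13,
};

console.log(criticalRuleIds(sample)); // logs the single failing rule id
```

Because the severity and rule id are explicit keys, no consumer ever has to guess which part of a sentence names the failing rule.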
The 11 tools exposed by the Indxel server include:
- `seo-audit-url`: Analyzes a single URL against 15 technical SEO rules.
- `seo-crawl-site`: Spiders a domain and returns HTTP status codes and metadata for up to 1,000 pages.
- `seo-check-sitemap`: Validates `sitemap.xml` against the actual live routes.
- `seo-validate-jsonld`: Parses and lints structured data against schema.org specifications.
- `seo-diff-routes`: Compares staging vs. production SEO metadata.
- `seo-extract-links`: Pulls all internal and external `href` attributes from a given DOM.
- `seo-check-robots`: Evaluates `robots.txt` directives against a specific URL path.
- `seo-measure-cls`: Calculates Cumulative Layout Shift for a rendered URL.
- `seo-measure-lcp`: Calculates Largest Contentful Paint.
- `seo-check-canonicals`: Maps canonical chains and flags circular references.
- `seo-audit-headings`: Extracts the `H1`-`H6` hierarchy and flags accessibility violations.
Implementing the `seo-audit-url` tool
Let's look at the underlying implementation of the `seo-audit-url` tool. The tool requires a `url` and an optional `device` type. We define the input with Zod for runtime validation and mirror it as JSON Schema in the tool listing so the LLM knows exactly which arguments to send.
```typescript
import { Server } from "@modelcontextprotocol/sdk/server/index.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import {
  CallToolRequestSchema,
  ListToolsRequestSchema,
} from "@modelcontextprotocol/sdk/types.js";
import { z } from "zod";
import { IndxelCore } from "@indxel/core";

const server = new Server(
  { name: "indxel-seo-server", version: "1.0.0" },
  { capabilities: { tools: {} } }
);

const AuditUrlSchema = z.object({
  url: z.string().url().describe("The full HTTP/HTTPS URL to audit"),
  device: z
    .enum(["mobile", "desktop"])
    .default("mobile")
    .describe("Viewport configuration for the audit"),
});

// 1. Expose the tool schema to the AI client
server.setRequestHandler(ListToolsRequestSchema, async () => {
  return {
    tools: [
      {
        name: "seo-audit-url",
        description: "Run 15 critical technical SEO checks against a specific URL",
        inputSchema: {
          type: "object",
          properties: {
            url: { type: "string" },
            device: { type: "string", enum: ["mobile", "desktop"] },
          },
          required: ["url"],
        },
      },
    ],
  };
});

// 2. Handle the execution request
server.setRequestHandler(CallToolRequestSchema, async (request) => {
  if (request.params.name === "seo-audit-url") {
    const args = AuditUrlSchema.parse(request.params.arguments);
    const indxel = new IndxelCore();

    // Execute the deterministic SDK method
    const report = await indxel.audit(args.url, { device: args.device });

    // Return structured JSON to the LLM
    return {
      content: [
        {
          type: "text",
          text: JSON.stringify(
            {
              score: report.score,
              criticalErrors: report.errors.filter((e) => e.severity === "critical"),
              passedChecks: report.passed.length,
            },
            null,
            2
          ),
        },
      ],
    };
  }
  throw new Error(`Tool not found: ${request.params.name}`);
});

const transport = new StdioServerTransport();
await server.connect(transport);
```

Notice the architecture. The MCP server does not prompt the LLM or format text for a human. It executes `indxel.audit()`, which runs 15 rules covering title length (50-60 characters), description presence, `og:image` HTTP status, canonical URL resolution, and JSON-LD validity. It returns raw JSON. The LLM client decides what to do with that JSON.
How do you configure MCP for Claude Desktop and Cursor?
You connect the Indxel MCP server to your AI clients by adding the `npx indxel mcp` command to their respective configuration JSON files.
The configuration tells the client which executable to run to spin up the standard input/output connection. Since Indxel is an npm package, you execute it via npx.
For Claude Desktop, modify the configuration file located at ~/Library/Application Support/Claude/claude_desktop_config.json (Mac) or %APPDATA%\Claude\claude_desktop_config.json (Windows):
```json
{
  "mcpServers": {
    "indxel": {
      "command": "npx",
      "args": ["-y", "@indxel/cli", "mcp"]
    }
  }
}
```

For Cursor, navigate to Settings > Features > MCP and add a new server.
- Type: `command`
- Name: `indxel`
- Command: `npx -y @indxel/cli mcp`
If your AI client fails to connect to the MCP server, run `npx @indxel/cli mcp --test` in your terminal. This bypasses the stdio transport and outputs debug logs directly to your console, allowing you to catch missing Node.js dependencies or permission errors.
What does an AI-powered SEO workflow look like in practice?
An AI-powered SEO workflow executes deterministic validation checks via MCP tools and uses the LLM solely to interpret the structured output and write the code to fix it.
Consider a standard Next.js workflow. You are working on a new dynamic route for user profiles: app/users/[id]/page.tsx. You prompt Cursor:
> "Audit the local dev server URL `http://localhost:3000/users/492` and fix any SEO regressions in my `page.tsx` file."
Cursor reads the prompt, recognizes it has the `seo-audit-url` tool via MCP, and executes it. The Indxel CLI runs its headless browser against your local dev server. The LLM receives this JSON payload back from the tool:
```json
{
  "score": 72,
  "criticalErrors": [
    {
      "ruleId": "meta-description-missing",
      "message": "Page is missing a <meta name=\"description\"> tag.",
      "severity": "critical"
    },
    {
      "ruleId": "canonical-mismatch",
      "message": "Canonical URL points to HTTP instead of HTTPS.",
      "severity": "critical"
    }
  ],
  "passedChecks": 13
}
```

Because Cursor has context on your open file (`app/users/[id]/page.tsx`), it reads the JSON array of errors and generates the precise Next.js Metadata API patch to resolve them:
```typescript
// Cursor generates this fix automatically based on the MCP output
export async function generateMetadata({ params }: Props): Promise<Metadata> {
  const user = await getUser(params.id);
  return {
    title: `${user.name} | User Profile`,
    description: `View the professional profile and portfolio of ${user.name}.`,
    alternates: {
      canonical: `https://yourdomain.com/users/${params.id}`, // Fixed HTTP to HTTPS
    },
  };
}
```

A typical Next.js app with 50 pages takes 3 seconds to validate via the Indxel CLI. Running this workflow through Cursor saves developers approximately 45 minutes of manual Chrome DevTools inspection per Pull Request. You catch the missing canonical tag before you commit, instead of waiting for Googlebot to penalize the duplicate content three weeks later.
Frequently Asked Questions
Does MCP require exposing my codebase to an external API?
No, MCP runs locally on your machine and communicates over standard input/output streams. The AI client sends the tool execution request to your local MCP server, executes the CLI command on your hardware, and only the specific tool output is returned to the LLM context window. Your source code is never uploaded to an Indxel server.
How does Indxel compare to Semrush for developer workflows?
Indxel is built strictly for CI/CD and local development, whereas Semrush is built for marketers analyzing search volumes and backlinks. Indxel runs directly in your terminal (`npx indxel check`), blocks bad builds in GitHub Actions, and integrates natively with your editor via MCP. Semrush requires a web dashboard and offers no local development tooling. For developers shipping code, Indxel is the better fit.
Can I run the MCP server in CI/CD pipelines?
Not directly. The MCP server is designed for interactive AI clients (like Cursor or Claude Desktop) talking to your machine. For CI/CD environments, use the standard Indxel CLI (`npx indxel check --ci`) to fail the build on SEO regressions. The CLI shares the exact same validation engine as the MCP server, but formats its output for standard CI/CD logs instead of JSON-RPC.
```yaml
# .github/workflows/seo-check.yml
name: SEO Guard
on: [pull_request]
jobs:
  validate:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: npm ci
      - run: npm run build
      - run: npx indxel check --ci --diff origin/main
```

Why does the tool return JSON instead of a formatted report?
LLMs process structured data far more accurately than unstructured text. By returning strict JSON arrays with explicit ruleId and severity keys, we guarantee the LLM understands exactly which 3 critical errors failed. If we returned a human-readable markdown report, the LLM would have to parse the string, increasing latency and the risk of hallucination.
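Stable `ruleId` keys also let downstream tooling react with a plain lookup instead of string parsing. A small sketch, where the hints table is hypothetical and reuses the rule ids from the earlier payload:

```typescript
// Hypothetical mapping from rule ids to fix hints -- illustrative only,
// not part of the Indxel API.
const fixHints: Record<string, string> = {
  "meta-description-missing": "Add a description via the Next.js Metadata API.",
  "canonical-mismatch": "Point the canonical URL at the HTTPS origin.",
};

// Deterministic lookup: no regexes over prose, no ambiguity.
function hintFor(ruleId: string): string {
  return fixHints[ruleId] ?? `No automated hint for ${ruleId}.`;
}

console.log(hintFor("canonical-mismatch"));
```

A markdown report would force every consumer, human or LLM, to re-derive this mapping from free text on every run.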
Install the Indxel CLI globally and wire it up to your Cursor editor to start auditing local routes.
```bash
npm install -g @indxel/cli
npx indxel mcp --init
```