How to Monitor SEO Performance as a Developer

Learn what SEO metrics to track, which tools to use, and how to automate monitoring. Google Search Console, Core Web Vitals, indexation status, and more.

January 22, 2026 · 6 min read

TL;DR

Track four metrics: indexed pages, search impressions/clicks, Core Web Vitals, and crawl errors. Set up Google Search Console, run Lighthouse in CI, and use npx indxel check --diff after each deploy to catch regressions automatically.

You built your site with proper metadata, structured data, and a sitemap. Now what? SEO is not a one-time setup — it's an ongoing system that needs monitoring. Pages get deindexed, performance degrades, competitors outrank you. Without monitoring, you're flying blind.

Here's how to set up SEO monitoring as a developer, with the tools and metrics that actually matter.

The metrics that matter

Ignore vanity metrics. Focus on these four:

  • Indexed pages — How many of your pages are in Google's index vs. how many should be
  • Search impressions and clicks — How often your pages appear in search and get clicked
  • Core Web Vitals — LCP, INP, CLS scores from real users
  • Crawl errors — 404s, redirect loops, server errors that block Googlebot

Everything else is secondary. If your pages are indexed, visible, fast, and crawlable, you've covered the technical foundation.

Google Search Console: your primary data source

Google Search Console (GSC) is the only tool that gives you actual Google data — not estimates, not third-party guesses. Set it up first.

Key reports to check regularly:

  • Performance — impressions, clicks, CTR, and average position per page and query
  • Coverage / Indexing — which pages are indexed, excluded, or errored
  • Core Web Vitals — field data from real Chrome users (CrUX)
  • Sitemaps — submission status and discovered URLs

Automate GSC data extraction with the Search Console API. Pull weekly reports into your monitoring stack instead of manually checking the dashboard.
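If your monitoring stack runs Python, the weekly pull can be scripted against the Search Analytics API. This is a sketch, not a drop-in script: `build_query` is my own helper, and the commented call assumes the `google-api-python-client` package plus OAuth credentials (`creds`) authorized for the Search Console scope.

```python
# Sketch: pull last week's Search Console performance data.
from datetime import date, timedelta

def build_query(days: int = 7) -> dict:
    """Request body for the Search Analytics API covering the last `days` days."""
    end = date.today()
    start = end - timedelta(days=days)
    return {
        "startDate": start.isoformat(),
        "endDate": end.isoformat(),
        "dimensions": ["page", "query"],
        "rowLimit": 1000,
    }

# With credentials in place, the weekly pull is one call:
# from googleapiclient.discovery import build
# service = build("searchconsole", "v1", credentials=creds)
# rows = service.searchanalytics().query(
#     siteUrl="https://yoursite.com/", body=build_query()
# ).execute().get("rows", [])
```

Schedule this weekly (cron, GitHub Actions) and write the rows into whatever store feeds your dashboard.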

Core Web Vitals monitoring

Core Web Vitals are a ranking signal. Monitor them in two ways:

  • Lab data (Lighthouse, PageSpeed Insights) — synthetic tests you run on demand. Good for catching regressions in CI.
  • Field data (CrUX, GSC) — real user measurements. This is what Google actually uses for rankings.
# Run Lighthouse in CI
npx lighthouse https://yoursite.com --output=json --chrome-flags="--headless"

# Or use PageSpeed Insights API
curl "https://www.googleapis.com/pagespeedonline/v5/runPagespeed?url=https://yoursite.com&strategy=mobile"

Track LCP, INP, and CLS over time. Set alerts when any metric crosses the "needs improvement" threshold.
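The "good" / "needs improvement" boundaries are published by Google (LCP 2.5s/4s, INP 200ms/500ms, CLS 0.1/0.25), so the alert logic is easy to encode. A minimal Python sketch; the function names are my own:

```python
# Classify Core Web Vitals against Google's published thresholds.
# Units match what CrUX reports: milliseconds for LCP and INP,
# a unitless score for CLS.
THRESHOLDS = {
    "LCP": (2500, 4000),   # ms: good <= 2500, poor > 4000
    "INP": (200, 500),     # ms
    "CLS": (0.1, 0.25),    # unitless
}

def rate(metric: str, value: float) -> str:
    good, poor = THRESHOLDS[metric]
    if value <= good:
        return "good"
    if value <= poor:
        return "needs improvement"
    return "poor"

def needs_alert(vitals: dict) -> list[str]:
    """Metrics that have left the 'good' range and should trigger an alert."""
    return [m for m, v in vitals.items() if rate(m, v) != "good"]
```

For example, `needs_alert({"LCP": 3100, "INP": 180, "CLS": 0.05})` returns `["LCP"]` — only LCP has crossed into "needs improvement".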

Indexation monitoring

The most common silent SEO failure: pages dropping out of Google's index. This happens when:

  • A deploy accidentally adds noindex
  • Google decides a page is "duplicate" or "low quality"
  • Crawl budget is exhausted on large sites
  • Server errors during Googlebot visits

Monitor the ratio of indexed pages vs. total pages in your sitemap. A sudden drop means something broke.

$ npx indxel check
Indexation status:
  Submitted: 47 URLs
  Indexed:   44 URLs (93.6%)
  Pending:    2 URLs
  Excluded:   1 URL (noindex directive)

Indxel tracks indexation status automatically and alerts you when pages fall out of the index or new pages aren't getting picked up.
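If you'd rather roll this check yourself, the core logic is small. A Python sketch, with an illustrative 10% drop threshold — tune it to your site's size:

```python
# Flag a sudden indexation drop between two checks.
def indexation_ratio(indexed: int, submitted: int) -> float:
    """Fraction of submitted sitemap URLs that Google has indexed."""
    return indexed / submitted if submitted else 0.0

def dropped(previous: float, current: float, tolerance: float = 0.10) -> bool:
    """True when the indexed ratio fell by more than `tolerance` since last check."""
    return (previous - current) > tolerance
```

With the numbers above (44 of 47 indexed, ratio ≈ 0.936), a later check showing 36 of 47 (≈ 0.766) trips the alert, while normal fluctuation of a page or two does not.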

For a deeper dive into how indexation works, read our guide on how Google indexing works. For a comparison of monitoring tools, see SEO monitoring tools for developers.

Uptime and availability checks

Googlebot visits your site on its own schedule. If your server is down or slow when it visits, your rankings suffer. Monitor:

  • Server response time (aim for under 200ms TTFB)
  • Uptime percentage (aim for 99.9%+)
  • SSL certificate validity
  • Redirect chain integrity (no loops, no excessive hops)

Use any uptime monitoring service (Uptime Robot, Better Stack, Checkly) and set alerts for downtime longer than 5 minutes.
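Whatever service runs the probes, the pass/fail logic against the targets above is simple to encode. A Python sketch — the function and its 3-hop redirect limit are my own illustration of the bullets, not any particular tool's rules:

```python
# Evaluate a single probe result against the availability targets.
# `hops` counts redirects followed before the final response.
def healthy(status: int, ttfb_ms: float, hops: int) -> list[str]:
    """Return the list of problems found (empty list means the probe passed)."""
    problems = []
    if status >= 400:
        problems.append(f"bad status {status}")
    if ttfb_ms > 200:
        problems.append(f"slow TTFB {ttfb_ms:.0f}ms")
    if hops > 3:
        problems.append(f"redirect chain too long ({hops} hops)")
    return problems
```

Feed it from whichever probe you already run; anything it returns is worth an alert, since a slow or erroring response during a Googlebot visit costs you directly.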

Automated regression detection

The most effective monitoring is automated comparison between deploys. After each deploy, check:

  • Did any page lose its title or meta description?
  • Did any og:image URL start returning 404?
  • Did the overall SEO score drop?
  • Were any new pages added without proper metadata?
# Compare current state vs. last check
$ npx indxel check --diff

SEO Diff (deploy abc123 vs def456):
REGRESSIONS (1):
- /pricing  og:image 200 -> 404

IMPROVEMENTS (2):
+ /blog/new-post  added meta description
+ /about          added JSON-LD

Score: 91 -> 93 (+2)

Run this in CI after every deploy. Pair it with Slack or Discord webhooks to get notified of regressions instantly.
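The webhook side needs nothing beyond the standard library. A sketch that posts a summary to a Slack incoming webhook, assuming the URL lives in a `SLACK_WEBHOOK_URL` CI secret; the message format here is illustrative:

```python
# Post a regression summary from CI to a Slack incoming webhook.
import json
import os
import urllib.request

def format_message(regressions: list[str], score_before: int, score_after: int) -> dict:
    """Build the Slack webhook payload: score movement plus one line per regression."""
    lines = [f"SEO diff: score {score_before} -> {score_after}"]
    lines += [f":warning: {r}" for r in regressions]
    return {"text": "\n".join(lines)}

def notify(payload: dict) -> None:
    """Send the payload to the webhook URL stored as a CI secret."""
    req = urllib.request.Request(
        os.environ["SLACK_WEBHOOK_URL"],
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )
    urllib.request.urlopen(req)
```

Parse your diff output, pass the regression lines to `format_message`, and call `notify` only when the regression list is non-empty.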

Building a monitoring dashboard

Consolidate your SEO metrics in one place. The key views you need:

  • Indexation ratio over time (indexed / total pages)
  • SEO score per deploy (trend line)
  • Core Web Vitals trend (LCP, INP, CLS)
  • Top pages by impressions and clicks
  • Recent regressions and fixes

Indxel's dashboard provides this out of the box — one screen with all your SEO metrics, updated after every deploy. No configuration needed beyond connecting your site.
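If you build the dashboard yourself instead, one snapshot row per deploy is enough to drive every view above. A Python sketch with illustrative field names, not a fixed schema:

```python
# One per-deploy snapshot row for a homegrown SEO dashboard.
from dataclasses import dataclass, asdict

@dataclass
class DeploySnapshot:
    deploy: str       # deploy identifier (e.g. commit SHA)
    indexed: int      # pages currently in Google's index
    submitted: int    # pages in the sitemap
    seo_score: int    # overall score for this deploy
    lcp_ms: float     # Core Web Vitals field data
    inp_ms: float
    cls: float

    @property
    def indexation_ratio(self) -> float:
        return self.indexed / self.submitted if self.submitted else 0.0
```

Append one row per deploy and each dashboard view (indexation ratio over time, score trend, vitals trend) is a simple query over the series.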

Monitoring checklist

Set up these five things and your SEO monitoring is covered:

  1. Google Search Console — verify ownership, submit sitemap
  2. Core Web Vitals tracking — field + lab data
  3. Indexation monitoring — track indexed vs. submitted pages
  4. npx indxel check --ci in your build pipeline
  5. npx indxel check --diff with deploy notifications

The entire setup takes 15 minutes. After that, SEO regressions get caught automatically, and you only need to check the dashboard when something flags.