
Core Web Vitals and SEO: What Developers Actually Need to Know

The real impact of Core Web Vitals on SEO rankings. LCP, INP, CLS thresholds, measurement tools, and practical optimization patterns for React apps.

March 16, 2026 · 8 min

You spent a month migrating your marketing site to the Next.js App Router. You optimized every component. Lighthouse scores hit 98/100 across the board. You merge to main, deploy to production, wait two weeks, and check Google Search Console. Organic traffic is completely flat.

The reality: Core Web Vitals are a tiebreaker, not a primary ranking factor. Shipping a 1.2-second Largest Contentful Paint (LCP) won't save a page with a missing <title> tag or a broken canonical URL. Performance optimizations only translate to SEO rankings when your baseline metadata and content are structurally sound.

If you want to rank, you have to fix your performance bottlenecks while aggressively guarding your SEO metadata. Here is exactly how Core Web Vitals work, how to measure them in React applications, and the code you need to fix them.

Do Core Web Vitals actually impact SEO rankings?

Core Web Vitals act as a tiebreaker for SEO rankings: passing the performance thresholds will only lift your page above a competitor whose content relevance, metadata, and backlink profile are roughly equal to yours.

Google evaluates pages using hundreds of signals. Core Web Vitals (CWV) fall under the "Page Experience" signal. If you write a 2,000-word technical guide on React Server Components, and a competitor writes an equally comprehensive guide, Google uses CWV to decide who gets position #1 and who gets position #2.

However, if your guide lacks a proper <h1>, has no meta description, or uses client-side rendering that blocks Googlebot from indexing the content, a perfect 100/100 Lighthouse score will not get you indexed.

Google Search Console does not use your local Lighthouse scores. It uses the Chrome User Experience Report (CrUX), which aggregates real-world data from Chrome users over a rolling 28-day window. If you deploy a performance fix today, you will wait up to 4 weeks to see the pass/fail status update in Search Console.

What are the current Core Web Vitals thresholds?

Google evaluates three metrics for Core Web Vitals: Largest Contentful Paint (LCP) must be under 2.5 seconds, Interaction to Next Paint (INP) under 200 milliseconds, and Cumulative Layout Shift (CLS) under 0.1.

To pass the Core Web Vitals assessment in Google Search Console, 75% of your real page loads (across mobile and desktop independently) must hit the "Good" threshold for all three metrics.

| Metric | Good | Poor | What it actually measures |
| --- | --- | --- | --- |
| LCP | < 2.5s | > 4.0s | Time until the largest text block or image is fully painted on the screen. |
| INP | < 200ms | > 500ms | The worst interaction latency on the page, from user input (click, tap, keypress) to the next painted frame. |
| CLS | < 0.1 | > 0.25 | A score for how much visible elements shift without user interaction. |
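The assessment math is worth internalizing: Google looks at the 75th percentile of real page loads, not the average, so a slow tail of sessions can fail a page that feels fast on your machine. Here is a dependency-free sketch of that evaluation using the thresholds above (the helper names are illustrative, not any official API):

```typescript
type Rating = 'good' | 'needs-improvement' | 'poor'

// Thresholds from the table above (LCP/INP in milliseconds, CLS unitless)
const THRESHOLDS: Record<string, { good: number; poor: number }> = {
  LCP: { good: 2500, poor: 4000 },
  INP: { good: 200, poor: 500 },
  CLS: { good: 0.1, poor: 0.25 },
}

// 75th percentile: the value that 75% of observed page loads sit at or below
function p75(values: number[]): number {
  const sorted = [...values].sort((a, b) => a - b)
  return sorted[Math.ceil(0.75 * sorted.length) - 1]
}

// A metric passes only if its p75 lands in the "Good" band
function rate(metric: string, values: number[]): Rating {
  const { good, poor } = THRESHOLDS[metric]
  const value = p75(values)
  if (value <= good) return 'good'
  if (value > poor) return 'poor'
  return 'needs-improvement'
}
```

Note that `rate('LCP', [1000, 2000, 2400, 9000])` still returns `'good'`: up to a quarter of your loads can be terrible without failing the assessment, which is why averages in your own dashboards can disagree with Search Console.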

How do you measure Core Web Vitals locally and in production?

Measure lab data locally using the Lighthouse CLI, and capture production field data by wrapping your Next.js application with the useReportWebVitals hook to send real user metrics to your analytics backend.

Local testing is for debugging. Production testing is for SEO reality. To catch regressions before they merge, run Lighthouse in your CI pipeline.

# Install the Lighthouse CI CLI
npm install -g @lhci/cli
 
# Run a local check (put assertion thresholds in lighthouserc.js)
lhci autorun --collect.url="http://localhost:3000"
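Assertion budgets are easier to maintain in Lighthouse CI's config file than as inline flags. A minimal lighthouserc.js sketch (the 0.9 performance floor is an example threshold, not a recommendation):

```javascript
// lighthouserc.js
module.exports = {
  ci: {
    collect: {
      url: ['http://localhost:3000'],
      numberOfRuns: 3, // multiple runs smooth out lab-data noise
    },
    assert: {
      assertions: {
        // Warn (don't fail the build) if the performance score drops below 0.9
        'categories:performance': ['warn', { minScore: 0.9 }],
      },
    },
  },
}
```

Switch `'warn'` to `'error'` once your baseline is stable, so regressions actually block merges.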

For production, you need to collect field data. Next.js 14+ provides a built-in hook to extract this data. Create a client component to capture the metrics and pipe them to your logging infrastructure.

// components/web-vitals.tsx
'use client'
 
import { useReportWebVitals } from 'next/web-vitals'
 
export function WebVitals() {
  useReportWebVitals((metric) => {
    // Pipe to your analytics provider (e.g., Datadog, Vercel Analytics, or custom endpoint)
    const body = JSON.stringify({
      id: metric.id,
      name: metric.name,
      value: metric.value,
      rating: metric.rating, // 'good' | 'needs-improvement' | 'poor'
      path: window.location.pathname,
    })
 
    if (metric.rating === 'poor') {
      navigator.sendBeacon('/api/metrics/cwv', body)
    }
  })
 
  return null
}

Include <WebVitals /> in your root layout.tsx. This ensures you have real user data to correlate with the 28-day delayed data you see in Google Search Console.
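The beacon above needs a receiving endpoint. A minimal sketch of a Route Handler at the assumed /api/metrics/cwv path (the payload validation helper is illustrative, not a Next.js API):

```typescript
// app/api/metrics/cwv/route.ts
const METRIC_NAMES = new Set(['LCP', 'INP', 'CLS', 'FCP', 'TTFB'])
const RATINGS = new Set(['good', 'needs-improvement', 'poor'])

interface CwvPayload {
  id: string
  name: string
  value: number
  rating: string
  path: string
}

// Reject malformed beacons (bots, stale clients) before they reach your logs
export function isValidCwvPayload(body: unknown): body is CwvPayload {
  if (typeof body !== 'object' || body === null) return false
  const m = body as Record<string, unknown>
  return (
    typeof m.id === 'string' &&
    METRIC_NAMES.has(m.name as string) &&
    typeof m.value === 'number' &&
    RATINGS.has(m.rating as string) &&
    typeof m.path === 'string'
  )
}

export async function POST(req: Request): Promise<Response> {
  const body = await req.json().catch(() => null)
  if (!isValidCwvPayload(body)) return new Response(null, { status: 400 })

  // Swap console.log for your logging infrastructure (Datadog, a queue, etc.)
  console.log(`[cwv] ${body.name}=${body.value} (${body.rating}) on ${body.path}`)

  // sendBeacon ignores the response body, so an empty 204 is enough
  return new Response(null, { status: 204 })
}
```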

How do you optimize Largest Contentful Paint (LCP) in Next.js?

Optimize LCP by adding the priority prop to your next/image hero components and using next/font to preload critical typography, eliminating network waterfall delays.

The browser executes a strict sequence to render an image: parse HTML, request CSS, request JS, execute JS, discover the image URL, fetch the image, and decode it. If your hero image is rendered by a client-side React component, the browser doesn't even know the image exists until the JavaScript bundle executes. This guarantees a failed LCP score on 3G connections.

Fix this by forcing the browser to discover the asset immediately.

import Image from 'next/image'
 
export default function HeroSection() {
  return (
    <section>
      <h1>Enterprise SEO Infrastructure</h1>
      {/* 
        The priority prop injects a <link rel="preload"> tag in the document head 
        and adds fetchpriority="high" to the <img> tag. 
      */}
      <Image
        src="/assets/dashboard-preview.png"
        alt="Indxel dashboard showing 44/47 passing routes"
        width={1200}
        height={800}
        priority 
      />
    </section>
  )
}

Fonts cause the second most common LCP failure. If your LCP element is a text node (like an <h1>), the browser will not paint the text until the custom font file downloads. Use next/font to self-host fonts and preload them automatically.

// app/layout.tsx
import { Inter } from 'next/font/google'
 
// This automatically self-hosts the font and injects a preload tag
const inter = Inter({
  subsets: ['latin'],
  display: 'swap', // Fallback to system font immediately, swap when loaded
})
 
export default function RootLayout({ children }: { children: React.ReactNode }) {
  return (
    <html lang="en" className={inter.className}>
      <body>{children}</body>
    </html>
  )
}

Never use priority on images below the fold. Preloading off-screen images steals bandwidth from critical CSS and JS, which will actively worsen your LCP and First Contentful Paint (FCP) metrics.

How do you fix Interaction to Next Paint (INP) in React?

Fix INP by wrapping heavy state updates in React.startTransition or using useDeferredValue, which yields control back to the main thread so the browser can paint user interactions immediately.

Google replaced First Input Delay (FID) with INP in March 2024. FID only measured the time between a click and the browser beginning to process the event. INP measures the entire lifecycle: input delay, processing time, and presentation delay.

If a user types into a search input, and your React app synchronously filters an array of 5,000 items, the main thread locks up. The browser cannot paint the keystroke in the input box until the filtering completes. If that takes 300ms, your INP is 300ms. You fail.

Fix this by decoupling the visual update of the input from the expensive filtering operation.

'use client'
 
import { useState, useDeferredValue, useMemo } from 'react'
import { heavyFilter } from '@/lib/utils'
 
export function SearchDirectory({ items }) {
  const [query, setQuery] = useState('')
  
  // The deferred value lags behind the actual state during heavy renders.
  // React will yield to the main thread, allowing the input keystroke to paint instantly.
  const deferredQuery = useDeferredValue(query)
  
  // Only recompute when the deferred value changes
  const filteredItems = useMemo(() => {
    return heavyFilter(items, deferredQuery)
  }, [items, deferredQuery])
 
  return (
    <div>
      <input 
        type="text" 
        value={query} 
        onChange={(e) => setQuery(e.target.value)} 
        placeholder="Search 5,000 routes..."
      />
      {/* Show a stale state indicator if deferredQuery hasn't caught up */}
      <div style={{ opacity: query !== deferredQuery ? 0.5 : 1 }}>
        <List items={filteredItems} /> {/* List: your own results-list component */}
      </div>
    </div>
  )
}

By using useDeferredValue, React handles the setQuery update immediately (painting the character in the input box in < 16ms), and processes the heavyFilter as an interruptible background task. This reduces your INP from >300ms to <20ms.
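The same yield-to-the-main-thread principle applies outside React. A framework-free sketch that slices a long filter into chunks, handing control back to the event loop between slices so pending input events can be handled and painted (the chunk size is an arbitrary example):

```typescript
// Process a large array in slices, returning control to the event loop
// between slices so the browser can respond to user input mid-task.
async function filterInChunks<T>(
  items: T[],
  predicate: (item: T) => boolean,
  chunkSize = 500,
): Promise<T[]> {
  const result: T[] = []
  for (let i = 0; i < items.length; i += chunkSize) {
    for (const item of items.slice(i, i + chunkSize)) {
      if (predicate(item)) result.push(item)
    }
    // In browsers that ship scheduler.yield(), prefer it over setTimeout
    await new Promise((resolve) => setTimeout(resolve, 0))
  }
  return result
}
```

The trade-off is total latency: the filter finishes slightly later, but no single task blocks the main thread long enough to blow the 200ms INP budget.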

How do you prevent Cumulative Layout Shift (CLS) during client-side hydration?

Prevent CLS by setting explicit width and height attributes on all media elements and rendering skeleton fallbacks with exact CSS dimensions matching the asynchronous content.

Layout shifts happen when elements are dynamically injected into the DOM above existing content. The most common culprit in React applications is client-side data fetching. You render a generic loading spinner, the fetch completes 500ms later, a 400px tall component mounts, and the footer gets shoved down the screen.

If you cannot use React Server Components to render the layout on the server, you must reserve the exact pixel space before the data arrives.

'use client'
 
import useSWR from 'swr'
 
// Minimal JSON fetcher for useSWR
const fetcher = (url: string) => fetch(url).then((res) => res.json())
 
// The skeleton MUST match the final rendered dimensions exactly.
function PricingSkeleton() {
  return (
    <div className="w-full max-w-2xl h-[450px] bg-gray-100 animate-pulse rounded-lg" />
  )
}
 
export function PricingTable() {
  const { data, isLoading } = useSWR('/api/pricing', fetcher)
 
  if (isLoading) return <PricingSkeleton />
 
  return (
    // min-h-[450px] guarantees at least the skeleton's 450px, so the swap can't shift layout
    <div className="w-full max-w-2xl min-h-[450px] border rounded-lg p-6">
      <h2>Pro Tier</h2>
      <p className="text-4xl font-bold">${data.price}</p>
      {/* features list */}
    </div>
  )
}

For images, always provide width and height. Modern browsers use these attributes to compute the aspect ratio and reserve the space before the image downloads.

<!-- Fails CLS: Browser doesn't know height until image decodes -->
<img src="/hero.jpg" class="w-full h-auto" />
 
<!-- Passes CLS: Browser reserves a 16:9 box instantly -->
<img src="/hero.jpg" width="1600" height="900" class="w-full h-auto" />

How do Core Web Vitals and Indxel validation work together?

Core Web Vitals tools measure how fast your page loads, while Indxel CI checks validate that your SEO metadata is intact before you deploy. You need both to rank.

Developers often obsess over shaving 100ms off their LCP while entirely ignoring the metadata that search engines actually parse. A Next.js app with a 0.8s LCP will not rank if a refactor accidentally wiped out the canonical URLs or pushed title tags that exceed 60 characters.

Lighthouse does not catch nuanced SEO failures. It checks if a title exists, not if it is optimized. Indxel runs 15 specific rules covering title length, description presence, og:image HTTP status, canonical URL resolution, and JSON-LD validity.
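For intuition, here is roughly what a title-length rule has to check. This is a simplified sketch, not Indxel's actual implementation, and the regex-based HTML parsing is for illustration only:

```typescript
// Simplified title-length check against rendered HTML
function checkTitleLength(
  html: string,
  maxLength = 60,
): { pass: boolean; title: string } {
  const match = html.match(/<title[^>]*>([^<]*)<\/title>/i)
  const title = match ? match[1].trim() : ''
  // Fails on missing titles and on titles search engines are likely to truncate
  return { pass: title.length > 0 && title.length <= maxLength, title }
}
```

A real validator runs checks like this against every rendered route, not the JSX source, because metadata is often assembled at request time.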

Run Indxel in your CI pipeline alongside your performance tests:

$ npx indxel check --ci
Scanning 47 routes...
 
[FAIL] /blog/react-compiler
  Error (seo/title-length): Title exceeds 60 characters (72)
  Error (seo/canonical): Missing canonical URL
  
[PASS] /docs/getting-started
[PASS] /pricing
 
45/47 pages pass. 2 critical errors found.
Process exited with code 1.

If Indxel exits with code 1, your GitHub Action fails. You catch the missing canonical URL before it hits production, preventing the catastrophic traffic drops that take weeks to recover from.

Frequently Asked Questions

Does First Input Delay (FID) still matter for SEO?

No. Google replaced First Input Delay (FID) with Interaction to Next Paint (INP) in March 2024 and fully deprecated FID in September 2024. Remove FID tracking from your custom analytics dashboards; it no longer affects Search Console reports or rankings.

Do I need to pass Core Web Vitals on both mobile and desktop?

Yes, Google calculates mobile and desktop metrics independently. Because Google uses mobile-first indexing, failing LCP on mobile devices will hurt your search visibility even if your desktop scores are a perfect 100/100. Test your local builds using Chrome DevTools with 3G throttling enabled.

Will a 100/100 Lighthouse score guarantee indexing?

No, Lighthouse scores have zero direct impact on Google Indexing or ranking. Lighthouse is a lab simulation. Googlebot relies on content quality, backlink authority, correct metadata, and 28-day historical CrUX data to determine ranking position.

What is the fastest way to fix INP issues in Next.js?

Migrate heavy client-side components to React Server Components (RSC). Moving data fetching and initial HTML generation to the server ships less JavaScript to the client; less JavaScript means less main-thread blocking, which typically drops INP to near zero for initial interactions.


Stop guessing whether your pull requests are breaking your SEO. Performance is half the battle; structural metadata is the other half. Lock down your SEO infrastructure by adding Indxel to your GitHub Actions workflow today.

# .github/workflows/seo.yml
name: SEO Gatekeeper
on: [pull_request]
 
jobs:
  validate-seo:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: '20'
      - run: npm ci
      - run: npm run build
      # Fails the PR if any metadata rules are violated
      - run: npx indxel check --ci --diff