Google Indexing API in Next.js: Push URLs Programmatically

How to use the Google Indexing API to submit URLs for crawling. Service account setup, JWT auth, batch submission, and rate limits.

March 16, 2026 · 8 min

You just merged a PR that generates 500 new programmatic SEO pages. Vercel builds the site in 45 seconds. The URLs are live. But Google won't crawl them organically for weeks. You could upload an XML sitemap and wait. You could paste URLs one by one into the Google Search Console interface. Or you could write code to push URLs directly to Google's crawl queue the millisecond your content goes live.

Relying on Googlebot to naturally discover your Next.js app's new routes is unpredictable. The Google Indexing API solves this by exposing a REST endpoint that accepts direct ping notifications. When your database updates, your application calls the API, and Google schedules a priority crawl.

Here is exactly how to configure Google Cloud, authenticate via JSON Web Tokens (JWT), and build a Next.js App Router API route to automate your indexation workflow.

What is the Google Indexing API?

The Google Indexing API is a REST endpoint that allows developers to notify Google when URLs are added, updated, or deleted, triggering a priority crawl request instead of waiting for natural discovery.

The API accepts two types of notification events:

  1. URL_UPDATED: Use this when you publish a new page or update an existing one. Googlebot will crawl the URL, read the DOM, and update the index.
  2. URL_DELETED: Use this when you delete a page and return a 404 Not Found or 410 Gone status code. This removes the URL from search results immediately, preventing users from bouncing off dead links.
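Both events share the same two-field request body. Here is a minimal sketch in TypeScript (the type alias name is mine, not from any Google SDK):

```typescript
// Payload shape for the urlNotifications:publish endpoint.
// The type alias name is illustrative, not an official Google type.
type UrlNotification = {
  url: string;                          // fully qualified URL within your Search Console property
  type: 'URL_UPDATED' | 'URL_DELETED';  // the two supported notification events
};

// One example of each event:
const published: UrlNotification = {
  url: 'https://example.com/blog/new-post',
  type: 'URL_UPDATED',
};

const removed: UrlNotification = {
  url: 'https://example.com/blog/old-post',
  type: 'URL_DELETED',
};
```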

Google's official documentation states the Indexing API is restricted to pages with JobPosting or BroadcastEvent structured data. In practice, developers have successfully used the API to index standard articles, e-commerce products, and programmatic SEO pages for years. Google will process the request regardless of schema type.

How do you configure Google Cloud Service Accounts for Indexing?

You must create a Google Cloud Project, enable the Web Search Indexing API, generate a Service Account JSON key, and grant that Service Account "Owner" permissions in Google Search Console.

Without explicit Owner permissions in Search Console, the API will return a 403 Permission Denied error.

Follow these exact steps to provision your credentials:

  1. Go to the Google Cloud Console and create a new project.
  2. Navigate to APIs & Services > Library and search for "Web Search Indexing API". Click Enable.
  3. Go to IAM & Admin > Service Accounts. Click Create Service Account. Name it nextjs-indexer.
  4. Skip the optional role assignments. Click Done.
  5. Click on your new Service Account, go to the Keys tab, and select Add Key > Create new key. Choose JSON. This downloads a file to your machine.
  6. Open Google Search Console. Go to Settings > Users and permissions.
  7. Click Add User. Paste the email address of your Service Account (e.g., nextjs-indexer@your-project.iam.gserviceaccount.com). Set the permission level to Owner.

Save the downloaded JSON file. You will need the client_email and private_key fields for your Next.js environment variables.
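To avoid hand-editing the key into your env file, you can sketch a small helper that converts the downloaded JSON into .env-ready lines (the function name is mine; the important detail is escaping the real newlines in private_key as literal \n so the value survives a single-line env entry):

```typescript
// Convert a downloaded service-account key file into .env-ready lines.
// Real newlines inside private_key are escaped as literal "\n" so the
// value fits on one line in .env.local.
export function toEnvLines(keyFileJson: string): string[] {
  const key = JSON.parse(keyFileJson) as {
    client_email: string;
    private_key: string;
  };
  return [
    `GOOGLE_CLIENT_EMAIL="${key.client_email}"`,
    `GOOGLE_PRIVATE_KEY="${key.private_key.replace(/\n/g, '\\n')}"`,
  ];
}
```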

How do you authenticate the Google Indexing API in Next.js?

Use the google-auth-library npm package to sign a JWT with your Service Account credentials, requesting the https://www.googleapis.com/auth/indexing OAuth scope.

You do not need the massive googleapis SDK. The standalone google-auth-library is far lighter and handles JWT signing and token expiration automatically.

First, install the package:

npm install google-auth-library

Add your credentials to your .env.local file. Format the private key carefully—environment variables often strip newline characters.

GOOGLE_CLIENT_EMAIL="nextjs-indexer@your-project.iam.gserviceaccount.com"
GOOGLE_PRIVATE_KEY="-----BEGIN PRIVATE KEY-----\nMIIEvAIBADANBgkqhkiG9w0BAQEFAASC...\n-----END PRIVATE KEY-----\n"

Create a utility function to handle the token generation. This code extracts the private key, handles escaped newlines from the .env file, and requests an access token.

// lib/google-auth.ts
import { JWT } from 'google-auth-library';
 
export async function getIndexingAccessToken() {
  const clientEmail = process.env.GOOGLE_CLIENT_EMAIL;
  // Replace escaped newlines if they exist in the env string
  const privateKey = process.env.GOOGLE_PRIVATE_KEY?.replace(/\\n/g, '\n');
 
  if (!clientEmail || !privateKey) {
    throw new Error('Missing Google Service Account credentials');
  }
 
  const client = new JWT({
    email: clientEmail,
    key: privateKey,
    scopes: ['https://www.googleapis.com/auth/indexing'],
  });
 
  const tokens = await client.authorize();
  return tokens.access_token;
}

How do you build a Next.js API route for URL submission?

Create an App Router route handler at app/api/index/route.ts that receives a webhook from your CMS, validates the secret, and POSTs the URL to the urlNotifications:publish endpoint.

When your content editors hit publish in Sanity, Contentful, or WordPress, that system fires a webhook to your Next.js application. Your application then relays the new URL to Google.

Here is the implementation for the route handler:

// app/api/index/route.ts
import { NextResponse } from 'next/server';
import { getIndexingAccessToken } from '@/lib/google-auth';
 
export async function POST(req: Request) {
  try {
    // 1. Validate the webhook secret to prevent abuse
    const authHeader = req.headers.get('authorization');
    if (authHeader !== `Bearer ${process.env.WEBHOOK_SECRET}`) {
      return NextResponse.json({ error: 'Unauthorized' }, { status: 401 });
    }
 
    // 2. Parse the URL from the request body
    const body = await req.json();
    const { url, type = 'URL_UPDATED' } = body;
 
    if (!url) {
      return NextResponse.json({ error: 'Missing URL' }, { status: 400 });
    }
 
    // 3. Get the Google OAuth token
    const accessToken = await getIndexingAccessToken();
 
    // 4. Push to the Indexing API
    const response = await fetch('https://indexing.googleapis.com/v3/urlNotifications:publish', {
      method: 'POST',
      headers: {
        'Content-Type': 'application/json',
        Authorization: `Bearer ${accessToken}`,
      },
      body: JSON.stringify({
        url: url,
        type: type, // 'URL_UPDATED' or 'URL_DELETED'
      }),
    });
 
    const data = await response.json();
 
    if (!response.ok) {
      console.error('Google API Error:', data);
      return NextResponse.json(
        { error: data.error?.message ?? 'Unknown error' },
        { status: response.status },
      );
    }
 
    return NextResponse.json({ success: true, notification: data });
    
  } catch (error) {
    console.error('Indexing error:', error);
    return NextResponse.json({ error: 'Internal Server Error' }, { status: 500 });
  }
}

To test this locally, use curl to simulate the CMS webhook:

curl -X POST http://localhost:3000/api/index \
  -H "Authorization: Bearer your_dev_secret" \
  -H "Content-Type: application/json" \
  -d '{"url": "https://yourdomain.com/blog/new-post", "type": "URL_UPDATED"}'

A successful response from Google looks like this:

{
  "success": true,
  "notification": {
    "urlNotificationMetadata": {
      "url": "https://yourdomain.com/blog/new-post",
      "latestUpdate": {
        "type": "URL_UPDATED",
        "notifyTime": "2023-10-24T14:32:11.123Z"
      }
    }
  }
}

How do you handle batch URL submissions and rate limits?

Google enforces a strict quota of 200 URL submissions per day per project, but you can group up to 100 URLs into a single multipart/mixed HTTP request to reduce network overhead.

Batching does not bypass the 200 URLs/day limit. It simply reduces the number of HTTP requests your server makes to Google. If you submit 100 URLs in one batch request, that counts as 100 towards your daily quota.

Metric                       Limit   Reset Interval
Max URLs per day             200     Midnight Pacific Time (PT)
Max requests per minute      100     60 seconds
Max URLs per batch request   100     Per HTTP request

If you need more than 200 URLs per day, you must submit a quota increase request through the Google Cloud Console. Google typically approves requests for e-commerce and news sites within 48 hours.
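Until an increase is granted, you can plan a backlog against both limits up front. Here is a sketch (the helper name is mine) that splits a URL list into daily chunks of 200, then splits each chunk into batch requests of at most 100:

```typescript
// Plan a URL backlog against the Indexing API limits: at most `dailyQuota`
// URLs per day, grouped into batch requests of at most `batchSize` URLs.
// Returns an array of days, each day an array of batches.
export function planSubmissions(
  urls: string[],
  dailyQuota = 200,
  batchSize = 100,
): string[][][] {
  const days: string[][][] = [];
  for (let i = 0; i < urls.length; i += dailyQuota) {
    const day = urls.slice(i, i + dailyQuota);
    const batches: string[][] = [];
    for (let j = 0; j < day.length; j += batchSize) {
      batches.push(day.slice(j, j + batchSize));
    }
    days.push(batches);
  }
  return days;
}
```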

Constructing a multipart/mixed request manually requires strict adherence to HTTP boundary formatting. Here is how to implement a batch submission function:

// lib/google-batch.ts
import { getIndexingAccessToken } from './google-auth';
 
export async function submitBatchUrls(urls: string[], type: 'URL_UPDATED' | 'URL_DELETED' = 'URL_UPDATED') {
  if (urls.length > 100) {
    throw new Error('Batch limit is 100 URLs per request.');
  }
 
  const accessToken = await getIndexingAccessToken();
  const boundary = '==============indxel_boundary==';
  
  let multipartBody = '';
 
  urls.forEach((url) => {
    const payload = JSON.stringify({ url, type });

    // RFC 2046 requires CRLF line endings inside multipart bodies
    multipartBody += `--${boundary}\r\n`;
    multipartBody += 'Content-Type: application/http\r\n\r\n';
    multipartBody += 'POST /v3/urlNotifications:publish HTTP/1.1\r\n';
    multipartBody += 'Content-Type: application/json\r\n';
    // Byte length, not character count, in case the URL contains multibyte characters
    multipartBody += `Content-Length: ${Buffer.byteLength(payload)}\r\n\r\n`;
    multipartBody += `${payload}\r\n`;
  });
 
  multipartBody += `--${boundary}--`;
 
  const response = await fetch('https://indexing.googleapis.com/batch', {
    method: 'POST',
    headers: {
      'Content-Type': `multipart/mixed; boundary=${boundary}`,
      Authorization: `Bearer ${accessToken}`,
    },
    body: multipartBody,
  });
 
  // The response will be a multipart string containing individual statuses
  const rawResponse = await response.text();
  return rawResponse;
}
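Google answers with another multipart body, one part per submitted URL. As a rough sketch only (a robust implementation should split on the boundary taken from the response's Content-Type header), you can pull out the pretty-printed JSON body of each part like this:

```typescript
// Rough extraction of the JSON bodies from a multipart batch response.
// Each part's body is a pretty-printed JSON object, so we grab spans that
// open with "{" and close with a "}" at column zero, then parse them.
// This is a sketch; production code should split on the actual boundary.
export function extractBatchResults(raw: string): unknown[] {
  const results: unknown[] = [];
  const matches = raw.match(/\{[\s\S]*?\n\}/g) ?? [];
  for (const candidate of matches) {
    try {
      results.push(JSON.parse(candidate));
    } catch {
      // Not a complete JSON object; skip it.
    }
  }
  return results;
}
```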

When updating multiple URLs, always batch them. Firing 50 concurrent POST requests to the standard publish endpoint will likely trigger Google's rate limiter and return 429 Too Many Requests.
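For small update sets where multipart batching feels heavy, a throttled sequential loop is enough to stay under the per-minute limit. A sketch (submit is assumed to be whatever function wraps your single-URL POST, such as the route handler logic above):

```typescript
// Submit URLs one at a time with a short pause between requests, staying
// under the per-minute request limit instead of firing them concurrently.
export async function submitWithDelay(
  urls: string[],
  submit: (url: string) => Promise<void>,
  delayMs = 700, // ~85 requests/minute, below the 100/minute limit
): Promise<void> {
  for (const url of urls) {
    await submit(url);
    await new Promise((resolve) => setTimeout(resolve, delayMs));
  }
}
```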

How do you validate URLs in CI/CD before indexing?

Run npx indxel check in your GitHub Actions pipeline to catch missing metadata and broken canonicals before triggering the Indexing API, preventing you from wasting your daily 200-URL quota on invalid pages.

Pushing a URL to Google that returns a 500 error or lacks a <title> tag is worse than doing nothing. Googlebot will crawl the page immediately, log the poor quality, and deprioritize future crawl requests for your domain.
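As a minimal guard, you can sketch this check yourself before calling the API: refuse any page that is not a 200 with a non-empty <title> and no noindex directive. The helper below is illustrative (the function name and heuristics are mine, not from any library); pass it the fetched status code and HTML:

```typescript
// Minimal pre-flight check before spending quota on a URL (sketch).
// Rejects non-200 responses, pages with an empty or missing <title>,
// and pages carrying a noindex meta tag.
export function looksIndexable(status: number, html: string): boolean {
  if (status !== 200) return false;
  const hasTitle = /<title[^>]*>\s*\S[\s\S]*?<\/title>/i.test(html);
  const hasNoindex = /<meta[^>]+noindex/i.test(html);
  return hasTitle && !hasNoindex;
}
```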

You need to guard your build pipeline. Indxel enforces 15 critical SEO rules locally and in CI. If a page fails validation, the build fails, and the Indexing API is never called.

Add this workflow to .github/workflows/seo-check.yml:

name: SEO Validation
on: [push]
 
jobs:
  validate:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      
      - name: Setup Node.js
        uses: actions/setup-node@v4
        with:
          node-version: '20'
          
      - name: Install dependencies
        run: npm ci
        
      - name: Build Next.js app
        run: npm run build
        
      - name: Run Indxel SEO checks
        run: npx indxel check --ci --diff

The CLI outputs warnings in the same format as ESLint: one line per issue, with the file path and rule ID. If app/blog/[slug]/page.tsx is missing a canonical URL, the process exits with code 1. It typically adds only a couple of seconds to your build.

How does automated indexing impact crawl metrics?

Automated indexing reduces time-to-crawl from an average of 14 days to under 5 minutes, ensuring your fresh content appears in search results immediately while saving hours of manual Search Console work.

Consider a Next.js e-commerce site pushing 150 price updates per day. Relying on an XML sitemap means Googlebot pulls the file whenever its internal scheduler decides to—often 48 to 72 hours later. During that window, search results display stale pricing, leading to customer frustration and lower click-through rates.

By wiring the Indexing API to your inventory webhook, Googlebot requests the exact updated URL within minutes. A single API call replaces the manual labor of navigating to Google Search Console, pasting a URL into the Inspect tool, waiting 30 seconds for the UI to load, and clicking "Request Indexing."

Frequently Asked Questions

Can I use the Indexing API for normal blog posts?

Yes, despite the official documentation stating it is only for JobPosting and BroadcastEvent schema, the API successfully processes crawl requests for all content types. Google treats the API ping as a strong signal to crawl the URL, regardless of the structured data payload on the page.

How fast does Google crawl after an API push?

The API triggers a Googlebot crawl within 5 minutes of a successful 200 OK response. You can verify this by checking your server logs for the Googlebot/2.1 user agent or by viewing the Crawl Stats report in Google Search Console.

Does the Indexing API bypass indexing penalties?

No, the API only bypasses the discovery phase of crawling, not the quality evaluation phase. If your page has thin content, duplicate content issues, or a noindex tag, Google will crawl the URL immediately but will still refuse to index it.

How do I check my remaining daily quota?

You can view your remaining quota by navigating to the Google Cloud Console, selecting your project, and going to APIs & Services > Quotas. Filter the list by "Web Search Indexing API" to see your current usage against the 200-URL daily limit.


Before you wire up your webhooks and start burning your 200-URL daily quota, ensure your pages actually pass baseline technical requirements. Catch the 404s and missing metadata locally.

Run this in your terminal:

npx indxel check