Sitemap XML
An XML sitemap is a file that lists URLs on your website along with optional metadata (last modified date, change frequency, priority) to help search engines discover and crawl your pages.
Sitemaps are especially valuable for large sites, new sites with few backlinks, and sites with deep page hierarchies. They give crawlers a complete map of your content, so pages that are weakly linked internally still get discovered.
The sitemap conventionally lives at `/sitemap.xml` and is referenced from `robots.txt` via a `Sitemap:` directive. Each URL entry can include `<lastmod>`, `<changefreq>`, and `<priority>`, but Google ignores `<changefreq>` and `<priority>`; it does use `<lastmod>` when the value is consistently accurate.
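A minimal sitemap file illustrating these tags (the URL and dates are placeholders):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url>
    <loc>https://example.com/blog</loc>
    <lastmod>2024-01-15</lastmod>
    <changefreq>weekly</changefreq>
    <priority>0.8</priority>
  </url>
</urlset>
```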
Next.js generates sitemaps via `app/sitemap.ts`. Indxel validates that your sitemap exists, references only valid URLs, and stays under the 50,000-URL protocol limit.
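When a site exceeds that limit, the standard fix is to split URLs across multiple sitemap files referenced by a sitemap index. A minimal chunking sketch in TypeScript (the `chunkUrls` helper and `SITEMAP_LIMIT` constant are illustrative, not part of Next.js):

```typescript
// Illustrative helper: split a flat URL list into sitemap-sized chunks.
// The sitemap protocol allows at most 50,000 URLs (and 50 MB uncompressed)
// per file; larger sites reference each chunk from a sitemap index.
const SITEMAP_LIMIT = 50_000;

function chunkUrls(urls: string[], limit: number = SITEMAP_LIMIT): string[][] {
  const chunks: string[][] = [];
  for (let i = 0; i < urls.length; i += limit) {
    chunks.push(urls.slice(i, i + limit));
  }
  return chunks;
}

// Example: 5 URLs with a limit of 2 produce 3 chunks.
console.log(chunkUrls(["/a", "/b", "/c", "/d", "/e"], 2).length); // 3
```

Each chunk would then be rendered as its own sitemap file, with a top-level index file pointing at all of them.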
Example
// app/sitemap.ts (Next.js)
import { MetadataRoute } from "next";

export default function sitemap(): MetadataRoute.Sitemap {
  return [
    { url: "https://example.com", lastModified: new Date() },
    { url: "https://example.com/blog", lastModified: new Date() },
  ];
}
Related terms
Robots.txt
Robots.txt is a plain text file at the root of a website that instructs search engine crawlers which URLs they are allowed or disallowed from accessing.
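A typical `robots.txt` combining crawl rules with a sitemap reference (the paths and sitemap URL are placeholders):

```
User-agent: *
Disallow: /admin/
Allow: /

Sitemap: https://example.com/sitemap.xml
```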
Crawl Budget
Crawl budget is the number of URLs Googlebot will crawl on your site within a given period, determined by crawl rate limit (server capacity) and crawl demand (page importance).
Indexation
Indexation is the process by which search engines discover, crawl, and store web pages in their database (index) so they can be returned in search results.
Stop shipping broken SEO
Indxel validates your metadata, guards your CI/CD pipeline, and monitors indexation — so you never miss an SEO issue again.