Noindex
Noindex is a robots meta tag directive that instructs search engines to exclude a page from their search index, preventing it from appearing in search results.
Add `<meta name="robots" content="noindex">` to pages you do not want in search results: admin panels, staging environments, thank-you pages, paginated archives, or search results pages.
Noindex and robots.txt serve different purposes. Robots.txt prevents crawling; noindex prevents indexing. If you block a page with robots.txt, the crawler never fetches it and cannot see the noindex tag, so the page might still get indexed if other pages link to it. To reliably deindex a page, leave it crawlable so the noindex directive can be read.
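As a sketch of the interplay (the `/private/` path is illustrative), a robots.txt rule like this stops crawling but not indexing:

```
# robots.txt — blocks crawling of /private/, not indexing
User-agent: *
Disallow: /private/

# A page under /private/ can still appear in search results if
# other sites link to it: the crawler never fetches the page,
# so it never sees any noindex tag inside it.
```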
In Next.js, set `robots: { index: false }` in your metadata export. Indxel validates that pages meant to be indexed do not have accidental noindex tags, and flags pages that should probably be noindexed.
Example
// Next.js App Router
export const metadata = {
  robots: { index: false, follow: true },
};

// HTML equivalent
// <meta name="robots" content="noindex, follow" />
Related terms
Nofollow
Nofollow is a link attribute (`rel="nofollow"`) that tells search engines not to pass PageRank (link equity) through a specific link, treating it as a hint rather than a directive.
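A minimal sketch of a nofollow link (the URL is illustrative):

```html
<a href="https://example.com/untrusted-page" rel="nofollow">Example link</a>
```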
Robots.txt
Robots.txt is a plain text file at the root of a website that instructs search engine crawlers which URLs they are allowed or disallowed from accessing.
Indexation
Indexation is the process by which search engines discover, crawl, and store web pages in their database (index) so they can be returned in search results.
Canonical URL
A canonical URL is the preferred version of a page's URL, declared via an HTML link element (`rel="canonical"`) that tells search engines which URL to index, consolidating ranking signals when multiple URLs serve similar content.
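A minimal sketch of the declaration, placed in the page `<head>` (the URL is illustrative):

```html
<link rel="canonical" href="https://example.com/products/widget" />
```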
Stop shipping broken SEO
Indxel validates your metadata, guards your CI/CD pipeline, and monitors indexation — so you never miss an SEO issue again.