X-Robots-Tag
X-Robots-Tag is an HTTP response header that provides the same indexing directives as the robots meta tag but can be applied to any file type, including PDFs, images, videos, and API responses.
The robots meta tag works only in HTML documents. For non-HTML resources (PDFs, images, JSON files), use the X-Robots-Tag HTTP header to control indexing instead. The directives are the same: `noindex`, `nofollow`, `nosnippet`, `noarchive`, and so on.
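For instance, a server might attach the header to a PDF response like this (an illustrative raw response; the values are made up):

```
HTTP/1.1 200 OK
Content-Type: application/pdf
X-Robots-Tag: noindex, nofollow
```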
Common use cases include keeping duplicate-content PDFs out of the index, blocking image files from appearing in Google Images, and preventing API endpoints from being indexed. You can also target a specific crawler, e.g. `X-Robots-Tag: googlebot: noindex`.
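To make the header syntax concrete, here is a hypothetical helper (not part of any library) that splits a header value into an optional crawler name and its directives; the directive list follows Google's documented set:

```js
// Hypothetical parser for an X-Robots-Tag header value.
// A leading "name:" token that is not itself a known directive is
// treated as a crawler name, per the "googlebot: noindex" form.
function parseXRobotsTag(value) {
  const known = new Set([
    "all", "noindex", "nofollow", "none", "noarchive",
    "nosnippet", "notranslate", "noimageindex", "unavailable_after",
  ]);
  const colon = value.indexOf(":");
  const head = colon === -1 ? "" : value.slice(0, colon).trim().toLowerCase();
  const userAgent = head && !known.has(head) ? head : null;
  const rest = userAgent ? value.slice(colon + 1) : value;
  return {
    userAgent,
    directives: rest.split(",").map((d) => d.trim().toLowerCase()),
  };
}
```

With this sketch, `parseXRobotsTag("googlebot: noindex")` yields `userAgent: "googlebot"` and `directives: ["noindex"]`, while a plain `"noindex, nofollow"` yields no crawler name.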
In Next.js, set X-Robots-Tag via the `headers()` function in `next.config.js` or in middleware. This is particularly useful for PDF downloads, API routes, and static assets that should not be indexed.
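As a minimal sketch of the middleware approach (assuming a Next.js app with a `middleware.js` at the project root; `NextResponse` comes from `next/server`):

```js
// middleware.js — sketch only; assumes a Next.js runtime
import { NextResponse } from "next/server";

export function middleware(request) {
  const response = NextResponse.next();
  // Tag file downloads so crawlers neither index them nor follow their links.
  if (request.nextUrl.pathname.startsWith("/downloads/")) {
    response.headers.set("X-Robots-Tag", "noindex");
  }
  return response;
}

// Run the middleware only on the paths that need the header.
export const config = { matcher: "/downloads/:path*" };
```

You can verify the header is being sent with `curl -I` against the deployed URL.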
Example
```js
// next.config.js — X-Robots-Tag for non-HTML resources
module.exports = {
  async headers() {
    return [
      {
        source: "/api/:path*",
        headers: [
          { key: "X-Robots-Tag", value: "noindex, nofollow" },
        ],
      },
      {
        source: "/downloads/:path*",
        headers: [
          { key: "X-Robots-Tag", value: "noindex" },
        ],
      },
    ];
  },
};
```

Related terms
Robots Meta Tag
The robots meta tag is an HTML element in the `<head>` that provides page-level instructions to search engine crawlers about indexing and link-following behavior.
Noindex
Noindex is a robots meta tag directive that instructs search engines to exclude a page from their search index, preventing it from appearing in search results.
Robots.txt
Robots.txt is a plain text file at the root of a website that instructs search engine crawlers which URLs they are allowed or disallowed from accessing.