Guide to implementing speculation rules for more complex sites

Published: March 07, 2025

The Speculation Rules API allows users to benefit from a performance boost by prefetching or prerendering future page navigations, giving quicker—or even instant—page loads.

The API has been specifically designed with ease of implementation in mind, but there are some points that complex sites in particular need to consider before using it. This guide helps site owners understand them.

Planning

Three stages: Plan, Implement, Measure with Plan highlighted.

Before implementing speculation rules, it's worth considering how to implement the API (as there are a few choices), and also the costs of speculations (which should guide which pages you speculate on).

Decide how to implement speculation rules

One of the first decisions you need to make is how to implement speculation rules on your site, as there are various methods you can use:

  • Directly in the HTML of the page
  • Using JavaScript
  • Using an HTTP Header

Ultimately, every method has the same effect, but each has different advantages in terms of ease of implementation and flexibility.

Sites should choose the option that suits them best, and can even use a combination of these options if necessary. Alternatively, they may be implemented using a plugin (such as the Speculative Loading plugin for WordPress) or libraries or platforms which may make the choice for you, but it's still worth being aware of the options available.

Include speculation rules directly in the HTML of the page

Speculation rules can be implemented directly on the page by including a <script type="speculationrules"> element in its HTML. This can be added either at build time for static sites using templates, or at run time by the server when the page is requested. Rules can even be injected into the HTML by edge workers (though the HTTP header method discussed later in this guide is probably easier for that).

This lets you include static rules across the whole site, while document rules can still be dynamic by letting you choose which URLs on the page to speculate, using rules triggered by CSS classes:

<script type="speculationrules">
  {
    "prerender": [{
      "where": { "selector_matches": ".prerender" }
    }],
    "prefetch": [{
      "where": { "selector_matches": ".prefetch" }
    }]
  }
</script>

The previous script will prerender links that have a prerender class, and similarly prefetch links that have a prefetch class, letting developers include these classes in the HTML to trigger speculations.

On top of including these classes on links in a page's initial HTML, links will also be speculated if those classes are added dynamically by your app, which allows your app to trigger (and remove) speculations as needed, as the sketch below shows. This can be simpler than creating or removing more specific speculation rules. It's also possible to include multiple speculation rules per page if you want a base rule used by most of the site, plus page-specific rules.
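For example, here is a minimal sketch of toggling a speculation this way (the checkout link and the moment you choose to add the class are illustrative, and the document rules shown earlier are assumed to be on the page):

// Illustrative: start prerendering the checkout page once the user
// shows purchase intent, by adding the class the document rule matches.
const checkoutLink = document.querySelector('a[href="/checkout"]');
checkoutLink?.classList.add('prerender');

// Removing the class again cancels the speculation.
// checkoutLink?.classList.remove('prerender');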

Alternatively, if you do need to use more specific speculation rules, then page-specific or template-specific rules can allow different rules for certain pages or page types.

Lastly, server-side rendered pages can also have more dynamic rules based on whatever information is available to the server—such as analytics information for that page or common user journeys for certain pages.

Add speculation rules using JavaScript

An alternative to including the rules in an on-page script is to inject them using JavaScript. This may require fewer updates to page templates. For example, having a tag manager inject the rules can be a quick way of rolling out speculation rules (and also allows for turning them off quickly if needed).

This option also allows dynamic, client-side rules based on how the user interacts with the page. For example, if the user adds an item to the basket, you could prerender the checkout page. Alternatively, this can be used to trigger speculations based on certain conditions. While the API includes an eagerness setting that allows for basic interaction-based rules, JavaScript allows developers to use their own logic to decide when and on which page(s) to speculate.
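A minimal sketch of injecting such a rule (the add-to-basket button and /checkout URL are illustrative):

// Illustrative: inject a speculation rules script to prerender the
// checkout page once an item is added to the basket.
function speculateCheckout() {
  // Feature-detect speculation rules support first.
  if (!HTMLScriptElement.supports?.('speculationrules')) return;

  const script = document.createElement('script');
  script.type = 'speculationrules';
  script.textContent = JSON.stringify({
    prerender: [{ urls: ['/checkout'] }]
  });
  document.body.append(script);
}

document.querySelector('#add-to-basket')
  ?.addEventListener('click', speculateCheckout, { once: true });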

As mentioned previously, an alternative approach to inserting new rules is to have a base document rule on the page and have JavaScript trigger those document rules by adding the appropriate classes to links, causing them to match the rule.

Add speculation rules using an HTTP header

The final option for developers is to include the rules using an HTTP header:

Speculation-Rules: "/speculationrules.json"

There are some additional requirements as to how the rules resource (/speculationrules.json in this example) is delivered and used.
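For example, a minimal /speculationrules.json could contain a standard rule set; one such requirement is that it must be served with the application/speculationrules+json MIME type:

{
  "prerender": [{
    "where": { "selector_matches": ".prerender" }
  }]
}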

This option allows for easier deployment by CDNs without the need to alter document contents. However, it does mean the speculation rules can't be altered dynamically using JavaScript. Document rules with CSS selector triggers can still allow dynamic changes though—for example, by removing the prerender class from a link.

Similar to the JavaScript option, implementing speculation rules with an HTTP header allows them to be implemented independently of the site's content, which can make it easier to add and remove the rules without a full site rebuild.

Consider cost implications

Before implementing speculation rules, it pays to take a little time to consider the cost implications for both your users and your site with this API. Costs include bandwidth (costing both users and sites money!) and processing costs (on both the client and server side).

Consider cost for users

Speculatively loading means making an educated guess as to where a user may navigate to next. However, if that navigation does not take place, then you may have wasted resources. This is why you should be conscious of the impact on users, in particular:

  • Extra bandwidth used to download those future navigations—especially on mobile, where bandwidth may be more constrained.
  • Extra processing costs to render those pages when using prerender.

With completely accurate predictions, there are no extra costs, because visitors will visit those pages next, with the only difference being that those costs are front-loaded. However, predicting the future with complete accuracy is impossible, and the more aggressive the speculation strategy, the higher the risk of wastage.

Chrome has carefully considered this problem, and the API includes a number of features that make the cost much lower than you may think. In particular, by reusing the HTTP cache and not loading cross-origin iframes, the cost of prerendering a same-site navigation is often considerably smaller than that of a full page load with no cached resources.

Even with these safeguards, however, sites should carefully consider which pages to speculate on, and the user cost of such speculations. Good candidates for speculative loading are pages that can be predicted with a high degree of confidence (perhaps based on analytics, or common user journeys) and whose cost is low (for example, less rich pages).

You may also want to consider which JavaScript can be delayed until activation. Similar to lazy loading content until it's needed, this can make prerenders cheaper while still providing a much improved user experience. With cheaper speculations, you may feel comfortable speculating more frequently or eagerly.

Where this is not possible, a less aggressive strategy using moderate, or conservative, eagerness rules is recommended. Alternatively, you can use prefetch, which has considerably less cost than prerendering when confidence is low, and then upgrade to a full prerender if confidence grows—for example, as a link is hovered or actually clicked.

Consider extra backend load

As well as considering the extra costs to users, site owners should consider their own infrastructure costs. If every navigation results in two, three, or even more page loads, then backend costs may increase by using this API.

Ensuring your pages and resources are cacheable will significantly reduce the amount of origin load, and therefore the overall risk. When coupled with a CDN, your origin servers should see minimal extra load—though do consider any CDN cost increases.

A server or CDN can also control how speculative requests are handled, as identified by the Sec-Purpose HTTP header. For example, Cloudflare's Speed Brain product only allows speculations that are already cached at a CDN edge server, and won't send requests back to the origin.
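As a rough sketch of the same idea at the origin (assuming a Node.js Express server; the route is illustrative), you could decline speculative requests for an expensive page by checking that header:

const express = require('express');
const app = express();

app.get('/expensive-search', (req, res, next) => {
  // Speculative loads are flagged with the Sec-Purpose header
  // ("prefetch" for prefetches, "prefetch;prerender" for prerenders).
  const secPurpose = req.get('Sec-Purpose') || '';
  if (secPurpose.includes('prefetch')) {
    // A non-2xx response declines the speculation; a real navigation
    // will simply request the page again as normal.
    return res.sendStatus(503);
  }
  next();
});

app.listen(3000);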

However, as speculative loads are typically used for same-origin page loads, users will often already have shared resources in their browser's cache—assuming they're cacheable in the first place—so again, a speculation is usually not as costly as a full page load.

Find the balance between speculating too much or too little

The key to making the most of the Speculation Rules API is to find the balance between speculating too much (where costs are needlessly paid and speculations go unused) and too conservatively (too little, or too late, where little benefit is realized).

Where costs are cheap (for example, small, statically generated pages cached on CDN edge nodes), you can afford to be more aggressive with speculations.

However, for larger, richer pages that perhaps cannot be cached at the CDN edge, more care should be taken. Similarly, resource-intensive pages can use up network bandwidth or processing power, which can negatively impact the current page. The aim of the API is to improve performance, so performance regressions are definitely not what we want! This is another reason to keep prerenders down to one or two pages at most (note also that Chrome limits prerenders to two or ten at a time, depending on eagerness).

Steps to implement speculation rules

Three stages: Plan, Implement, Measure with Implement highlighted.

Once you've decided how to implement speculation rules, you next need to plan what to speculate and how to roll this out. Simpler sites, like static personal blogs, may be able to jump straight to fully prerendering certain pages, but more complex sites have more to consider.

Start with prefetch

Prefetch is usually relatively safe to implement for most sites, and it is the initial approach taken by many, including large-scale rollouts by Cloudflare and WordPress.

The main issues to be aware of are whether prefetching a URL will cause any state changes, and the server-side costs, particularly for uncacheable pages. Ideally, state-changing URLs—for example, a /logout page—shouldn't be implemented as GET links, but sadly this is not uncommon on the web.

Such URLs can be specifically excluded from the rules:

<script type="speculationrules">
  {
    "prefetch": [{
      "where": {
        "and": [
          { "href_matches": "/*" },
          { "not": {"href_matches": "/logout"}}
        ]
      },
      "eagerness": "moderate"
    }]
  }
</script>

Prefetches can be limited to common navigations from one page to another, or applied to all same-origin links on hover or click using the moderate or conservative eagerness settings. The conservative setting carries the lowest risk, but also the lowest potential reward. If starting there, aim to advance at least to moderate, and ideally beyond that to eager, to yield more performance benefits (then upgrade further to prerender where this makes sense).

Low-risk prerenders

Prefetch speculations are easier to deploy, but the ultimate performance benefit of the API comes with prerender. Prerendering has some extra considerations when the page is not visited shortly after speculating (covered in the next section), but a moderate or conservative prerender, where navigation is likely to happen shortly afterwards, may be a relatively low-risk next step.

<script type="speculationrules">
  {
    "prerender": [{
      "where": {
        "and": [
          { "href_matches": "/*" },
          { "not": {"href_matches": "/logout"}}
        ]
      },
      "eagerness": "moderate"
    }]
  }
</script>

Prefetch common pages to improve non-eager prerenders

One common tactic is to prefetch a smaller number of frequently visited next pages on load with an eager setting (either by specifying them in a URL list or using selector_matches), and then prerender with a moderate setting. As the HTML prefetch is likely to have completed by the time a link is hovered, this gives a boost over just prerendering on hover without a prefetch.

<script type="speculationrules">
  {
    "prefetch": [{
      "urls": ["next.html", "next2.html"],
      "eagerness": "eager"
    }],
    "prerender": [{
      "where": {
        "and": [
          { "href_matches": "/*" },
          { "not": {"href_matches": "/logout"}}
        ]
      },
      "eagerness": "moderate"
    }]
  }
</script>

Earlier prerenders

While moderate document rules allow for relatively low-risk use of the API with an associated ease of implementation, the time between hover and click often isn't enough for a full prerender. To achieve the instant navigations this API allows, you'll likely need to go beyond that and prerender pages more eagerly.

This is achieved with a static list of URLs (like the prefetch example previously), or with selector_matches identifying a small number of URLs (ideally one or two pages), with document rules covering the other URLs:

<script type="speculationrules">
  {
    "prerender": [
      {
        "where": {
          "selector_matches": : ".prerender"
        },
        "eagerness": "eager",
      },
      {
        "where": {
          "and": [
            { "href_matches": "/*" },
            { "not": {"href_matches": "/logout"}}
          ]
        },
        "eagerness": "moderate"
      }
    ]
  }
</script>

This may require traffic analysis to give the best chance of accurately predicting the next navigation. Understanding typical customer journeys through your site can also help to identify good candidates for speculative loading.

Moving to more eager prerendering may also introduce more considerations around analytics, ads, and JavaScript, and the need to keep a prerendered page up to date, or even to cancel or refresh speculations on state changes.

Analytics, ads, and JavaScript

When using prerender, more complex sites must also consider the impact on analytics. You usually don't want to log a page (or ad) view when the page is speculated, but only when the speculation is activated.

Some analytics providers (such as Google Analytics) and ad providers (such as Google Publisher Tag) already support speculation rules, and won't log views until the page is activated. However, other providers, or custom analytics you have implemented, may need extra consideration.

You can add checks to JavaScript to prevent execution of specific bits of code until pages are activated or made visible, and even wrap whole <script> elements in such checks. Where pages use tag managers to inject such scripts, it may be possible to tackle these all in one go by delaying the tag manager script itself.
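For example, a minimal sketch of such a check, with initAnalytics standing in for the code being deferred:

// Stand-in for the analytics or ads code to defer until activation.
function initAnalytics() { /* ... */ }

if (document.prerendering) {
  // Wait until the prerendered page is actually shown to the user.
  document.addEventListener('prerenderingchange', initAnalytics, { once: true });
} else {
  initAnalytics();
}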

Similarly, consent managers are an opportunity to delay third-party scripts until activation. Google has been working with various consent management platforms to make them prerender-aware, and we're happy to help others looking to do the same. PubTech is one such company that offers developers the choice to run or block its JavaScript during prerendering.

For application code, you can similarly delay execution until activation, especially where the page does not require the JavaScript to render. This is a quicker and safer option, but it does mean all the delayed code runs at once on activation. That can result in a lot of work at activation time, which can impact INP, especially as the page may look fully loaded and ready to interact with.

Additionally, if any content depends on JavaScript (for example, client-side rendered content), delaying this will reduce the positive impact on LCP and CLS that prerendering can bring. A more targeted approach to allow more of the JavaScript to run during the prerendering phase will result in a better experience, but may be less trivial to implement.

Starting with delaying most script tags completely can be a good initial strategy for more complex sites. However, to get the most benefit out of the API, allowing as much JavaScript as possible to run during prerendering should be the ultimate goal.

Sites with analytics or ads concerns may also want to start with prefetch, where these are less of a concern, while they consider what needs to be done to support prerendering.

Update prerender speculations

When prerendering pages in advance of navigation, there is a risk that the prerendered page becomes out of date. For example, on an ecommerce site, a prerendered page may include a checkout basket—either a full basket of items, or just a counter showing the number of items in the basket on other pages. If more items are added to the basket and the user then navigates to a prerendered page, it would be confusing to see the old checkout state.

This is not a new problem and when users have multiple tabs open in the browser they experience the same issue. However, with prerendered pages this is both more likely and more unexpected since the user did not knowingly initiate the prerender.

The Broadcast Channel API is one way to allow one page in the browser to broadcast updates like this to other pages. This would also solve the multiple-tabs problem. Prerendered pages can listen to broadcast messages—though they can't send their own broadcast messages until activated.
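As a minimal sketch (the channel name and message shape are illustrative):

// On the page the user is viewing: broadcast basket changes.
const basketChannel = new BroadcastChannel('basket');
basketChannel.postMessage({ itemCount: 3 });

// On the prerendered page: listen for updates and refresh the UI.
new BroadcastChannel('basket').addEventListener('message', ({ data }) => {
  document.querySelector('.basket-count').textContent = data.itemCount;
});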

Alternatively, prerendered pages can get updates from the server (using a periodic fetch() or a WebSocket connection), though updates may lag.

Cancel or refresh prerender speculations

Updating prerendered pages is the recommended approach: it continues to make use of prerendered pages while avoiding confusion for users. Where this is not possible, you can cancel the speculations instead.

This can also be used to remain within Chrome's limits if sites want to prerender other pages which are more likely to be visited.

To cancel speculations, you need to remove the speculation rules from the page—or remove classes or other matching criteria if using that approach. Alternatively, the speculated page can call window.close() if it detects it is no longer current. However, if the page is able to detect this, a better option is usually to update its state to bring it back up to date.
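A minimal sketch of the first two options (the checkout link is illustrative):

// Cancel a speculation by removing the matching criteria the
// document rule relies on...
document.querySelector('a[href="/checkout"]')?.classList.remove('prerender');

// ...or by removing a speculation rules script entirely.
document.querySelector('script[type="speculationrules"]')?.remove();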

It is also possible to reinsert these rules (or matching criteria) so pages can be re-prerendered (though again, keeping the existing page up to date is usually the better option as it is less wasteful). After speculation rules are removed, the reinsertion must happen in a new microtask or later, to allow the browser to notice the removals and cancel the speculations. One approach to remove and reinsert all speculation rules scripts is shown in the following example:

async function refreshSpeculations() {
  const speculationScripts = document.querySelectorAll('script[type="speculationrules"]');

  for (const speculationScript of speculationScripts) {
    // Get the current rules as JSON text
    const ruleSet = speculationScript.textContent;

    // Remove the existing script to reset prerendering
    speculationScript.remove();
    
    // Wait a microtask before reinserting, so the browser notices
    // the removal and cancels the speculation.
    await Promise.resolve();

    // Reinsert the rules in a new speculation rules script
    const newScript = document.createElement('script');
    newScript.type = 'speculationrules';
    newScript.textContent = ruleSet;

    // Append the new script back to the document
    document.body.appendChild(newScript);
  }
}

Removing rules will cancel existing prerenders (or prefetches), but reinserting the rules will only retrigger immediate or eager speculations (including URL-list rules, which default to immediate). Moderate or conservative speculations will be removed but not automatically retriggered until the link is interacted with again.

This refresh option is not restricted to JavaScript-inserted rules. Static rules included in the HTML can also be removed or changed in the same way, since this is a standard DOM change. Speculation rules set by HTTP header cannot be removed, but matching criteria (for example, prerender classes) can be removed and re-added by JavaScript.

Chrome is also looking at adding Clear-Site-Data header support to allow server responses to cancel prerenders (for example, when an update-basket request is made).

Measure impact

Three stages: Plan, Implement, Measure with Measure highlighted.

After implementing speculation rules, you should measure their success rather than assume they automatically made pages faster. As mentioned previously, overspeculation can actually cause performance regressions if the client or server is overworked.

When implementing in multiple steps (prefetch, then low-risk prerenders, then earlier prerenders), you should measure at each step.

How to measure success

Speculation rules should have a positive impact on key performance metrics like LCP (and possibly also on CLS and INP), but the improvement may not be obvious in overall site-level metrics. This is because sites may predominantly be made up of other navigation types (for example, landing pages), or because same-origin navigations are already fast enough that even dramatically improving them may not affect the 75th percentile metrics reported in the Chrome User Experience Report (CrUX).

You can use the page navigation types in CrUX to check what percentage of navigations are navigate_cache or prerender, and whether that's increasing over time. However, for detailed analysis you may need to use Real User Monitoring (RUM) to segment your data by speculated navigations to see how much faster they are than other navigations.
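When segmenting with RUM, note that metrics for prerendered pages should be measured from activation rather than navigation start (libraries such as web-vitals handle this for you). A rough sketch of that adjustment for LCP:

// Report an activation-adjusted LCP: prerendered pages start loading
// long before the user actually navigates to them.
new PerformanceObserver((entryList) => {
  const lcpEntry = entryList.getEntries().at(-1);
  const activationStart =
    performance.getEntriesByType('navigation')[0]?.activationStart ?? 0;
  // Clamp at 0: anything painted before activation appears instant.
  const lcp = Math.max(lcpEntry.startTime - activationStart, 0);
  console.log('LCP (ms):', lcp);
}).observe({ type: 'largest-contentful-paint', buffered: true });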

How to measure usage and wastage

Another key consideration is to measure whether you are speculating on the correct pages. This both avoids waste and ensures you're targeting the pages that gain the most from this API.

Unfortunately, the page initiating the speculations cannot directly see the status of speculation attempts. Additionally, attempts cannot be presumed to have been triggered, since the browser may hold back speculations in certain circumstances. Speculations must therefore be measured on the speculated page itself, which requires checking two APIs to see whether the page is prerendering or has prerendered:

if (document.prerendering) {
  console.log("Page is prerendering");
} else if (performance.getEntriesByType("navigation")[0]?.activationStart > 0) {
  console.log("Page has already prerendered");
} else {
  console.log("This page load was not using prerendering");
}

The speculated page can then log the speculation attempt to backend servers.
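For example, a minimal sketch of such logging (the /speculation-log endpoint is hypothetical):

// Report whether this page load was speculated, for backend analysis.
const navEntry = performance.getEntriesByType('navigation')[0];
navigator.sendBeacon('/speculation-log', JSON.stringify({
  url: location.href,
  prerendered: document.prerendering || navEntry?.activationStart > 0
}));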

One complication with analytics is that prerender-aware providers (such as Google Analytics) ignore analytics calls until the page is activated—even separate event calls. Google Analytics users must therefore use another option, such as server-side logging.

It is also possible to do this client-side, whereby each prerendered page logs the prerender to shared storage, and the calling page reads this. localStorage works best since it can be read when navigating away from a page (note that sessionStorage cannot be used, since it has special handling for prerendered pages). However, be aware that localStorage is not transactionally safe, and other pages may be updating it at the same time if multiple pages are prerendered. This demo uses a unique hash and individual entries to avoid such issues.
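A rough sketch of that client-side approach (the key naming here is illustrative, and simpler than the demo's hashing):

// On the prerendered page: record the speculation under a unique key.
if (document.prerendering) {
  localStorage.setItem(`speculation-${location.pathname}-${Date.now()}`, 'prerendered');
}

// On the page being navigated away from: read the entries back to
// compare speculations made against the navigation actually taken.
window.addEventListener('pagehide', () => {
  const speculated = Object.keys(localStorage)
    .filter((key) => key.startsWith('speculation-'));
  console.log(speculated);
});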

Conclusion

Speculation rules offer the possibility of a dramatic boost to page performance. This guide gives advice on the considerations when implementing this API, both to avoid potential issues and to get the most out of it.

Up-front planning of the implementation will avoid rework. Particularly for more complex sites, this should be followed by a multi-step rollout: starting with prefetch before moving on to low-risk prerenders and then earlier prerenders. Finally, it's important to measure the improvements, and any usage and wastage, to ensure you're making optimal use of the API.