
First, I found some resources online, here and here, saying roughly the same thing:

For a normal/soft reload, the browser will re-validate the cache, checking to see if the files are modified.

I tested it in Chrome. I have a webpage, index.html, which loads a few JavaScript files at the end of the body. When hitting the refresh button (soft/normal reload), I saw in the Network panel that index.html was 304 Not Modified, which was good. However, all the JavaScript files were loaded from the memory cache with status code 200. No revalidation!

Then I tried modifying one of the JavaScript files and did a soft reload. And guess what? That file was still loaded from the memory cache!

Why does Chrome do this? Doesn't that defeat the purpose of the refresh button?

Here is more information about Chrome's memory cache.

Shawn
  • How are you serving the files? If your server is adding cache control headers that may be the cause. I would check the Network tab for the cached assets and review their headers. – Rob M. Aug 23 '17 at 01:01
  • `Doesn't that defeat the purpose of the refresh button?` Not really, or that would defeat the purpose of a hard refresh. – Keith Aug 23 '17 at 01:07
  • @Rob The resources I linked in the beginning say that soft reload will "re-validate" the cache, even if the cache is not expired. If you open the page through the address bar, then it will not re-validate the cache if it's not expired. See [here](https://www.raymond.cc/blog/refresh-webpage-with-soft-or-hard-reload-in-web-browsers/). So cache control shouldn't be the cause, right? – Shawn Aug 23 '17 at 01:08
  • I personally never let the browser control my website caching needs. For javascript files it's easy to version them; in the simplest form you can just put a query param on them. But better than this is to make it automatic from your build tool. I do this using webpack, and my URLs have the webpack hashes on them. – Keith Aug 23 '17 at 01:13
  • 1
    If the html doesn't get reloaded/updated, then the users still get the old javascript file, aren't they? – Shawn Aug 23 '17 at 01:15
  • The problem is that JS / CSS files get cached with the "Disable cache" checkbox activated in the dev window (no, I don't have server side caching). This is not what anyone would expect. – needfulthing Feb 12 '19 at 11:17

2 Answers


This is relatively new behaviour, introduced by Chrome in 2017.

The well-known behaviour of browsers is to revalidate cached resources when the user refreshes the page (either with the CTRL+R combination or the dedicated refresh button) by sending an If-Modified-Since or If-None-Match header. This applies to all resources obtained by GET requests: stylesheets, scripts, HTML documents etc. It leads to tons of HTTP requests that in the majority of cases end with 304 Not Modified responses.

The most popular websites are the ones with constantly changing content, so their users tend to refresh them habitually to get the latest news, tweets, videos and posts. It's not hard to imagine how many unnecessary requests this produced every second, and since the best request is the one never made, Facebook decided to address the problem and asked Chrome and Firefox to find a solution together.


Chrome came up with the following solution.

Instead of revalidating each subresource, Chrome only checks whether the HTML document has changed. If it hasn't, it's very likely that everything else is also unmodified, so everything is served from the browser's cache. This works best when each resource has a content-addressed URL, for example one that embeds a hash of the file's content. Users can always bypass this behaviour by performing a hard refresh.
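The content-addressed naming mentioned above can be sketched roughly like this (the function name and hash length are illustrative; build tools like webpack generate such names automatically):

```python
# Sketch of content-addressed asset naming: embed a hash of the file's
# bytes in its name, so any change to the content produces a new URL and
# the stale cached copy is simply never requested again.
import hashlib

def hashed_name(filename: str, content: bytes) -> str:
    """Return e.g. 'app.<8-hex-digest>.js' for ('app.js', <bytes>)."""
    digest = hashlib.sha256(content).hexdigest()[:8]
    stem, dot, ext = filename.rpartition(".")
    return f"{stem}.{digest}.{ext}" if dot else f"{filename}.{digest}"
```

Because the URL changes whenever the content does, such assets can be cached essentially forever without any revalidation.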

Firefox's solution gives more control to developers, and it's well on its way to being implemented by all browser vendors: the new Cache-Control directive, immutable.
You can find more information about it here: https://developer.mozilla.org/pl/docs/Web/HTTP/Headers/Cache-Control#Revalidation_and_reloading
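As a rough sketch of how a server might combine these directives (the paths and policy below are assumptions for illustration, not something from the answer): long-lived, immutable caching for hashed static assets, and no-cache for the HTML document so it is always revalidated:

```python
# Sketch: picking Cache-Control values per resource type, assuming
# static assets use hashed (content-addressed) filenames.

def cache_headers(path: str) -> dict:
    """Return response headers for a request path (illustrative policy)."""
    if path.endswith((".js", ".css", ".png")):
        # Hashed assets never change in place, so cache for a year and
        # mark them immutable: supporting browsers skip revalidation
        # even on a normal reload.
        return {"Cache-Control": "public, max-age=31536000, immutable"}
    # The HTML document must always be revalidated so users pick up
    # new asset URLs after a deploy.
    return {"Cache-Control": "no-cache"}
```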



Browser caches are a little more complex than the simple 200s and 304s they once were; they pay attention to server-side directives in headers that tell them how to handle caching for each specific site.

We can adjust browser caching behaviour using various headers such as Cache-Control. By setting the time before a resource expires, you can tell the browser to use its local copy instead of requesting a fresh one. These settings can be quite aggressive for content you really don't want re-fetched (e.g. a company's logo), for example: Cache-Control: public, max-age=31536000.

Additionally, you can set the Expires header, which does almost the same as Cache-Control but with a little less control: it simply sets a date after which the browser considers an asset stale and re-requests it. Even on a re-request, we could still get a cached result if the server sends back a 304 Not Modified response.

A lot of web servers ship with settings that allow more aggressive caching of certain asset files (JS, images, CSS) but less aggressive caching of content files.

John Mitchell