
I'm trying to avoid CORS preflight requests for authorized GET requests, for latency reasons. The simple way to do that is putting the access token in a URL query parameter, but this is a bad security practice.

According to this answer, the goal of browsers is to block anything that couldn't already be accomplished with HTML tags like img or script. But if that's the case, why is it allowed to set headers like Accept or Content-Language? You can't set those on an img tag. Also, what's preventing me from hiding my access token in the Accept header like this:

Accept: */*, x-access-token/<access_token>
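
Concretely, that hack would look something like this from the browser side (a sketch; the URL, token value, and x-access-token media type are placeholders):

    // Sketch only (not an endorsement): smuggle a token inside the CORS-safelisted
    // Accept header so the cross-origin GET stays a "simple" request with no preflight.
    async function getWithSmuggledToken(accessToken: string) {
      return fetch("https://api.example.com/things", {
        headers: {
          // Accept is safelisted, so no OPTIONS preflight is sent, as long as the
          // value avoids the bytes the Fetch spec forbids and stays fairly short.
          Accept: `*/*, x-access-token/${accessToken}`,
        },
      });
    }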

It seems like the browser policies in this case don't add extra protection, and encourage developers to use insecure practices or nasty hacks. What am I missing?

anderspitman
  • That img tag interpretation is just over-simplified. CORS policy at the implementation level is white-list based; it allows a limited set of headers and some other stuff. If you're interested you can search for the spec. But back to your question, the why: because it's white-list based. – hackape May 02 '20 at 17:18
  • Thanks. But what exactly is it accomplishing, given how easy it is to put arbitrary data in the Accept header? – anderspitman May 02 '20 at 17:19
  • It's easy, but what can be done with that arbitrary data? No harm is done. – hackape May 02 '20 at 17:24
  • Right, so what's the harm in allowing arbitrary custom headers as well? – anderspitman May 02 '20 at 17:27
  • CORS is designed for two things. First, it's a firewall that works on the client side, to prevent a careless server at site A from serving sensitive data in response to a malicious site B. Second, because of the previous restriction, it decides some ceremonial rules are needed to make sure that, if any client should bypass that restriction, that client and the corresponding server must go through some serious negotiation, to ensure they both know what they are doing. – hackape May 02 '20 at 17:31
  • No harm, but in order to make browser developers' lives easier (and to make yours harder), the implementation is white-list based. – hackape May 02 '20 at 17:33
  • I could buy that. So why not whitelist something like 'Custom-Data: ' to allow a simple escape hatch for developers who know what they're doing? – anderspitman May 02 '20 at 17:51
  • I really think I understand the purpose of preflights. My point is that they're easy (but hacky) to work around, so it would be nice if there were a clean, predefined escape hatch for those who are willing to accept the risks. – anderspitman May 02 '20 at 20:54
  • As far as the *“why is it allowed to set headers like `Accept` or `Content-Language`”?* part of this question, see the discussion at https://lists.w3.org/Archives/Public/public-webappsec/2013Aug/thread.html#msg44. *“Accept is pretty random due to plugins. Accept-Language and Content-Language I guess we considered safe enough. Not sure there was any particularly strong rationale”* and *“In the end, it looks somewhat arbitrary because it reflects the vagaries of the evolution in the previous 15 years of the Web platform.”* – sideshowbarker May 02 '20 at 23:47
  • The question is closed so I can't answer, but the point of the restrictions is to protect servers that predate the specification. So no custom header "escape hatch" will work, since the goal is to protect servers that assumed that no custom headers could ever be added. That's why a separate channel (the `OPTIONS` preflight request) had to be used. The policy *is* adding extra protection for the specific use case it was intended for. That doesn't include your use case, in which you control both the client and the server, so it's not surprising that it's easy for you to work around. – Kevin Christopher Henry May 03 '20 at 05:53
  • What is the risk to a hypothetical server that isn't expecting custom headers? Will it not just ignore them? – anderspitman May 03 '20 at 16:30
  • @anderspitman: One possibility is that it will simply assume the request is coming from the same domain, since it knows the browser wouldn't send custom headers cross-domain. The point here is that the CORS authors considered the same-origin policy to be an implicit contract that they couldn't break when it comes to potentially unsafe methods. – Kevin Christopher Henry May 04 '20 at 03:10
  • @anderspitman: You may not agree with that policy or the tradeoffs that were made (I have doubts myself), but given that position it's clear that they can't allow the kind of escape hatch you want to be embedded in the request itself since that would by definition be a new kind of request. Hence the opt-in has to come by way of a separate channel, in this case the `OPTIONS` request. – Kevin Christopher Henry May 04 '20 at 03:10
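
To make the "separate channel" point from the comments above concrete, here is a rough sketch of what adding a custom header actually costs (the header name and URLs are illustrative):

    // Sketch: a non-safelisted header makes the cross-origin request non-simple,
    // so the browser first sends its own OPTIONS preflight, roughly:
    //
    //   OPTIONS /things HTTP/1.1
    //   Origin: https://app.example.com
    //   Access-Control-Request-Method: GET
    //   Access-Control-Request-Headers: x-access-token
    //
    // and only performs the GET if the server replies with matching
    // Access-Control-Allow-Origin and Access-Control-Allow-Headers.
    async function getWithCustomHeader(accessToken: string) {
      return fetch("https://api.example.com/things", {
        headers: {
          // Not on the CORS safelist, so this alone triggers the preflight.
          "X-Access-Token": accessToken,
        },
      });
    }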

1 Answer


What's the question?

FYI: You don't actually have a singular question that can be answered. You've got, like, a million.

Your title asks a rather philosophical unanswerable question, but your post is asking for a solution to a use case.

Why do browsers allow setting some headers without CORS, but not others?

Personal / team bias. Politics. Religion.

It seems like the browser policies in this case don't add extra protection, and encourage developers to use insecure practices or nasty hacks. What am I missing?

It's called "Security Theater".

It's when people who know better make a political choice that appears so easy (or so difficult) to understand that those who don't have the knowledge to judge such things (or don't have to implement them) just accept it and get on with their lives - or, in the case of Verisign, VPNs, and others, that lets someone turn a profit.

why is it allowed to set headers like Accept or Content-Language?

Those are benign headers that don't carry anything particularly identifiable or sensitive.

Trying to avoid pre-flights with access tokens

The simple way to do that is putting the access token in a URL query parameter, but this is a bad security practice.

Yes and no.

If it's the session token and it lasts 90 days... sure, there are some downsides... assuming that you're either not using https (which IS bad) or that the attacker already has access to the user's machine (via code or otherwise)... in which case the attacker has access to their email to reset all of their passwords and logins, and probably their MFA (e.g. iMessage / Authy / LastPass) as well, so... meh

If it's a short-lived ("short" meaning, say, 15 minutes) token on non-sensitive data (e.g. social media junk), who cares?

You could also make a single-use token which, assuming you don't put sensitive info in the token itself, would make everyone happy.
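
Something along these lines, for instance (a sketch assuming you control both ends; the /short-lived-token endpoint, response shape, and lifetime are made up):

    // Sketch: trade the long-lived credential for a short-lived (say, 15 minute)
    // or single-use token, then put *that* in the query string so the
    // latency-sensitive GET stays a simple request with no preflight.
    async function fetchWithShortLivedToken(sessionToken: string) {
      // Hypothetical endpoint that mints a short-lived token. This POST is
      // preflighted, but only once per token lifetime, not once per GET.
      const mint = await fetch("https://api.example.com/short-lived-token", {
        method: "POST",
        headers: { Authorization: `Bearer ${sessionToken}` },
      });
      const { token } = await mint.json(); // e.g. { token: "...", expiresIn: 900 }

      // Query-param auth: URLs end up in logs and caches, but the token
      // expires quickly, which is the trade-off described above.
      return fetch(
        `https://api.example.com/things?access_token=${encodeURIComponent(token)}`
      );
    }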

Dirty thoughts

Have you considered putting up a single endpoint that can proxy the requests? That's what all the kids are doing these days (looking at you, GraphQL).
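
For example, roughly (a minimal sketch assuming a Node 18+ server for the built-in fetch; the /api/ prefix, upstream URL, and auth header are placeholders):

    // Sketch: a same-origin proxy in front of the API, so the browser never makes
    // a cross-origin request and CORS (and its preflights) never comes into play.
    import http from "node:http";

    const UPSTREAM = "https://api.example.com"; // hypothetical upstream API

    http
      .createServer(async (req, res) => {
        if (req.url?.startsWith("/api/")) {
          // Server-to-server call: no CORS here, and the real credential
          // stays on the server instead of in the browser.
          const upstream = await fetch(UPSTREAM + req.url.slice("/api".length), {
            headers: { "X-Access-Token": "server-side-secret" }, // hypothetical auth
          });
          res.writeHead(upstream.status, {
            "content-type": upstream.headers.get("content-type") ?? "text/plain",
          });
          res.end(Buffer.from(await upstream.arrayBuffer()));
        } else {
          res.writeHead(404).end();
        }
      })
      .listen(3000);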

And if you try hard enough, iframes always have some way to be abused to solve your problem. They're the WD-40 (or duct-tape) of the web. Search your feelings... you know it to be true.

coolaj86
  • Fair enough, thanks for your input! Unfortunately a proxy won't work for my case, because reasons. Most likely I'm just going to keep using query params, unless I run into issues with caching, then I might use the Accept hack. – anderspitman May 02 '20 at 20:31
  • Please don't use the Accept hack. But do limit your token lifetimes and use the queries all day long. – coolaj86 May 02 '20 at 22:54