292

When comparing an HTTP GET to an HTTP POST, what are the differences from a security perspective? Is one of the choices inherently more secure than the other? If so, why?

I realize that POST doesn't expose information on the URL, but is there any real value in that or is it just security through obscurity? Is there ever a reason that I should prefer POST when security is a concern?

Edit:
Over HTTPS, POST data is encoded, but could URLs be sniffed by a 3rd party? Additionally, I am dealing with JSP; when using JSP or a similar framework, would it be fair to say the best practice is to avoid placing sensitive data in the POST or GET altogether and using server side code to handle sensitive information instead?

James McMahon
    There is a nice blog entry about this on Jeff's blog Coding Horror: [Cross-Site Request Forgeries and You](http://www.codinghorror.com/blog/archives/001171.html). – fhe Oct 13 '08 at 18:10
  • Wouldn't you use POST for most things? E.g. for an API, say you needed to GET data from a DB, but before the server returns data you would have to be authenticated first. Using POST you would simply pass your session ID + all the parameters you need for the request. If you used a GET request for this then your session ID could easily be found either in your browser history or somewhere in the middle. – James111 Dec 19 '15 at 01:45
  • I remember this discussion from before the war (99' or '00 or so) when https wasn't prevalent. – David Tonhofer Apr 12 '17 at 09:36
  • @DavidTonhofer, which war are you referring to? The browser war? – DeltaFlyer Nov 09 '18 at 01:38
  • @DeltaFlyer No, the Forever War on Stuff, aka GWOT. What have we done. – David Tonhofer Nov 09 '18 at 22:52

28 Answers

434

The GET request is marginally less secure than the POST request. Neither offers true "security" by itself; using POST requests will not magically make your website secure against malicious attacks by a noticeable amount. However, using GET requests can make an otherwise secure application insecure.

The mantra that you "must not use GET requests to make changes" is still very much valid, but this has little to do with malicious behaviour. Login forms are the ones most sensitive to being sent using the wrong request type.

Search spiders and web accelerators

This is the real reason you should use POST requests for changing data. Search spiders will follow every link on your website, but will not submit random forms they find.

Web accelerators are worse than search spiders, because they run on the client’s machine, and "click" all links in the context of the logged in user. Thus, an application that uses a GET request to delete stuff, even if it requires an administrator, will happily obey the orders of the (non-malicious!) web accelerator and delete everything it sees.
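
To make that concrete, here is a minimal sketch (a hypothetical Java servlet; the class and method names are mine, and authorization and CSRF checks are omitted for brevity) of keeping the destructive action out of reach of anything that merely follows GET links:

import java.io.IOException;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

// Hypothetical servlet: the destructive action is reachable only via POST, so
// crawlers and web accelerators that blindly follow GET links cannot trigger it.
public class DeleteItemServlet extends HttpServlet {

    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp)
            throws ServletException, IOException {
        // Refuse to change state on GET; 405 tells the client to use POST instead.
        resp.setHeader("Allow", "POST");
        resp.sendError(HttpServletResponse.SC_METHOD_NOT_ALLOWED, "Use POST to delete items");
    }

    @Override
    protected void doPost(HttpServletRequest req, HttpServletResponse resp)
            throws ServletException, IOException {
        String id = req.getParameter("id");
        deleteItem(id);                                   // placeholder for your persistence layer
        resp.sendRedirect(req.getContextPath() + "/items");
    }

    private void deleteItem(String id) {
        // ... delete the row from the database ...
    }
}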

Confused deputy attack

A confused deputy attack (where the deputy is the browser) is possible regardless of whether you use a GET or a POST request.

On attacker-controlled websites GET and POST are equally easy to submit without user interaction.

The only scenario in which POST is slightly less susceptible is that many websites that aren’t under the attacker’s control (say, a third-party forum) allow embedding arbitrary images (allowing the attacker to inject an arbitrary GET request), but prevent all ways of injecting an arbitrary POST request, whether automatic or manual.

One might argue that web accelerators are an example of confused deputy attack, but that’s just a matter of definition. If anything, a malicious attacker has no control over this, so it’s hardly an attack, even if the deputy is confused.
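
Whichever verb is used, the standard defence against this confused-deputy/CSRF problem is a per-session secret that an attacker's page cannot supply. The answer doesn't prescribe an implementation; the following is only a minimal sketch of the synchronizer-token pattern as a hypothetical Java servlet filter (names are mine):

import java.io.IOException;
import java.security.SecureRandom;
import java.util.Base64;
import javax.servlet.Filter;
import javax.servlet.FilterChain;
import javax.servlet.FilterConfig;
import javax.servlet.ServletException;
import javax.servlet.ServletRequest;
import javax.servlet.ServletResponse;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import javax.servlet.http.HttpSession;

// Hypothetical synchronizer-token filter: a per-session random token must be
// echoed back in every state-changing request, which an attacker's page cannot guess.
public class CsrfTokenFilter implements Filter {

    private static final SecureRandom RANDOM = new SecureRandom();

    @Override
    public void doFilter(ServletRequest request, ServletResponse response, FilterChain chain)
            throws IOException, ServletException {
        HttpServletRequest req = (HttpServletRequest) request;
        HttpServletResponse resp = (HttpServletResponse) response;
        HttpSession session = req.getSession(true);

        // Lazily create the per-session token; your forms render it as a hidden field.
        String token = (String) session.getAttribute("csrfToken");
        if (token == null) {
            byte[] bytes = new byte[32];
            RANDOM.nextBytes(bytes);
            token = Base64.getUrlEncoder().withoutPadding().encodeToString(bytes);
            session.setAttribute("csrfToken", token);
        }

        // Only state-changing methods need the check; GET/HEAD should be side-effect free anyway.
        String method = req.getMethod();
        if (!"GET".equals(method) && !"HEAD".equals(method)) {
            String submitted = req.getParameter("csrfToken");
            if (submitted == null || !submitted.equals(token)) {   // use a constant-time compare in production
                resp.sendError(HttpServletResponse.SC_FORBIDDEN, "Missing or invalid CSRF token");
                return;
            }
        }
        chain.doFilter(request, response);
    }

    @Override public void init(FilterConfig filterConfig) { }
    @Override public void destroy() { }
}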

Proxy logs

Proxy servers are likely to log GET URLs in their entirety, without stripping the query string. POST request parameters are not normally logged. Cookies are unlikely to be logged in either case. (example)

This is a very weak argument in favour of POST. Firstly, un-encrypted traffic can be logged in its entirety; a malicious proxy already has everything it needs. Secondly, the request parameters are of limited use to an attacker: what they really need is the cookies, so if the only thing they have are proxy logs, they are unlikely to be able to attack either a GET or a POST URL.

There is one exception for login requests: these tend to contain the user’s password. Saving this in the proxy log opens up a vector of attack that is absent in the case of POST. However, login over plain HTTP is inherently insecure anyway.

Proxy cache

Caching proxies might retain GET responses, but not POST responses. Having said that, GET responses can be made non-cacheable with less effort than converting the URL to a POST handler.
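
For reference, that "less effort" is just a matter of setting standard HTTP caching headers on the GET response. A minimal sketch as a hypothetical Java servlet:

import java.io.IOException;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

// Hypothetical servlet showing the headers that mark a GET response as non-cacheable.
public class AccountDetailsServlet extends HttpServlet {

    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp)
            throws ServletException, IOException {
        resp.setHeader("Cache-Control", "no-store, no-cache, must-revalidate");
        resp.setHeader("Pragma", "no-cache");   // for legacy HTTP/1.0 caches
        resp.setDateHeader("Expires", 0);

        resp.setContentType("text/html");
        resp.getWriter().println("<!-- sensitive, non-cacheable content would be written here -->");
    }
}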

HTTP "Referer"

If the user were to navigate to a third party website from the page served in response to a GET request, that third party website gets to see all the GET request parameters.

Belongs to the category of "reveals request parameters to a third party", whose severity depends on what is present in those parameters. POST requests are naturally immune to this, however to exploit the GET request a hacker would need to insert a link to their own website into the server’s response.
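
The original answer predates it, but the standard Referrer-Policy response header is one way to limit this particular leak today. A minimal sketch as a hypothetical servlet filter, purely for illustration:

import java.io.IOException;
import javax.servlet.Filter;
import javax.servlet.FilterChain;
import javax.servlet.FilterConfig;
import javax.servlet.ServletException;
import javax.servlet.ServletRequest;
import javax.servlet.ServletResponse;
import javax.servlet.http.HttpServletResponse;

// Hypothetical filter that asks browsers not to send a Referer header at all when
// users follow links away from pages whose URLs contain sensitive query strings.
public class ReferrerPolicyFilter implements Filter {

    @Override
    public void doFilter(ServletRequest request, ServletResponse response, FilterChain chain)
            throws IOException, ServletException {
        ((HttpServletResponse) response).setHeader("Referrer-Policy", "no-referrer");
        chain.doFilter(request, response);
    }

    @Override public void init(FilterConfig filterConfig) { }
    @Override public void destroy() { }
}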

Browser history

This is very similar to the "proxy logs" argument: GET requests are stored in the browser history along with their parameters. The attacker can easily obtain these if they have physical access to the machine.

Browser refresh action

The browser will retry a GET request as soon as the user hits "refresh". It might do that when restoring tabs after shutdown. Any action (say, a payment) will thus be repeated without warning.

The browser will not retry a POST request without a warning.

This is a good reason to use only POST requests for changing data, but has nothing to do with malicious behaviour and, hence, security.
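
The common remedy is the Post/Redirect/Get pattern (not spelled out in the answer). A minimal sketch as a hypothetical Java servlet, with placeholder names:

import java.io.IOException;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

// Hypothetical Post/Redirect/Get sketch: the POST does the work once and then redirects,
// so hitting "refresh" re-issues a harmless GET instead of repeating the payment.
public class PaymentServlet extends HttpServlet {

    @Override
    protected void doPost(HttpServletRequest req, HttpServletResponse resp)
            throws ServletException, IOException {
        String orderId = req.getParameter("orderId");
        chargeCustomer(orderId);                              // placeholder for the real payment call
        resp.sendRedirect(req.getRequestURI() + "?orderId=" + orderId);
    }

    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp)
            throws ServletException, IOException {
        // The GET side only displays the receipt; refreshing it changes nothing.
        resp.setContentType("text/html");
        resp.getWriter().println("Payment received. Your receipt is ready.");
    }

    private void chargeCustomer(String orderId) {
        // ... call the payment gateway exactly once ...
    }
}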

So what should I do?

  • Use only POST requests to change data, mainly for non-security-related reasons.
  • Use only POST requests for login forms; doing otherwise introduces attack vectors.
  • If your site performs sensitive operations, you really need someone who knows what they’re doing, because this can’t be covered in a single answer. You need to use HTTPS, HSTS, CSP, mitigate SQL injection, script injection (XSS), CSRF, and a gazillion of other things that may be specific to your platform (like the mass assignment vulnerability in various frameworks: ASP.NET MVC, Ruby on Rails, etc.). There is no single thing that will make the difference between "secure" (not exploitable) and "not secure".

Over HTTPS, POST data is encoded, but could URLs be sniffed by a 3rd party?

No, they can’t be sniffed. But the URLs will be stored in the browser history.

Would it be fair to say the best practice is to avoid placing sensitive data in the POST or GET altogether and using server side code to handle sensitive information instead?

Depends on how sensitive it is, or more specifically, in what way. Obviously the client will see it. Anyone with physical access to the client’s computer will see it. The client can spoof it when sending it back to you. If those matter then yes, keep the sensitive data on the server and don’t let it leave.

Community
John Gietzen
    ahem, CSRF is just as possible with POST. – AviD Dec 13 '10 at 12:09
  • @AviD It's just slightly more difficult, as you'll also have to incorporate XSS to get someone else to send an undesired POST request. – Lotus Notes Dec 14 '10 at 21:35
  • 5
    @Lotus Notes, it is very slightly more difficult, but you do not need any kind of XSS. POST requests are being sent all the time all over the place, and don't forget that CSRF can be sourced from *any* website, XSS not included. – AviD Dec 15 '10 at 06:00
  • 1
    So, something is more secure because to "hack the planet" I need to know to type in `wget --post-data 'action=delete&id=3' http://example.com/do.php'` instead of simply `wget http://example.com/do.php?action=delete&id=3'` ? That's not more secure at all. It's virtually the same thing. – Incognito Jan 17 '11 at 19:54
  • 18
    No, you have to make somebody else with privileges type it, as opposed to a GET, which will be silently fetched by the browser. Considering that every POST form should be protected with a verifiable source hash, and there's no such means for a GET link, your point is silly. – kibitzer Jan 17 '11 at 20:06
  • 7
    Well, you could add a hash to all your GET requests exactly the same way you add them to POST forms... But you should still not use GET for anything that modifies data. – Eli Jan 17 '11 at 21:07
  • 1
    @kibitzer — What Eli said. Also you could send a link to the admin and when they click it you can fill and submit a form with javascript. POST doesn't functionally offer any extra security than GET, they can both be executed silently within the browser it's just that forum software tends to allow GET requests to third-party sites using images. – DaveJ Jan 18 '11 at 00:55
  • 13
    Using POST over GET doesn't prevent any kind of CSRF. It just makes them slightly easier to do, since it's easier to get people to go to a random website that allows images from urls, than go to a website that you control (enough to have javascript). An auto-submitted form isn't really that hard to fire by getting someone to click a link (to a page that contains that html).
    – FryGuy Jan 18 '11 at 01:53
  • 2
    I can't believe nobody brought this up until I did just now: POST DOES NOT LOG BY DEFAULT, GET DOES LOG, therefore POST < GET for security. And I mean POST parameters; sure, GET can have them also, but then you're talking about very atypical activity. Sorry about the caps, I didn't mean to scream ;) – RandomNickName42 Apr 26 '11 at 23:49
  • That's because you're talking about specific software, and not POST vs GET - the server software I'm using doesn't log any requests at all for example, and I would never tell people to use Apache or the like anyway if security (or speed, or ease of use, or ...) is of concern. – griffin Aug 16 '13 at 10:07
  • Nobody has mentioned it here, but browsers happily execute GET requests that inject code across random domains into the client's browser - a script link can be loaded from anywhere (which is why JSONP could exist and is actively being shunned). That's a fairly large attack surface. – kert Jan 28 '14 at 09:38
  • Imagine if someone manages to quietly poison one of the popular compressed JavaScript libraries (jQuery) on a prominent CDN site, both with and without SSL... browsers don't do any code signature checking or even md5sums, just an HTTP GET. – kert Jan 28 '14 at 09:47
  • Even Google are using GET to change data, the argument that you must not use GET requests to make changes is silly, it is a historical way of how methods in forms were implemented way back in 1990's. GET is sent with the URL, POST is sent with the header. Just choose whatever is good for your applications, there aren't any strict rules here – PalDev Jan 08 '20 at 22:10
213

As far as security, they are inherently the same. While it is true that POST doesn't expose information via the URL, it exposes just as much information as a GET in the actual network communication between the client and server. If you need to pass information that is sensitive, your first line of defense would be to pass it using Secure HTTP.

GET, or query string, requests are really good for information required either for bookmarking a particular item, or for assisting in search engine optimization and indexing items.

POST is good for standard forms used to submit one time data. I wouldn't use GET for posting actual forms, unless maybe in a search form where you want to allow the user to save the query in a bookmark, or something along those lines.

stephenbayer
    With the caveat that for a GET the URL shown in the location bar can expose data that would be hidden in a POST. – tvanfosson Oct 13 '08 at 18:29
  • 98
    It's hidden only in the most naive sense – davetron5000 Oct 13 '08 at 18:33
  • 8
    true, but you can also say that the keyboard is insecure because someone could be looking over your shoulder when you're typing your password. There's very little difference between security through obscurity and no security at all. – stephenbayer Oct 14 '08 at 14:59
  • 1
    As mentioned in this post, if data is going to be submitted and especially if it will be saved, it should not be using GET parameters. That's one easy way for cross site scripting attacks to take place. – Gavin H Aug 19 '09 at 04:38
  • 68
    The visibility (and caching) of querystrings in the URL and thus the address box is *clearly* less secure. There's no such thing as absolute security so degrees of security are relevant. – pbreitenbach Aug 19 '09 at 05:06
  • @stephenbayer: Suppose that I need to send, say, order id to server to search for the corresponding order details, and then show it to the user for modification. If I use "GET", the order id will be exposed, and if the user is clever enough, he/she may change the order id in the url, which will allow him/her to see orders from other users. What should I do in that case ? – MD Sayem Ahmed Mar 19 '10 at 14:31
  • 6
    it's even exposed if you use a post. in your case, the post would be slightly more secure. But seriously.. I can change post variables all day long, just as easily as get variables. Cookies can even be viewed and modified.. never rely on the information your site is sending in any way shape or form. The more security you need, the more verification methods you should have in place. – stephenbayer Mar 21 '10 at 14:35
  • 2
    @Night Shade. How about validating that the order belongs to the user before displaying it? If the user is "clever enough" (read, if they can install any of the gazillion browser plugins that allow you modifiy the http request at will), POST doesn't buy you anything. – Juan Pablo Califano Oct 02 '10 at 01:56
  • This answer is totally misleading and dangerous. I can't imagine how it got so many votes. If GET and POST are the same, why do we need them at all? – RocketR Apr 12 '13 at 19:15
  • 5
    @RocketR POST and GET are only verbs; you can apply a few verbs to an HTTP request. The difference is mainly semantics: GET is used to get information, POST is used to send new information, and PUT is used to update information. Well, that was the design idea, but people use whatever verbs they want. POST transmissions are no more secure than GET; security is handled at a different OSI layer. It is dangerous to think there is a difference in security between GET and POST. – stephenbayer Apr 13 '13 at 06:22
  • 2
    @stephenbayer Please read the community answer below and you'll know the difference. I don't mean you can be secure just by using POST alone, but it's a necessary part of a bigger strategy. And it is too thoughtless to just say there's no difference without giving more context. I've seen too many Rails-people using a "catch-everything" method `match` in place of specific `get`, `post`, etc, who don't have a clue about CSRF. – RocketR Apr 15 '13 at 10:37
  • 2
    URLs also are frequently logged, for example by nginx. POST is more secure because request bodies are rarely logged. You should never send sensitive information in a query string. – jesse reiss Oct 27 '14 at 01:35
177

You get no greater security by sending variables over HTTP POST than you do by sending them over HTTP GET.

HTTP/1.1 provides us with a bunch of methods to send a request:

  • OPTIONS
  • GET
  • HEAD
  • POST
  • PUT
  • DELETE
  • TRACE
  • CONNECT

Let's suppose you have the following HTML document using GET:

<html>
<body>
<form action="http://example.com" method="get">
    User: <input type="text" name="username" /><br/>
    Password: <input type="password" name="password" /><br/>
    <input type="hidden" name="extra" value="lolcatz" />
    <input type="submit"/>
</form>
</body>
</html>

What does your browser ask? It asks this:

 GET /?username=swordfish&password=hunter2&extra=lolcatz HTTP/1.1
 Host: example.com
 Connection: keep-alive
 Accept: application/xml,application/xhtml+xml,text/html;q=0.9,text/ [...truncated]
 User-Agent: Mozilla/5.0 (Windows; U; Windows NT 6.0; en-US) [...truncated]
 Accept-Encoding: gzip,deflate,sdch
 Accept-Language: en-US,en;q=0.8
 Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.3

Now let's pretend we changed that request method to a POST:

 POST / HTTP/1.1
 Host: example.com
 Connection: keep-alive
 Content-Length: 49
 Cache-Control: max-age=0
 Origin: null
 Content-Type: application/x-www-form-urlencoded
 Accept: application/xml,application/xhtml+xml,text/ [...truncated]
 User-Agent: Mozilla/5.0 (Windows; U; Windows NT 6.0; [...truncated]
 Accept-Encoding: gzip,deflate,sdch
 Accept-Language: en-US,en;q=0.8
 Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.3

 username=swordfish&password=hunter2&extra=lolcatz

Both of these HTTP requests are:

  • Not encrypted
  • Carrying the credentials (username, password, and the hidden field) in full
  • Open to eavesdropping and man-in-the-middle (MITM) attacks
  • Easily reproduced by a third party or by script bots

Many browsers do not support HTTP methods other than POST/GET.

Many browser behaviors store the page address, but this doesn't mean you can ignore any of these other issues.

So to be specific:

Is one inherently more secure than another? I realize that POST doesn't expose information on the URL but is there any real value in that or is it just security through obscurity? What is the best practice here?

This is correct. The fact that the software you're using to speak HTTP tends to store the request variables for one method but not the other only protects against someone looking at your browser history, or some other naive attack from a 10-year-old who thinks they understand h4x0r1ng, or scripts that check your history store. If you have a script that can check your history store, you could just as easily have one that checks your network traffic, so this entire security through obscurity is only providing obscurity to script kiddies and jealous girlfriends.

Over https, POST data is encoded, but could urls be sniffed by a 3rd party?

Here's how SSL works. Remember those two requests I sent above? Here's what they look like in SSL: (I changed the page to https://encrypted.google.com/ as example.com doesn't respond on SSL).

POST over SSL

q5XQP%RWCd2u#o/T9oiOyR2_YO?yo/3#tR_G7 2_RO8w?FoaObi)
oXpB_y?oO4q?`2o?O4G5D12Aovo?C@?/P/oOEQC5v?vai /%0Odo
QVw#6eoGXBF_o?/u0_F!_1a0A?Q b%TFyS@Or1SR/O/o/_@5o&_o
9q1/?q$7yOAXOD5sc$H`BECo1w/`4?)f!%geOOF/!/#Of_f&AEI#
yvv/wu_b5?/o d9O?VOVOFHwRO/pO/OSv_/8/9o6b0FGOH61O?ti
/i7b?!_o8u%RS/Doai%/Be/d4$0sv_%YD2_/EOAO/C?vv/%X!T?R
_o_2yoBP)orw7H_yQsXOhoVUo49itare#cA?/c)I7R?YCsg ??c'
(_!(0u)o4eIis/S8Oo8_BDueC?1uUO%ooOI_o8WaoO/ x?B?oO@&
Pw?os9Od!c?/$3bWWeIrd_?( `P_C?7_g5O(ob(go?&/ooRxR'u/
T/yO3dS&??hIOB/?/OI?$oH2_?c_?OsD//0/_s%r

GET over SSL

rV/O8ow1pc`?058/8OS_Qy/$7oSsU'qoo#vCbOO`vt?yFo_?EYif)
43`I/WOP_8oH0%3OqP_h/cBO&24?'?o_4`scooPSOVWYSV?H?pV!i
?78cU!_b5h'/b2coWD?/43Tu?153pI/9?R8!_Od"(//O_a#t8x?__
bb3D?05Dh/PrS6_/&5p@V f $)/xvxfgO'q@y&e&S0rB3D/Y_/fO?
_'woRbOV?_!yxSOdwo1G1?8d_p?4fo81VS3sAOvO/Db/br)f4fOxt
_Qs3EO/?2O/TOo_8p82FOt/hO?X_P3o"OVQO_?Ww_dr"'DxHwo//P
oEfGtt/_o)5RgoGqui&AXEq/oXv&//?%/6_?/x_OTgOEE%v (u(?/
t7DX1O8oD?fVObiooi'8)so?o??`o"FyVOByY_ Supo? /'i?Oi"4
tr'9/o_7too7q?c2Pv

(note: I converted the HEX to ASCII, some of it should obviously not be displayable)

The entire HTTP conversation is encrypted, the only visible portion of communication is on the TCP/IP layer (meaning the IP address and connection port information).

So let me make a big bold statement here. Your website is not provided greater security over one HTTP method than it is another, hackers and newbs all over the world know exactly how to do what I've just demonstrated here. If you want security, use SSL. Browsers tend to store history, it's recommended by RFC2616 9.1.1 to NOT use GET to perform an action, but to think that POST provides security is flatly wrong.

The only thing that POST is a security measure towards? Protection against your jealous ex flipping through your browser history. That's it. The rest of the world is logged into your account laughing at you.

To further demonstrate why POST isn't secure, Facebook uses POST requests all over the place, so how can software such as FireSheep exist?

Note that you may be attacked with CSRF even if you use HTTPS and your site does not contain XSS vulnerabilities. In short, this attack scenario assumes that the victim (the user of your site or service) is already logged in and has a proper cookie, and then the victim's browser is requested to do something with your (supposedly secure) site. If you do not have protection against CSRF the attacker can still execute actions with the victim's credentials. The attacker cannot see the server response because it will be transferred to the victim's browser, but the damage is usually already done at that point.

Mikko Rantalainen
Incognito
    A shame you didn't talk about CSRF :-). Is there any way to contact you? – Florian Margaine May 17 '12 at 17:38
  • @FlorianMargaine Add me on twitter and I'll DM you my email. https://twitter.com/#!/BrianJGraham – Incognito May 18 '12 at 01:21
  • Who said Facebook is secure? Good answer though. `+1`. – Amal Murali Nov 09 '13 at 20:26
  • 1
    "[...] so this entire security through obscurity is only providing obscurity to script kiddies and jealous girlfriends.[...]" . this entirely depends on the skills of the jealous gf. moreover, no gf/bf should be allowed to visit your browser history. ever. lol. – turkishweb Jul 04 '16 at 11:55
35

There is no added security.

POST data does not show up in the history and/or log files, but if the data should be kept secure, you need SSL.
Otherwise, anybody sniffing the wire can read your data anyway.

Jacco
    if you GET a URL over SSL, a third party will not be able to see the URL, so the security is the same – davetron5000 Oct 13 '08 at 18:33
  • that's correct, nemo. Obviously users can still see the data in the URL. – Eric Wendelin Oct 13 '08 at 18:34
  • 7
    GET information can only be seen at the start and end of the SSL tunnel – Jacco Oct 13 '08 at 18:35
  • 1
    And the sys admins when they grep through the log files. – Tomalak Oct 13 '08 at 18:36
  • 1
    I would say there is *some* added security in that POST data won't be stored in the user's browser history, but GET data will. – Kip Oct 13 '08 at 20:16
  • 3
    HTTP over SSL/TLS (implemented correctly) allows the attacker sniffing the wire (or actively tampering) to see only two thing -- the IP address of the destination, and the amount of data going both ways. – Aaron Jan 17 '11 at 22:06
29

Even if POST gives no real security benefit versus GET, for login forms or any other form with relatively sensitive information, make sure you are using POST as:

  1. The information POSTed will not be saved in the user's history.
  2. The sensitive information (password, etc.) sent in the form will not be visible later on in the URL bar (by using GET, it will be visible in the history and the URL bar).

Also, GET has a theoretical limit on the amount of data; POST doesn't.

For really sensitive info, make sure to use SSL (HTTPS).

Andrew Moore
  • In the default settings, every time I enter a username and password in firefox / IE, it asks me if I want to save this information, specifically so I won't have to type it in later. – Kibbee Oct 13 '08 at 18:50
  • Andrew I think he means auto complete on user input fields. For instance, Firefox remembers all data I enter in my website, so when I begin to type text into a search box it will offer to complete the text with my previous searches. – James McMahon Oct 13 '08 at 18:51
  • Yes, well, that's the point of auto-complete, isn't it. What I meant was the actually history, not auto-complete. – Andrew Moore Oct 13 '08 at 20:11
  • If the attacker can access full browser history, he has access to full browser auto-complete data, too. – Mikko Rantalainen May 13 '13 at 09:38
19

Neither one of GET and POST is inherently "more secure" than the other, just like neither one of fax and phone is "more secure" than the other. The various HTTP methods are provided so that you can choose the one which is most appropriate for the problem you're trying to solve. GET is more appropriate for idempotent queries, while POST is more appropriate for "action" queries, but you can shoot yourself in the foot just as easily with either of them if you don't understand the security architecture of the application you're maintaining.

It's probably best if you read Chapter 9: Method Definitions of the HTTP/1.1 RFC to get an overall idea of what GET and POST were originally envisioned to mean.

Mihai Limbășan
16

The difference between GET and POST should not be viewed in terms of security, but rather in their intentions towards the server. GET should never change data on the server - at least other than in logs - but POST can create new resources.

Nice proxies won't cache POST data, but they may cache GET data from the URL, so you could say that POST is supposed to be more secure. But POST data would still be available to proxies that don't play nicely.

As mentioned in many of the answers, the only sure bet is via SSL.

But DO make sure that GET methods do not commit any changes, such as deleting database rows, etc.

ruquay
7

My usual methodology for choosing is something like:

  • GET for items that will be retrieved later by URL
    • E.g. Search should be GET so you can do search.php?s=XXX later on
  • POST for items that will be sent
    • This is relatively invisible compared to GET and harder to send, but data can still be sent via cURL.
Ross
  • But it *is* harder to do a POST than a GET. A GET is just a URL in the address box. A POST requires a form in an HTML page or cURL.
    – pbreitenbach Aug 19 '09 at 05:10
  • 2
    So a fake post takes notepad and 5 minutes of time... not really much harder. I have used notepad to add features to a phone system that didn't exist. I was able to create a copy of the admin forms for the system that would allow me to assign commands to buttons that "were not possible" as far as the vendor was concerned. – Matthew Whited Nov 16 '09 at 19:50
7

This isn't security related but... browsers don't cache POST requests.

Daniel Silveira
6

Neither one magically confers security on a request, however GET implies some side effects that generally prevent it from being secure.

GET URLs show up in browser history and webserver logs. For this reason, they should never be used for things like login forms and credit card numbers.

However, just POSTing that data doesn't make it secure, either. For that you want SSL. Both GET and POST send data in plaintext over the wire when used over HTTP.

There are other good reasons to POST data, too - like the ability to submit unlimited amounts of data, or hide parameters from casual users.

The downside is that users can't bookmark the results of a query sent via POST. For that, you need GET.

edebill
5

Consider this situation: A sloppy API accepts GET requests like:

http://www.example.com/api?apikey=abcdef123456&action=deleteCategory&id=1

In some settings, when you request this URL and there is an error/warning regarding the request, this whole line gets logged in the log file. Worse yet: if you forget to disable error messages on the production server, this information is displayed in plain text in the browser! Now you've just given your API key away to everyone.

Unfortunately, there are real APIs working this way.

I wouldn't like the idea of having some sensitive info in the logs or displaying it in the browser. POST and GET are not the same. Use each where appropriate.
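
One way to avoid that failure mode is to keep the secret out of the request line entirely, for example in an Authorization header, with the action in a POST body. A minimal client-side sketch, assuming Java 11's java.net.http and a hypothetical endpoint; note that headers can still be logged by software configured to do so:

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

// Hypothetical client-side sketch (Java 11+): the secret travels in a header and the
// action in the request body, so neither ends up in URL-based access logs or error pages.
public class ApiClientSketch {

    public static void main(String[] args) throws Exception {
        String apiKey = System.getenv("EXAMPLE_API_KEY");     // keep the secret out of source code too

        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("https://www.example.com/api"))            // no secrets in the URL
                .header("Authorization", "Bearer " + apiKey)               // secret in a header
                .header("Content-Type", "application/x-www-form-urlencoded")
                .POST(HttpRequest.BodyPublishers.ofString("action=deleteCategory&id=1"))
                .build();

        HttpResponse<String> response =
                HttpClient.newHttpClient().send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.statusCode());
    }
}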

Halil Özgür
3
  1. SECURITY as safety of data IN TRANSIT: no difference between POST and GET.

  2. SECURITY as safety of data ON THE COMPUTER: POST is safer (no URL history)

2

The notion of security is meaningless unless you define what it is that you want to be secure against.

If you want to be secure against stored browser history, some types of logging, and people looking at your URLs, then POST is more secure.

If you want to be secure against somebody sniffing your network activity, then there's no difference.

Taymon
2

It is harder to alter a POST request (it requires more effort than editing the query string). Edit: In other words, it's only security by obscurity, and barely that.

eyelidlessness
1

Many people adopt a convention (alluded to by Ross) that GET requests only retrieve data, and do not modify any data on the server, and POST requests are used for all data modification. While one is not more inherently secure than the other, if you do follow this convention, you can apply cross-cutting security logic (e.g. only people with accounts can modify data, so unauthenticated POSTs are rejected).

Eric R. Rath
    Actually it isn't a "convention" it's part of the HTTP standard. The RFC is very explicit in what to expect from the different methods. – John Nilsson Oct 13 '08 at 20:01
  • In fact if you allow GET requests to modify state then it's possible a browser that is pre-fetching pages it thinks you might visit will accidentally take actions you didn't want it to. – Jessta Jan 18 '11 at 09:12
1

I'm not about to repeat all the other answers, but there's one aspect that I haven't yet seen mentioned - it's the story of disappearing data. I don't know where to find it, but...

Basically it's about a web application that mysteriously, every few nights, lost all its data and nobody knew why. Inspecting the logs later revealed that the site had been found by Google or another arbitrary spider, which happily GOT (read: followed) all the links it found on the site - including the "delete this entry" and "are you sure?" links.

Actually - part of this has been mentioned. This is the story behind "don't change data on GET but only on POST". Crawlers will happily follow GET, never POST. Even robots.txt doesn't help against misbehaving crawlers.

Olaf Kock
1

RFC7231:

" URIs are intended to be shared, not secured, even when they identify secure resources. URIs are often shown on displays, added to templates when a page is printed, and stored in a variety of unprotected bookmark lists. It is therefore unwise to include information within a URI that is sensitive, personally identifiable, or a risk to disclose.

Authors of services ought to avoid GET-based forms for the submission of sensitive data because that data will be placed in the request-target. Many existing servers, proxies, and user agents log or display the request-target in places where it might be visible to third parties. Such services ought to use POST-based form submission instead."

This RFC clearly states that sensitive data should not be submitted using GET. Because of this remark, some implementors might not handle data obtained from the query portion of a GET request with the same care. I'm working on a protocol myself that ensures integrity of data. According to this spec I shouldn't have to guarantee integrity of the GET data (which I will because nobody adheres to these specs)

Silver
1

As some people have said previously, HTTPS brings security.

However, POST is a bit safer than GET, because GETs could be stored in the history.

Sadly, sometimes the choice of POST or GET is not up to the developer. For example, a hyperlink is always sent by GET (unless it's transformed into a POST form using JavaScript).

magallanes
1

You should also be aware that if your site contains links to external sites you don't control, using GET will put that data in the Referer header on the external sites when users follow those links from your page. So transferring login data through GET methods is ALWAYS a big issue, since it might expose login credentials for easy access just by checking the logs or looking in Google Analytics (or similar).

3cho
0

Recently an attack was published that allows a man in the middle to reveal the request body of compressed HTTPS requests. Because request headers and the URL are not compressed by HTTP, GET requests are better protected against this particular attack.

There are modes in which GET requests are also vulnerable: SPDY compresses request headers, and TLS also provides an optional (rarely used) compression. In these scenarios the attack is easier to prevent (browser vendors have already provided fixes). HTTP-level compression is a more fundamental feature; it is unlikely that vendors will disable it.

It is just an example that shows a scenario in which GET is more secure than POST, but I don't think it would be a good idea to choose GET over POST for this attack reason alone. The attack is quite sophisticated and requires non-trivial prerequisites (the attacker needs to be able to control part of the request content). It is better to disable HTTP compression in scenarios where the attack would be harmful.

Jan Wrobel
0

GET is visible to anyone (even the person looking over your shoulder right now) and is saved in the cache, so it is less secure than using POST. By the way, POST without some cryptographic routine is not secure either; for a bit of security you have to use SSL (HTTPS).

kentaromiura
0

The difference is that GET sends data in the open (in the URL) and POST sends it hidden (in the request body).

So GET is better for non-secure data, like query strings in Google. Auth data should never be sent via GET, so use POST there. Of course, the whole topic is a little more complicated. If you want to read more, read this article (in German).

Rob Hruska
Daniel
0

Disclaimer: the following points apply only to API calls, not website URLs.

Security over the network: You must use HTTPS. GET and POST are equally safe in this case.

Browser history: For front-end applications (Angular JS, React JS, etc.), API calls are AJAX calls made by the front-end. These do not become part of the browser history. Hence, POST and GET are equally secure.

Server-side logs: With the right set of data-masking and access-log formats, it is possible to hide all of the request URL, or only the sensitive parts, from the logs.

Data visibility in the browser console: For someone with malicious intent, it takes almost the same effort to view POST data as GET data.

Hence, with the right logging practices, a GET API is as secure as a POST API. Using POST everywhere forces poor API definitions and should be avoided.
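
To illustrate the "server-side logs" point above, here is a minimal, hypothetical sketch of masking sensitive query parameters before a URL is written to an application log (real deployments would normally do this in the web server's log format instead; the parameter names are just examples):

import java.util.regex.Pattern;

// Hypothetical helper illustrating the data-masking idea: redact known-sensitive
// query parameters before a request URL is written to an application log.
public final class LogMasker {

    private static final Pattern SENSITIVE =
            Pattern.compile("(?i)(apikey|password|token|ssn)=[^&]*");

    public static String maskQueryString(String url) {
        return url == null ? null : SENSITIVE.matcher(url).replaceAll("$1=***");
    }

    public static void main(String[] args) {
        System.out.println(maskQueryString("/api?apikey=abcdef123456&action=deleteCategory&id=1"));
        // prints: /api?apikey=***&action=deleteCategory&id=1
    }
}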

kishor borate
0

One reason POST is worse for security is that GET is logged by default: the parameters and all the data are almost universally logged by your web server.

POST is the opposite; it is almost universally not logged, which makes attacker activity very difficult to spot.

I don't buy the argument that "it's too big"; that's no reason not to log anything. Logging at least 1 KB would go a long way towards helping people identify attackers working away at a weak entry point until it pops. POST then does a double disservice by enabling any HTTP-based back door to silently pass unlimited amounts of data.

Community
RandomNickName42
0

There is no security unless HTTPS is used - and with HTTPS, the transfer security is the same for GET and POST.

But one important aspect is the difference in how the client and the server remember requests. This is very important to keep in mind when considering GET or POST for a login.

With POST, the password is processed by the server application and then thrown away, since a good design would not store passwords - only cryptographically secure hashes - in the database.

But with GET, the server log would end up containing the requests complete with the query parameters. So however good the password hashes in the database are, the server log would still contain passwords in clear text. And unless the file system is encrypted, the server disk would contain these passwords even after the log files have been erased.

The same problem happens when using access tokens as query parameters. And this is also a reason why it is meaningful to consider supplying any access token in the HTTP header data - such as by using a bearer token in the Authorization header.

The server logs are often the weakest part of a web service, allowing an attacker to elevate their access rights from leaked information.

-3

Even a POST handler may accept GET requests. Assume you have a form with inputs like user.name and user.passwd, which are supposed to supply the user name and password. If we simply add ?user.name=myUser&user.passwd=myPassword to the URL, the request will be accepted, "bypassing the logon page".

A solution for this is to implement filters (Java filters, for example) on the server side and detect that no credentials are being passed as GET query-string arguments.
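
As an illustration of that filter idea, here is a minimal sketch as a hypothetical Java servlet filter (the parameter names are taken from the example above; a real implementation would likely be more thorough about parsing the query string):

import java.io.IOException;
import javax.servlet.Filter;
import javax.servlet.FilterChain;
import javax.servlet.FilterConfig;
import javax.servlet.ServletException;
import javax.servlet.ServletRequest;
import javax.servlet.ServletResponse;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

// Hypothetical filter: reject any request that tries to pass credential parameters
// through the query string, so the logon page cannot be "bypassed" with a crafted GET URL.
public class NoCredentialsInQueryStringFilter implements Filter {

    @Override
    public void doFilter(ServletRequest request, ServletResponse response, FilterChain chain)
            throws IOException, ServletException {
        HttpServletRequest req = (HttpServletRequest) request;
        String query = req.getQueryString();

        if (query != null && (query.contains("user.name=") || query.contains("user.passwd="))) {
            ((HttpServletResponse) response).sendError(
                    HttpServletResponse.SC_BAD_REQUEST, "Credentials must be sent in a POST body");
            return;
        }
        chain.doFilter(request, response);
    }

    @Override public void init(FilterConfig filterConfig) { }
    @Override public void destroy() { }
}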

-3

This is an old post, but I'd like to object to some of the answers. If you're transferring sensitive data, you'll want to be using SSL. If you use SSL with a GET parameter (e.g. ?userid=123), that data will be sent in plain text! If you send using a POST, the values get put in the encrypted body of the message, and therefore are not readable to most MITM attacks.

The big distinction is where the data is passed. It only makes sense that if the data is placed in a URL, it CAN'T be encrypted otherwise you wouldn't be able to route to the server because only you could read the URL. That's how a GET works.

In short, you can securely transmit data in a POST over SSL, but you cannot do so with a GET, using SSL or not.

LVM
    This is completely untrue. SSL is a transport layer protocol. It connects to the server FIRST, then sends ALL Application data as an encrypted binary stream of data. Check out this thread: http://answers.google.com/answers/threadview/id/758002.html – Simeon G Feb 28 '12 at 16:53
  • Do a TCPDump and you'll see that this is 100% true. In order to connect to the server, you have to send your request unencrypted. If you do that as a GET, your args get added to the initial request and are therefore unencrypted. Regardless of what you see in the post you linked, you can test this with TCPDump (or similar). – LVM Mar 08 '12 at 15:31
  • 1
    Done! Just ran tcpdump on my Mac. And your answer came up 100% false. Here's the command I used: sudo tcpdump -w ssl.data -A -i en1 -n dst port 443 Then when I looked in ssl.data of course I saw gobly gook. All HTTP data was encrypted. And to make sure a normal non-ssl call worked, I used: sudo tcpdump -w clear.data -A -i en1 -n dst port 80 And sure enough, inside clear.data I had all headers and URIs showing in the clear. I tested this on my gmail and google plus (which are HTTPS) and on some non SSL pages like google.com. – Simeon G Mar 11 '12 at 03:18
  • I'm not a network expert so if you think I used the wrong commands on tcpdump please feel free to correct me. – Simeon G Mar 11 '12 at 03:21
  • I don't have the command offhand, but you can also check it with Wireshark/Ethereal. – LVM Mar 14 '12 at 16:43
-5

POST is the most secure, along with SSL installed, because the data is transmitted in the message body.

But all of these methods are insecure, because the 7-bit protocol underneath is hackable with escaping, even through a level 4 web application firewall.

Sockets are no guarantee either... even though they are more secure in certain ways.

drtechno