PHP performance III -- Running nginx

Since parts one and two were uber-successful, here's an update on my Zend Framework PHP performance situation. I've also had this post sitting around since the beginning of May, and I figured that if I don't post it now, I never will.

Disclaimer: All numbers (aka pseudo benchmarks) were not taken on a full moon and are (of course) very relative to our server hardware (e.g. DELL 1950, 8 GB RAM) and environment. The application we run is Zend Framework-based and currently handles between 150,000 and 200,000 visitors per day.

Why switch at all?

In January of this year (2009), we started investigating the 2.2 branch of the Apache webserver. Because we had used Apache 1.3 forever, we never needed to upgrade to Apache 2.0 or 2.2. After all, you're probably familiar with the "if it's not broken, don't fix it" approach.

Late last year we ran into a couple of (maybe) rather FreeBSD-specific issues with PHP and its opcode cache APC. I am by no means an expert on the entire situation, but from reading mailing lists and investigating on the server, this seemed to be expected behavior — in a nutshell: Apache 1.3 and a large opcode cache on newer versions of FreeBSD (7) were bound to fail under larger amounts of traffic.

We tried bumping up a few settings (PV entries), but we just ran into the same issue again and again.

Because the architectures of Apache 1.3 and 2.2 are so different from one another (and upgrading to 2.2 was the proposed solution), I went on to explore the upgrade. And once I completed the switch to Apache 2.2, my issues went away.

So far, so good!

Performance?

On the performance side we experienced rather mediocre results.

While we benchmarked a static file at around 300 requests per second (on a pretty standard Apache 2.2 install, minus a couple of unnecessary modules), PHP (mod_php) performed at a fraction of that, averaging between 20 and 23 requests per second.

Myth: Hardware is cheap(, developer time is not)!

Before some people yell at me for trying to optimize my web server, one needs to take the costs of scaling (to 100 requests per second) into account.

One of those servers currently costs 2,600.00 USD. The price tag adds up to an additional 10,400.00 USD (four more servers) in order to scale to 100 (lousy) requests per second. Chances are, of course, that the hardware is slightly less expensive since DELL gives great rebates — but the 8 GB of (server) RAM and the SAS disks by themselves melt budgets away.

And on top of all hardware costs, you need to add setup, maintenance and running costs (rack space, electricity) for an additional four servers — suddenly, developer time is cheap. ;-)

So what do we do? Nginx to the rescue?!

php-cgi

I had tried php-cgi before. My experience with it was in fact so devastating that I wanted to avoid running php-cgi at (almost) all costs! The last time (November 2006) I tried lighttpd and php-cgi (fcgi, of course), I managed to get the server up and running, but it kept breaking under heavy traffic.

Back then, Apache 1.3 and mod_php were the very, very solid and stable solution.

I remember my Thanksgiving weekend (2007) in Chicago, where I wasted basically three days inside trying to figure out why my web servers went down. That is clearly not anyone's idea of a chill weekend.

Over the course of the weekend I (believe I) read virtually every thread and bug report on the Internet that described my issue but offered no solution.

As for Nginx itself: we have been using it as a reverse proxy in front of Apache (to spoon-feed Apache) for a good while (8-9 months), and getting rid of another webserver (one factor in the equation) in order to simplify the setup and make server administration and management easier is always very tempting.

Setup

The following are the steps involved to get set up on FreeBSD (The best OS ever!).

Installation

    (portsnap fetch update)
    cd /usr/ports/www/nginx && make install distclean

Side note: In case you have never dealt with ports before, make sure to run portsnap fetch extract first, and also read the handbook before bothering anyone. :-) Just like the PHP manual, this is a particularly great piece of documentation which is available in many different languages and answers virtually every initial question.
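
For a fresh system, the one-time ports tree setup mentioned above boils down to this (after the initial extract, the fetch update shown earlier is enough):

```shell
# One-time ports tree bootstrap; afterwards "portsnap fetch update" suffices
portsnap fetch extract
```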

In case you have PHP (mod_php) installed already, I suggest the following upgrade path:

    cd /usr/ports/lang/php5 && make rmconfig
    make install

Make sure to select the PHP and CGI-related options when the menu pops up. I selected all of them anyway.

To upgrade all your PHP extensions to the new SAPI, please use portupgrade:

    portupgrade -rf lang/php5

This prompts a rebuild of lang/php5 (I know we just did that) and all ports that depend on it.

Get a coffee, let it sit for a while, and you're done.
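
To verify that the rebuild actually produced a FastCGI-capable binary, you can check the version banner (the path assumes the default ports prefix):

```shell
# The banner should mention "(cgi-fcgi)" as the SAPI
/usr/local/bin/php-cgi -v
```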

Enable nginx and php-cgi

To have nginx start at boot, add the following to /etc/rc.conf:

    nginx_enable="YES"

For php-cgi, please add:

    phpfcgid_enable="YES"

Then place the following start script (phpfcgid) in /usr/local/etc/rc.d/:

    #!/bin/sh

    # PROVIDE: phpfcgid
    # REQUIRE: LOGIN
    # KEYWORD: shutdown

    . /etc/rc.subr

    name="phpfcgid"
    rcvar=`set_rcvar`

    load_rc_config $name
    : ${phpfcgid_enable="NO"}
    : ${phpfcgid_users="www"}
    : ${phpfcgid_children="2"}
    : ${phpfcgid_tmpdir="/tmp"}
    : ${phpfcgid_requests="500"}

    restart_cmd=phpfcgid_restart
    start_cmd=phpfcgid_start
    stop_cmd=phpfcgid_stop

    phpfcgid_start() {
        echo "Starting $name with ${phpfcgid_children} children (req: ${phpfcgid_requests})."
        export PHP_FCGI_CHILDREN=${phpfcgid_children}
        export PHP_FCGI_MAX_REQUESTS=${phpfcgid_requests}
        for user in ${phpfcgid_users}; do
            socketdir="${phpfcgid_tmpdir}/.fastcgi.${user}"
            mkdir -p ${socketdir}
            chown ${user}:www ${socketdir}
            chmod 0750 ${socketdir}
            su -m ${user} -c "/usr/local/bin/php-cgi -b ${socketdir}/socket&"
        done
    }

    phpfcgid_stop() {
        echo "Stopping $name."
        pids=`pgrep php-cgi`
        pkill php-cgi
        wait_for_pids $pids
    }

    phpfcgid_restart() {
        phpfcgid_stop
        phpfcgid_start
    }

    run_rc_command "$1"

Full disclosure: I found the script on a FreeBSD mailing list. I fixed a couple typos and added phpfcgid_tmpdir, phpfcgid_requests and a restart command.

Putting the above script in place (/usr/local/etc/rc.d) makes sure that your php-cgi processes are (re)started when the server is rebooted.

It also provides you with a somewhat convenient way to manually start, stop and restart them. (Sidenote: For additional convenience, check out this post about php-fpm.)
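
With the script installed, manual control works like any other rc.d script:

```shell
/usr/local/etc/rc.d/phpfcgid start
/usr/local/etc/rc.d/phpfcgid restart
/usr/local/etc/rc.d/phpfcgid stop
```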

The script will accept the following parameters in /etc/rc.conf:

  • phpfcgid_users
  • phpfcgid_children
  • phpfcgid_tmpdir
  • phpfcgid_requests
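
For example, a hypothetical /etc/rc.conf fragment using all four knobs could look like this (the values are illustrative, not recommendations):

```shell
# /etc/rc.conf
phpfcgid_enable="YES"
phpfcgid_users="www"
phpfcgid_children="10"
phpfcgid_tmpdir="/tmp"
phpfcgid_requests="500"
```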

Aside from phpfcgid_users, the settings should speak for themselves; if in doubt, please comment.

phpfcgid_users allows you to start multiple instances for different users on your system. This is especially interesting if you're into shared hosting and want to separate the users' PHP processes from one another.

Since php_value (and php_admin_value) will not work with php-cgi, starting different instances of PHP allows you to customize them, e.g. by passing in a custom php.ini file (for example: php-cgi -c /path/to/php.ini).
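
As a sketch of that idea (this is not part of the script above), the loop inside phpfcgid_start() could be extended to pick up a per-user php.ini when one exists; the /usr/local/etc/php-${user}.ini path is purely hypothetical:

```shell
# Hypothetical per-user ini lookup inside phpfcgid_start()
ini="/usr/local/etc/php-${user}.ini"
if [ -f "${ini}" ]; then
    # This user gets a custom php.ini
    su -m ${user} -c "/usr/local/bin/php-cgi -c ${ini} -b ${socketdir}/socket&"
else
    su -m ${user} -c "/usr/local/bin/php-cgi -b ${socketdir}/socket&"
fi
```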

Nginx configuration

The configuration is pretty straightforward. For completeness, here is my nginx.conf.

Please note that my nginx is not used to serve both static and PHP files in general. It's configured to serve one Zend Framework application only. Except for the server error page, all of our static files are served from a CDN.

    worker_processes  8;

    error_log  /dev/null;

    events {
        worker_connections  1024;
    }

    http {
        include       mime.types;
        default_type  application/octet-stream;

        sendfile           on;
        tcp_nopush         on;
        tcp_nodelay        on;
        keepalive_timeout  0;

        access_log /dev/null;

        server {
            listen       PUBLIC.IP:80;
            server_name  web06.example.org;

            error_page 404 /index.php;

            location / {
                include /usr/local/etc/nginx/fastcgi.conf;

                root /usr/example.org/www;

                fastcgi_pass  unix:/tmp/.fastcgi.www/socket;
                fastcgi_index index.php;

                fastcgi_param SCRIPT_FILENAME /usr/example.org/www/index.php;
            }

            error_page 500 502 503 504  /50x.html;
            location = /50x.html {
                root /usr/example.org/www/error;
            }

            location ~ /\.ht {
                deny all;
            }
        }

        server {
            listen       PRIVATE.IP:80;
            server_name  web06.example.org;

            error_page 404 /index.php;

            location / {
                include /usr/local/etc/nginx/fastcgi.conf;

                root /usr/example.org/www;

                fastcgi_pass  unix:/tmp/.fastcgi.www/socket;
                fastcgi_index index.php;

                fastcgi_param SCRIPT_FILENAME /usr/example.org/www/index.php;
            }

            error_page 500 502 503 504  /50x.html;
            location = /50x.html {
                root   /usr/example.org/www/error;
            }

            location ~ /\.ht {
                deny  all;
            }
        }
    }

... and my fastcgi.conf:

    # /usr/local/etc/nginx/fastcgi.conf
    fastcgi_param QUERY_STRING    $query_string;
    fastcgi_param REQUEST_METHOD  $request_method;
    fastcgi_param REQUEST_URI     $request_uri;
    fastcgi_param CONTENT_TYPE    $content_type;
    fastcgi_param CONTENT_LENGTH  $content_length;
    fastcgi_param SERVER_NAME     $server_name;
    fastcgi_param HTTP_HOST       $http_host;
    fastcgi_param SERVER_PROTOCOL $server_protocol;
    fastcgi_param REMOTE_ADDR     $remote_addr;
    fastcgi_param REMOTE_PORT     $remote_port;
    fastcgi_param SERVER_ADDR     $server_addr;
    fastcgi_param SERVER_PORT     $server_port;
    fastcgi_param DOCUMENT_URI    $document_uri;
    fastcgi_param DOCUMENT_ROOT   $document_root;
    fastcgi_param REDIRECT_STATUS 200;

My public vhost allows me to access the server directly. Make sure to secure this with a firewall if that is not what you want. Or drop it. The private vhost is used by my load balancer, which proxies requests to the servers.

Zend Framework URL-rewriting

I noticed that a lot of people seem to have issues with that; see the above config to review the bits and pieces that work for me.

Specifically:

    ...
    error_page 404 /index.php;
    ...
        fastcgi_param SCRIPT_FILENAME /usr/example.org/www/index.php;
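
If your nginx build is recent enough to know try_files, an alternative (and arguably more explicit) way to route everything to the front controller is the following sketch; the paths match the config above, but I have not benchmarked this variant:

```nginx
location / {
    root /usr/example.org/www;
    # Serve the file if it exists, otherwise hand off to the front controller
    try_files $uri /index.php;
}
```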

Benchmarks?!

The initial benchmarks after switching from Apache 2.2 with mod_php to nginx with php-cgi report 120+ requests/second. This is before touching any of the kernel parameters and using any of the recipes provided on the nginx wiki (Great bookmark!).

Yeah, point very well taken: the application might still be slow, but getting over 500% more out of the same hardware with a new piece of software is still awesome.

Configuration:

  • the above config
  • php-cgi was started with phpfcgid_children="10" and phpfcgid_requests="500"
  • ab was run on another server, connected via a switch using gigabit Ethernet
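
For reference, an ab invocation matching that description could look like this; the hostname and request counts are assumptions, not the exact command used:

```shell
# 10,000 requests, 50 concurrent, against the front controller
ab -n 10000 -c 50 http://web06.example.org/
```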

nginx+php-cgi vs. apache 1.3+mod_php

When I performed the same update on some of our older web servers (older hardware, FreeBSD 6.x), the difference was subtle. Before the update, I ran Apache 1.3 with mod_php and got between 50 and 55 requests per second. After switching to nginx, I gained maybe one or two requests per second, tops.

So what changed?

Comparing this experience to the last time I tried running php-cgi, it could not have been any more different.

One of the fundamental differences is that last time I set up the php-cgi process to listen on a TCP port (127.0.0.1:xxxx), since most, if not all, howtos available online describe this setup. This time I went for Unix sockets.
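
The two variants differ only in how php-cgi is bound and how nginx points at it; a minimal side-by-side sketch (port 9000 is an arbitrary example, not a recommendation):

```nginx
# TCP variant: start with "php-cgi -b 127.0.0.1:9000", then:
fastcgi_pass 127.0.0.1:9000;

# Unix-socket variant (used above): "php-cgi -b /tmp/.fastcgi.www/socket", then:
fastcgi_pass unix:/tmp/.fastcgi.www/socket;
```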

No offense meant, but I suspect that the majority of people not running php-cgi on sockets also don't have a lot of traffic. From my past experience (in 2007) and more recent experiments, I noticed that whenever I set up php-cgi without sockets, the PHP processes and the webserver lose track of one another after a while — especially when they're pounded.

Anyway, the above is a very uneducated guess, so don't quote me on it. In general I suspect that a) a lot has changed since 2007 anyway, and b) using sockets is a vital factor in why this setup works. Of course, I've also personally learned a few more tricks and can't rule out human error. :-)

Conclusion

Well, my setup is my setup, is my setup. I also wanted to include more pretty graphs and more numbers, but I had no time.

All the above information is very specific to the circumstances and, of course, to the application we run. The PHP code has plenty of room for improvement, but currently nothing of this magnitude short of a switch in web servers. Unless, of course, I ditch the framework and rebuild it all from scratch.

I hope this post gets people interested in whatever else there is outside the box. For example, a new web server, or maybe a new kind of database? ;-)

As a disclaimer for some @apache.org friends — of course I (still) love Apache! No questions asked. To figure out the real boost, one would have to compare the current setup (nginx with php-cgi) to Apache 2.2 with php-cgi. It's just that nginx wins the configuration game, hands down!
