
Zend Framework: Slow automatic view rendering

So I posted something on Twitter today, which wasn't exactly news to me. I was more or less pointing out the obvious.

Judging by a couple of follow-up tweets, I guess I need to explain a bit more.

The idea

My thesis is that there's a gain in page rendering time when I disable automatic view rendering and use explicit render calls ($this->render('foo');) inside my controllers. And to cut to the chase: there is. On our app, I measured a 12% improvement using Xdebug's profiler, in a simple before/after comparison.

General setup

I've blogged about Zend Framework performance before (1, 2, 3). Our setup is not the average Zend Framework quickstart application. We utilize a custom (much faster) loader (my public open-source work in progress), no Zend_Application, and explicit (vs. implicit) view rendering. The framework code is at 1.10.2. On the server side, the application servers are nginx+php(-cgi).

I don't feel like repeating myself, and while a lot of these issues were already addressed in newer releases of the framework, or are going to be addressed in 2.0, the above links still hold a lot of truth, or at least insight and pointers, if you're interested in general PHP performance (they're in German).


IMHO, it doesn't really matter what the rest of your application looks like. Of course all applications are different, and that's why I didn't say, "OMG my page rendered in 100 ms", but instead something like, "we got a 10+% boost". The bottom line is that everyone wants to serve fast pages and get the most out of their hardware, but since applications tend to carry different features, there really is no holy grail or magic number to adhere to.


I urge everyone to double-check my claim. After all, it's pretty simple:

  1. Setup Xdebug
  2. Profile the page
  3. Restart PHP app server/processes (in case you use APC and/or php-cgi)
  4. Disable automatic view rendering: $this->_helper->viewRenderer->setNoRender(true);
  5. Add render() call: $this->render('foo');
  6. Profile again

... simple as that.
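For step 1, the relevant Xdebug (2.x) settings boil down to a few lines of php.ini; the extension path and output directory below are just examples for your setup:

```ini
; Enable the Xdebug 2.x profiler and pick a dump directory (example path).
; The resulting cachegrind.out.* files can be read with KCachegrind or Webgrind.
zend_extension = xdebug.so        ; a full path may be required, depending on your build
xdebug.profiler_enable = 1
xdebug.profiler_output_dir = /tmp/xdebug
```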


All in all, this doesn't take much effort to reproduce.

Automatics — such as an automatic view renderer — add convenience which results in rapid development and hopefully shorter time to market. But they do so almost always (give it nine Erlang nines ;-)) at the expense of performance.

Update, 2010-03-20 21:37: As Rob pointed out, there's even more to gain by bypassing the helper entirely.
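As a sketch of what bypassing the helper can look like (the controller and view script names are made up for illustration; this is not Rob's exact snippet):

```php
<?php
// Illustrative only: a ZF 1.x controller that skips the ViewRenderer helper.
class IndexController extends Zend_Controller_Action
{
    public function indexAction()
    {
        // Stop the ViewRenderer helper from rendering in postDispatch().
        $this->_helper->viewRenderer->setNoRender(true);

        // Render the view script directly and append it to the response,
        // skipping the helper's path resolution and dispatch overhead.
        $this->getResponse()->appendBody(
            $this->view->render('index/index.phtml')
        );
    }
}
```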

Padraic also blogged extensively on Zend_Controller_Action_Helper_ViewRenderer, I recommend reading Having a bad ViewRenderer day in your ZF app?.

Redis on Ubuntu (9.04)

A small howto for getting the latest redis-server and a web interface running on Ubuntu.


$ wget
$ sudo dpkg -i redis-server_1.2.5-1_amd64.deb
$ sudo /etc/init.d/redis-server start

... redis should listen on localhost:6379.

You may need to get i386 instead of amd64 if you run 32bit.
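If the package also ships redis-cli (an assumption about the .deb's contents), a quick smoke test looks like this:

```shell
# Send a PING to the default host/port; a healthy server answers PONG.
redis-cli -h localhost -p 6379 ping
```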


You may need to add the following to /etc/sysctl.conf:

vm.overcommit_memory = 1

... that is, especially if you run in a VE (e.g. inside Xen).
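To apply the setting without a reboot (you'll need root):

```shell
# Set the value immediately; sysctl -p re-reads /etc/sysctl.conf afterwards.
sysctl -w vm.overcommit_memory=1
```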

All other configs are in /etc/redis/redis.conf.


Because web interfaces are so simple, I decided to get redweb.


dpkg -i python-support_1.0.7_all.deb
dpkg -i python-redis_1.34.1-1_all.deb

On Ubuntu, python-support is currently at 0.8.4, but we need version 0.9.0 or greater. This is why I updated python-support from Debian above.


git clone ./redweb-git

Patch redweb-git/redweb/ with:

index e79a062..e278fca 100644
--- a/redweb/
+++ b/redweb/
@@ -15,6 +15,8 @@ __author__ = 'Ted Nyman'
__version__ = '0.2.2'
__license__ = 'MIT'
+import sys
from bottle import route, request, response, view, send_file, run
import redis


cd redweb-git/redweb/

... this step is a bit annoying: unless you cd into the directory first, python redweb/ will complain about missing files.

Then browse to


So this is my redis-server howto — nice and simple.

And once you have Redis up and running, feel free to browse over to Rediska and use their session handling for Zend Framework. Setup is pretty simple and it works like a charm. :-) I'd suggest you use their trunk code, which is hosted on GitHub, as it contains a few improvements and a small bugfix I contributed.

For more on Rediska, watch this space. ;-)

PHP parse errors with cgi and nginx

So for whatever reason, it took me a while to figure this out earlier today:

2010/03/15 15:44:16 [info] 22274#0: *148224 client closed prematurely connection, so upstream connection is closed too while sending request to upstream, client: a.a.a.a, server: localhost, request: "GET / HTTP/1.1", upstream: "fastcgi://unix:/tmp/.fastcgi.till/socket:", host: "localhost"
2010/03/15 15:44:16 [info] 22274#0: *148207 client closed prematurely connection, so upstream connection is closed too while sending request to upstream, client: a.a.a.a, server: localhost, request: "GET / HTTP/1.1", upstream: "fastcgi://unix:/tmp/.fastcgi.till/socket:", host: "localhost"

The issue was a PHP parse error which I overlooked when I added a new file. The weird thing is, I had nothing in the PHP logs (error_reporting is E_ALL, display_errors is off, but all logs are enabled and I tailed them using multitail) and nginx only displayed a blank page. The errors above were in nginx's own log file.
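Since display_errors is off and nothing reached the PHP logs, the cheapest safety net I know of is linting files before they go live; the file name here is just an example:

```shell
# php -l parses the file without executing it and prints any parse error.
php -l application/controllers/FooController.php
```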


Update, 2010-03-04: I just rolled a 0.0.2 release. In case you had 0.0.1 installed, just use pear upgrade-all to get it automatically. This release tries to fix a random hang while reading documents from the source server.

I also opened a repository on GitHub.


As some may have guessed from a previous blog post, we are currently running a test setup with CouchDB Lounge. My current objective is to migrate our 200 million documents to it, and this is where I've essentially been stuck this week.

No replication, no bulk docs

The lounge currently does not support replication (to it) or saving documents via bulk requests, so in essence migrating a lot of data into it is slow and tedious.

I have yet to figure out if there is a faster way (maybe parallelization?), but DB_CouchDB_Replicator is the result of my current efforts.

I gave up on parallelization for now because hammering the lounge with a single worker already looked like enough load, but I didn't have time to experiment much with it; it could have been my network connection, too. Feedback in this area is very, very appreciated.


DB_CouchDB_Replicator is a small PHP script which takes two arguments, --source and --target. Both accept values in the style of http://username:[email protected]:port/db, and the script attempts to move all documents from source to target.

Since long-running operations on the Internet are bound to fail, I also added a --resume switch. While it's running, the script outputs a progress bar, so it should be fairly easy to resume, and you also get an idea of where it currently is and how much more time it will eat up.

These switches may change, and I may add more, so keep an eye on --help. Also, keep in mind that this is very alpha and I give no guarantees.
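A hypothetical invocation (host names, ports and credentials are placeholders):

```shell
# Copy everything from the source database into the lounge, resuming
# a previously interrupted run.
couchdb-replicator \
  --source http://user:[email protected]:5984/mydb \
  --target http://user:[email protected]:5984/mydb \
  --resume
```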


Installation is simple! :-)

apt-get install php-pear
pear config-set preferred_state alpha
pear channel-discover
pear install

Once installed, the replicator resides in /usr/local/bin or /usr/bin and is called couchdb-replicator.


The code is not yet on GitHub, but will eventually end up there. All feedback is welcome!