Skip to content

Dependency Injection Containers

I got into a discussion on Twitter the other day where I mentioned that I don't like DI. Call it lack of sleep or language barrier (on my part), but I said DI — dependency injection — when I meant the dependency injection container. Having said this, let me explain why I don't like it.


Despite not working for any of the larger PHP joints out there, I get to spend my time with pretty interesting stuff. I wouldn't call it high traffic or big data, but being in the Top 100 websites, we're certainly not the average PHP application out there.

I've shared some numbers previously, and also in our job posting. I don't mean to compare sizes; that's simply the perspective I'm coming from.

Looking at anything, I'm compelled to ask myself:

  • How much time will it save during development?
  • How much time will it take to educate my team?
  • How much time will it eat away in each request?
  • How much time will it cost me to rip it out when it's slow?

Yeah, I'm still crazy about technology, but I've also grown up to not just submit to any of it without thorough evaluation. In that respect, I've grown old(er) — but that's (IMHO) healthy scepticism and usually what people call experience.


At the expense of my own credibility, let me just say (again) that I'm not particularly against new things. My beef is the hype.

The fact is that we use a lot of new technology, and I have to admit that it's pretty exciting for me to try out new things. Trying something out doesn't mean that we end up using it, but when we do try it, we take a close look.

On the newer side of things we ended up using CouchDB, Redis, Membase and ElasticSearch. We've even started hacking on one or two of these, and all in all we follow their development closely.

Telling people what we run sometimes makes it sound like I run a playground for developers and crazy-about-tech people. It's new and it's bleeding edge, and that may be true to a certain extent, but it's also the opposite, because we (believe we) found great use-cases for each of these things.

But getting back to patterns (and I believe these prove my point even better than going on about those databases): be honest with yourself about what "new" means, in this case and in general. Neither dependency injection, nor containers, nor NoSQL are exactly new.

I don't want to rain on anyone's parade, but get real — it's been said and done before.

VirtualBox Guest Additions and vagrant

If you follow my blog, you probably know about chef and vagrant.

So the other day I managed to upgrade to VirtualBox 4.0. The upgrade just happened by accident, so to speak. I noticed that VirtualBox 4.0 had moved from nonfree to contrib on Oracle's repository, which is why I had previously missed it. With 4.0, I am now able to run the latest and greatest Vagrant — and with Vagrant being pre-1.0, it's safe to assume that newer is indeed better.

Anyway — the VirtualBox update fubar'd all of my boxes. They didn't outright refuse to start, but provisioning failed nine out of ten times and other random issues seemed to pile up.

I noticed this warning about the Guest Additions inside the box and how they are different from the VirtualBox installed on the host:

[default] The guest additions on this VM do not match the install version of
VirtualBox! This may cause things such as forwarded ports, shared
folders, and more to not work properly. If any of those things fail on
this machine, please update the guest additions and repackage the

Guest Additions Version: 3.y.z
VirtualBox Version: 4.0.8

To cut to the chase — once I updated them and the box itself, they all continued to work.

So here's how!

Boot a blank VM

So first off, you take the image of your choice and boot a blank virtual machine. Blank, because no chef recipes should be configured, etc. This VM should literally boot in a minute or so.

Now enter the VM and update the Guest Additions, and while we're at it, feel free to update the OS etc. as well.

Update the base

On my Ubuntu Karmic box, this is what I did (inside the VM) first:

vagrantup:~$ sudo aptitude update
vagrantup:~$ sudo aptitude upgrade
vagrantup:~$ sudo gem update --no-rdoc --no-test --no-ri
vagrantup:~$ sudo aptitude install portmap nfs-common

Pro-tip: The last step is necessary in case you run into the various issues with VirtualBox shared folders. The official website lists performance as the #1 reason to use NFS shares instead of the VirtualBox shares; my #1 reason was issue #351.
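For reference, enabling NFS in pre-1.0 Vagrant looked roughly like the sketch below. The box name and paths are made up, and note that NFS shares also require a host-only network:

```ruby
Vagrant::Config.run do |config|
  config.vm.box = "karmic-nfs"

  # NFS only works over a host-only network in Vagrant 0.7+
  config.vm.network "33.33.33.10"

  # name, guest path, host path — :nfs switches from VirtualBox shares to NFS
  config.vm.share_folder("v-data", "/vagrant-data", "data", :nfs => true)
end
```

With that in place, `vagrant up` exports the folder from the host via NFS instead of the (buggy and slow) vboxsf driver.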

Update Guest Additions

It took a lot of googling to actually find the download; regardless, this is what seems to work. I extracted these steps from a blog post I found. For this to work, you are still inside the VM:

vagrantup:~$ wget -c -O VBoxGuestAdditions_4.0.8.iso
vagrantup:~$ sudo mount VBoxGuestAdditions_4.0.8.iso -o loop /mnt
vagrantup:~$ sudo sh /mnt/ --nox11
vagrantup:~$ rm *.iso

There's an error about something failing to install (X Window related), but I ignored it since I don't run the GUI anyway.

Pro-tip: Installing the latest Vagrant along with VirtualBox 4.0.8 made the GUI mode fail consistently. Not that I really mind, but I feel compelled to share.


Once these steps are completed, exit the VM (ctrl+d), halt the VM (vagrant halt) and bring it back up (vagrant up) to test. If the VM starts and no warnings show up, you've got yourself a winner.

To repackage the VM, do the following:

~/karmic-test$ vagrant package

This creates a file, which I renamed. Import the box with:

~/karmic-test$ vagrant box add karmic-nfs 
[vagrant] Downloading with Vagrant::Downloaders::File...
[vagrant] Copying box to temporary location...
[vagrant] Extracting box...
[vagrant] Verifying box...
[vagrant] Cleaning up downloaded box...



That's all. In summary, the biggest obstacles are as follows:

  1. VirtualBox on the host and VirtualBox Guest Additions inside the box need to match (closely).
  2. The box needs to be kept up to date — more or less recent software inside the box makes bootstrapping and provisioning easier and faster.

Yahoo: oauth_problem=consumer_key_rejected

Here's how I literally wasted eight hours of my life. :-)

We signed up for Yahoo! Search Boss last week. The process itself was pretty straightforward:

  1. Sign into your Yahoo! account at
  2. Click "New Project", fill out the form.
  3. Then click on the project name, activate "Yahoo! Search Boss" by supplying some billing info.

Consumer key rejected?

The above process doesn't even take five minutes, but then I spent eight hours figuring out what oauth_problem=consumer_key_rejected means. I spent a couple of hours googling, reading bug reports, and even posted to the Yahoo! group associated with Search Boss.

To cut to the chase: When you create a new project, it's not sufficient to just activate "Yahoo! Search Boss" (and provide billing details and so on).

You have to select another (pointless) API along with it, even if you don't use it. Apparently a consumer key and secret are only put into use when you select one of their other offerings. For some reason, Yahoo! Search Boss doesn't work by itself.

So once I selected one of their other APIs (I went for Knowledge Plus) and copied my updated consumer secret, it magically worked.

Add to all of the above the constant link screw-ups in Yahoo!'s developer documentation. Since there's a v1 and a v2 of the API, you're bound to find the wrong one for sure — the best way to find the right one: Google.

Small working example for request tokens?

Sure, so to get a request token, this is what you do with Zend_Oauth:

$consumer = new Zend_Oauth_Consumer($config);
$token    = $consumer->getRequestToken();

$config is an instance of Zend_Config. In addition to the oauth.ini later in the blog post, you'll also need requestTokenUrl to retrieve a request token.
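To make that concrete, the extra setting would sit alongside the other OAuth options in the ini file. The URL below is a placeholder, not Yahoo!'s actual endpoint:

```ini
[production]
requestTokenUrl = 'https://example.com/oauth/get_request_token'
```

Zend_Oauth_Consumer picks this up from the same config object as the rest of the options.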

The result ($token) is an instance of Zend_Oauth_Token_Request.

Nice to know: In the process I discovered that in case an access token is needed, I had to make sure that $token contains oauth_verifier. This verifier is only supplied when you supply a callbackUrl (in your ini file). If you supply oob (out of band — no callback), the verifier has to be entered by the user manually later on.

Consumers and Tokens

And the best part of this discovery is, that the Search Boss API doesn't use any OAuth tokens at all.

I guess I must have missed that in the process and only figured it out when I started looking at the sample OAuth code, or rather stepped through it all to figure out what it was doing (and what I was doing wrong). (More on that later!)

But anyway, I'm not afraid to learn new things.

Two for the price of one!

So once I mastered the credentials, it led me to another thing: OAuth implementations in PHP are pretty sucky, and PHP doesn't seem to be alone here either. :-) And even if the implementation is not so sucky, its documentation usually is, and real-life examples are rather sparse.

So for example, I honestly don't know why developers who build OAuth wrappers would rather explain OAuth for the umpteenth time instead of providing example code. Example code goes far, and for the reading yada yada, just link to :-)

In the case of the Zend Framework (and all things Zend_Oauth), I used the component's unit tests to figure out a direction, though that didn't take me too far.

Da codez.

In the end, I wrapped OAuth.php into a unit test and wrote some code using Zend_Oauth_Http_Utility until the requests looked the same. Here's what I ended up with!


[production]
requestMethod   = 'GET'
signatureMethod = 'HMAC-SHA1'
consumerKey     = 'YOUR CONSUMER KEY'
consumerSecret  = 'YOUR CONSUMER SECRET'

For simplicity, I'm not showing the [testing] section etc.

Nice to know: In case you're wondering how I came up with all the names, have a look into Zend_Oauth_Config::setOptions(). I particularly enjoyed looking for that one via an undocumented Zend_Oauth_Client::__call(). If you were wondering what @method is for in phpdoc, this is exactly it. ;-)

Make it all work!

So first off, we read the configuration file and push it all into an array $oauthParams.

We also initialize another array called $query (which is later appended to the URL).

Last but not least, we define $url, which is the address of the Search Boss endpoint.

$oauthCfg = new Zend_Config_Ini('oauth.ini', 'production');

$oauthParams = array(
    'oauth_version'          => $oauthCfg->version,
    'oauth_nonce'            => sha1(time() + rand(0, 10)),
    'oauth_timestamp'        => time(),
    'oauth_consumer_key'     => $oauthCfg->consumerKey,
    'oauth_signature_method' => $oauthCfg->signatureMethod,
);

$query = array(
    'q'      => 'search query for yahoo boss',
    'format' => 'json',
);

$url = '';

Both arrays are combined into $params for signature creation:

$params = array_merge($oauthParams, $query);

$utility   = new Zend_Oauth_Http_Utility;
$signature = $utility->sign(
    $params,
    $oauthCfg->signatureMethod,
    $oauthCfg->consumerSecret,
    null,
    $oauthCfg->requestMethod,
    $url
);
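Under the hood, sign() computes the standard OAuth 1.0 HMAC-SHA1 signature over a "base string" built from the request. Here's a minimal plain-PHP sketch of that computation; all keys, values and the URL below are made up for illustration:

```php
<?php
// Hypothetical request details, for illustration only
$method = 'GET';
$url    = 'http://example.com/ysearch/web/v1/php';
$params = array(
    'oauth_consumer_key'     => 'my-consumer-key',
    'oauth_nonce'            => 'abc123',
    'oauth_signature_method' => 'HMAC-SHA1',
    'oauth_timestamp'        => '1300000000',
    'oauth_version'          => '1.0',
    'format'                 => 'json',
    'q'                      => 'search query',
);

// 1. Sort the parameters by name and percent-encode each key/value pair
ksort($params);
$pairs = array();
foreach ($params as $key => $value) {
    $pairs[] = rawurlencode($key) . '=' . rawurlencode($value);
}

// 2. Signature base string: METHOD & encoded URL & encoded parameter string
$base = $method . '&' . rawurlencode($url) . '&' . rawurlencode(implode('&', $pairs));

// 3. Signing key: consumer secret '&' token secret (empty — BOSS uses no token)
$key = rawurlencode('my-consumer-secret') . '&';

// 4. HMAC-SHA1 over the base string, base64-encoded
$signature = base64_encode(hash_hmac('sha1', $base, $key, true));
echo $signature, "\n";
```

Building the base string yourself is also handy for debugging: when the server rejects a signature, you can diff your base string against the one another OAuth implementation produces.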

Then the signature is added to $oauthParams and we'll convert the array into an Authorization header:

$oauthParams['oauth_signature'] = $signature;

$oauthHeader = '';
foreach ($oauthParams as $key => $value) {
    $oauthHeader .= rawurlencode($key) . '="' . rawurlencode($value) . '",';
}
$oauthHeader = 'Authorization: OAuth ' . substr($oauthHeader, 0, -1);

Last but not least, we do the request!

First we assemble the complete URL, then we assign all the variables to the $client object and finally do a request!

$url = $url . '?' . http_build_query($query);

$client   = new Zend_Http_Client;
$response = $client->setUri($url)
    ->setHeaders($oauthHeader)
    ->request($oauthCfg->requestMethod);

var_dump($response->getBody()); // json here!

If all went well, $response is an instance of Zend_Http_Response.

I hope I didn't forget anything, but it should be relatively straightforward to wrap this into a lightweight object-oriented wrapper.

Other pointers:

  • always evaluate the status code (hint: REST API)
  • use try/catch


That's all. Sure hope this saves someone else some time.