RFC: CouchDB on FreeBSD

Friday, May 15. 2009

Thanks to Wesley, we recently managed to update CouchDB's FreeBSD port to the official 0.9.0 release.

My current TODO for the port includes:

  • a super-cool rc-script (currently, there is none)
  • automatic user setup/creation (couchdb)
  • patching of the install/source to use BSD-style directories for the database (e.g. /var/db/couchdb).

In regard to the rc-script, I continued a work in progress and committed an idea to GitHub. This work in progress (couchdb) works out of the box. Just download the file, put it in /usr/local/etc/rc.d/, chmod +x couchdb, and you're done. I also updated the CouchDB wiki page explaining how CouchDB is set up on FreeBSD (e.g. the options of the rc-script) and updated the instructions on user creation -- something that I plan to roll into the couchdb port ASAP.
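For reference, enabling the script should follow the usual FreeBSD rc.d conventions. The variable names below are an assumption on my part -- the header of the rc-script itself is the authoritative list:

```shell
# /etc/rc.conf -- hypothetical knobs; the actual names are defined
# in the rc-script's header, so double-check there.
couchdb_enable="YES"      # start CouchDB at boot
couchdb_user="couchdb"    # run as the dedicated couchdb user
```

After that, /usr/local/etc/rc.d/couchdb start should bring the daemon up.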

But before I continue on the port, I'd like to ask for feedback from people (probably you!) who use CouchDB on FreeBSD. For example, are you happy with the current options, or do you need a different set? If this work in progress is what you need, then that's valuable feedback as well.


Drobo with DroboShare on XP, Vista, MacOSX, Ubuntu

Saturday, January 17. 2009

I bought a Drobo for myself about seven months ago and I couldn't be any happier. My files are backed up on a RAID system, and I still have plenty of space to waste. My world is OK.

Some friends of mine recently bought one of the new Drobo units with a DroboShare. The DroboShare costs $200 (USD) and is a glorified Linux server which exports your Drobo using Samba to all clients on the network.

My friends are using Windows and MacOSX to connect, so after some initial problems -- they were running FAT32 and the Drobo decided to go unlabeled -- we decided to format the unit and use HFS+ instead.


I googled this and to my surprise there is no information available -- Data Robotics keeps it all pretty well hidden behind case numbers in their ticketing system. It would be nice if they provided more details on why a Drobo unit would end up in an unlabeled state.

HFS, or what's your flavour?

Our reasons to select HFS+ are:

  • It's a modern filesystem (vs. FAT32) with journaling.
  • If all fails, you can hook it up to the Mac and use DiskWarrior to recover the volume.

If you are not using a Mac and stay in a Windows-only environment, it makes more sense to select NTFS; in a Linux-only environment you are safe with ext3. I would select a filesystem which still allows you to hook the Drobo up to any of your clients, so you can easily recover the volumes in case they decide to stop working.

Network write issues

When we set up the Drobo, we tried to copy 2 GB from various clients to it. What took 7-9 minutes on most clients was estimated at 30 hours on Tiger. ;-)

To rule out an issue with the Drobo, we briefly tested the performance from various systems. The candidates included Windows XP, Windows Vista, MacOSX 10.5.4 (Leopard), Ubuntu 8.10 and MacOSX 10.4.11 (Tiger). The Drobo performed well on all systems -- except for Tiger.

Researching the network issue on Google, I found various people who reported all kinds of network issues with 10.4.11, and since there are other random crashes on the same workstation, we pronounced the Drobo to work and perform fine. The workstation is subject to a system overhaul next week.


Setting up the Drobo on Ubuntu is pretty easy -- granted, there is no Drobo Dashboard, and the drobo-utils only support units which are connected to the workstation directly via USB or FireWire.

To set up the share, you can Samba-mount \\DroboShare\DROBO and provide the same credentials you use on Windows/Mac when the Drobo Dashboard prompts you for a login to the DroboShare. An alternative with a GUI is Places > Connect to Server.
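As a sketch, the command-line route looks like this -- the mount point and credentials are assumptions, so adjust them to your setup:

```shell
# Assumed share and mount point; the share name DROBO matches what the
# DroboShare announces on the network.
SHARE=//DroboShare/DROBO
MNT=/mnt/drobo

# As root:
#   mkdir -p "$MNT"
#   mount -t cifs "$SHARE" "$MNT" -o username=youruser,password=yourpass
#
# Or, to mount at boot, a matching /etc/fstab line:
#   //DroboShare/DROBO  /mnt/drobo  cifs  username=youruser,password=yourpass  0  0
```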

The DroboShare announces itself on the network and registers as (well) DroboShare in DNS/WINS (NetBIOS) -- if you decide to use its IP to set up shares and so on, make sure to assign a static IP to the DroboShare (the MAC address is on the bottom of the unit) so the IP doesn't change when you restart your router or the lease expires.

Hope this helps!

PEAR & Plesk

Tuesday, December 9. 2008

We frequently get users on #pear (@efnet) who happen to run Plesk and need help setting up PEAR. Now, any of these config interfaces is a blog entry by itself, and when I say Plesk, I should also mention Confixx and cPanel. And while I have a strong dislike for all of them, let me focus on Plesk for now.

This is not a copy'n'paste howto, so make sure you double-check all steps involved. With a little knowledge, you should be able to apply all instructions to any other control panel; all you need is SSH access to the server.

I installed PEAR, but it's not working

The number one reason here is that even though the include_path is set up correctly, you are also running with open_basedir. For pseudo-security reasons, Plesk typically restricts open_basedir to your "document root", for example:
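A typical restriction looks something like this -- a sketch only, since the exact path and any extra entries (such as /tmp) vary by Plesk version and OS:

```apache
# Hypothetical excerpt from the domain's vhost configuration:
<Directory /srv/www/vhosts/domain.com/httpdocs>
    php_admin_value open_basedir "/srv/www/vhosts/domain.com/httpdocs:/tmp"
</Directory>
```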


(httpdocs is the document root.)

To verify the above, create a file called phpinfo.php in httpdocs, with the following content:

<?php phpinfo(); ?>

Look for include_path; it should contain something like /usr/share/php/pear. Then look for open_basedir, which should contain this path as well -- but doesn't.

Plesk internals

To understand the issue entirely, let's look at how Plesk builds virtual hosts.

  • Each domain name has its own virtualhost include file, called http.include
  • The http.include is located in each domain's conf directory, next to the document root.
  • The settings (e.g. open_basedir) are set using php_admin_flag/php_admin_value, which makes overriding them through ini_set() or .htaccess impossible.


So, what are your options?

  • The annoying: hack http.include and lose your changes each time Plesk rebuilds your hosts (with each update). :-)
  • The high road: find and edit Plesk's default template to add your PEAR installation to the open_basedir directive, or to turn off open_basedir completely. What you're looking for is called a "domain template" in Plesk, but because I'm not a Plesk user, I don't know much about it -- feel free to correct me.
  • The intermediate fix: run your own local PEAR installation for each domain name.

Running your own PEAR installation

Running your own PEAR installation per domain name is a pretty neat solution; even if you managed to rid yourself of open_basedir, there are a couple of strong points in its favor:


  • Dependencies, local and trackable -- vs. global (one installation per server).
  • This allows you to use different versions of packages when required by the software you use/run.
  • You can't really have enough PEAR. :-)

The downside:

  • More management.

Note: you may need to adjust the paths; my examples are from a Plesk install on SuSE.


This is assuming that you already installed one copy of PEAR through your system's package manager -- for example, aptitude on Debian and Ubuntu, yast on SuSE, rpm on RedHat and CentOS, or ports/pkg on all BSDs.

Go into your domain's document root and create a PEAR directory:

cd /srv/www/vhosts/domain.com/httpdocs/
mkdir my_pear
pear config-create /srv/www/vhosts/domain.com/httpdocs/my_pear .pearrc

This should create a .pearrc file in the current directory, which holds the local installation's settings.

How do we use it?

Per User

If your website user has SSH access, move the .pearrc to the user's home directory, which is the directory you end up in when you log in via SSH.

mv /srv/www/vhosts/domain.com/httpdocs/.pearrc /srv/www/vhosts/domain.com/

To test if the .pearrc file is being read, issue the following:

pear config-show

The output should show paths which contain /srv/www/vhosts/domain.com.

If that is not the case, try the following:

pear -c /srv/www/vhosts/domain.com/.pearrc config-show

If it's working, continue to install the packages you need. Since this is a new installation, the first should be PEAR itself:

pear install PEAR

Or, with the explicit config file:

pear -c /srv/www/vhosts/domain.com/.pearrc install PEAR


If you followed this guide, I hope it enabled you to have a different -- yet maintainable -- PEAR setup per directory. All you need is to use a different PEAR config file via the -c switch.
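If typing the -c switch every time gets old, a small shell function in the SSH user's profile can append it for you -- a sketch, using this guide's example path:

```shell
# Wrap the real pear binary so every call picks up the per-domain
# config file automatically.
pear() {
    command pear -c /srv/www/vhosts/domain.com/.pearrc "$@"
}
```

With that in place, pear config-show and pear install behave as if -c had been given explicitly.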

Further information is available in the PEAR manual.