Hosted MySQL: Amazon RDS (and backups)

Among all the different technologies in our stack, we also use MySQL. While we still run MySQL (or Percona Server) ourselves, we selected a managed solution to power parts of our production infrastructure: a Multi-AZ setup with Amazon's RDS.

AZ is Amazon-speak for "availability zone", essentially a datacenter. RDS stands for Relational Database Service.

Judging from my experience with our own setups where EBS is in the mix, I have to say that Amazon does an outstanding job with RDS of hiding EBS's potential issues from us. The price tag of the setup can be intimidating at first, but as far as TCO is concerned, RDS is the complete package: managed in every respect and painless for the most part.

RDS in a nutshell

RDS is pretty awesome — it's basically a highly available MySQL setup with backups and optional goodness like read-slaves. It's one of the best services Amazon Web Services offers: 90% of what anyone would need from RDS can be done with a couple of clicks. Tuning gets a little less shiny and maybe even messy, but even changing parameters is possible.

Another pretty important feature is growing and shrinking RDS storage: change the allocation and either apply the change right away or wait for your next maintenance window. It should be noted that these changes are rarely instant (even when applied "right away"), which doesn't make them any less awesome. Resizing storage is of course not an instant operation, but it still gives a whole new meaning to the word elastic.

The package

A standard RDS setup gives you a managed database instance with a DNS hostname and database credentials to log in.

Access from instances is granted using database security groups, which work just like the regular security groups on AWS. In non-AWS language, this translates to firewall policies.
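Once a database security group allows your instances, connecting works like it would with any other MySQL server. A minimal sketch, with a placeholder endpoint and a made-up master username:

# the hostname is the endpoint the RDS console shows for your instance;
# the endpoint and user below are just placeholders
$ mysql -h mydb.example1234.us-east-1.rds.amazonaws.com -u dbadmin -p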

Pricing

As far as pricing is concerned, AWS is always a little tough to understand: the baseline is 0.88 USD per hour for a Multi-AZ deployment (large instance class), which totals 633.60 USD a month. Since we opted for a reservation (a 1,200 USD one-time fee for a three-year term), we were able to drop that price to 0.56 USD per hour.
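As a back-of-the-envelope check, using only the numbers above and assuming a 720-hour month:

# on-demand: 720 hours at 0.88 USD is the 633.60 USD per month mentioned above
$ echo "720 * 0.88" | bc
633.60
# reserved: the same 720 hours at 0.56 USD, roughly 230 USD less per month
$ echo "720 * 0.56" | bc
403.20
# at 0.32 USD/hour in savings, the 1,200 USD reservation fee pays for
# itself after roughly five months (1200 / (0.32 * 720) ≈ 5.2)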

Aside from instance costs there are storage costs as well: 0.20 USD per GB (100 GB will cost you and me about 20 USD) and 0.10 USD per million I/O requests (aka the "I/O rate"). On our Multi-AZ RDS we initially selected 100 GB of total storage, but since we currently use only about 60 GB, we just end up paying about 12 USD per billing period.

While storage costs are somewhat easy to predict, the I/O rate is not, though it's also not a major factor. I'm unable to provide exact numbers currently because we have three RDS servers (one Multi-AZ deployment, one read-slave and another Single-AZ deployment) and the numbers are aggregated on the billing statement, but our total is 368,915,692 I/Os, which comes to roughly 36 USD per month.

Vendor lock-in

Anyway — if RDS is awesome, what's the catch? A closed environment.

The primary advantage of RDS is also its primary disadvantage: we don't get access to the server, and with that, no access to our own backups.

Of course there are backups, and we can use them to restore (or roll back) our RDS setup from within AWS. There are options in the AWS console and, I believe, in their API as well. But in the end there is no way to export such a backup and load it into a non-RDS setup. Add to that: replicating from or into RDS is not possible either, which makes migrations and backups an unnecessary pain in the butt.

Aside from not getting access to our own backups, we also don't get access to the actual instances. That makes sense for AWS, but it means we need to rely on what are, in my opinion, questionable metrics like CloudWatch. Questionable because there is no way for the customer to verify the data: AWS uses their own metrics (and scale), and it's often not obvious to me how well CloudWatch works even on a regular EC2 instance.

I've seen instances become unavailable while CloudWatch kept reporting A-OK (green). I'm not sure how beta CloudWatch is, but we decided on Silverline (more on that in another blog post) for instance monitoring. Since Silverline requires a local client, it's unfortunately not an option for RDS.

What's the pain?

Aside from the monitoring and backup quirks, one of the real pain points of Amazon RDS is that a lot of the collective MySQL knowledge is not available to us: the knowledge manifested in books, blogs, various monitoring solutions and outstanding tools like Percona's backup tools is out of reach for people who run Amazon RDS setups.

Whatever the (technical) reasons for all this may be, these limitations pain me at least a little and should be discussed whenever Amazon RDS is evaluated to replace MySQL infrastructure.

MySQL and backups

I mentioned backups! First off, I hope y'all are doing them! ;-)

For many, the preferred way to do MySQL-backups is mysqldump. It's pretty simple to use:

$ mysqldump -u root -pPASS -A > my_backup.sql

This command essentially dumps all (-A) databases from your MySQL server.

mysqldump is an OK solution as long as you keep a few things in mind:

  • Create your backups during the period with the least activity — typically at night.
  • There will still be a slight bump; hope that your database is small enough that no one notices.

With a larger database, or with most databases that see lots of read and write activity, this is almost impossible to achieve. For the snapshot to be consistent, table locks are used, and that usually stalls access for any application which relies on your database.
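If all of your tables are InnoDB, one common way to soften this (a sketch, not something I have verified against RDS specifically) is mysqldump's --single-transaction option, which takes a consistent snapshot inside a transaction instead of holding table locks:

# InnoDB only: dump within a single transaction instead of locking tables;
# --quick streams rows to the client instead of buffering whole tables
$ mysqldump -u root -pPASS --single-transaction --quick -A > my_backup.sql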

Of course there is Percona's xtrabackup (which is outstanding), but with RDS, that is not an option either.

Read-slave to the rescue

Typically people will use a read-slave with MySQL to offload read queries from the master. I haven't done any tests on how far these typically lag behind on Amazon RDS, but I am going to use my RDS read-slave for something else: backups.

Adding a read-slave is easy:

  1. Log into the AWS Console and go to RDS.
  2. Select your RDS server and click 'Add Read Replica' above.

The operation will likely take a while; the timeframe depends on the instance type and the amount of storage provisioned. In my case I selected a small instance, and it was assigned 100 GB of storage to match my RDS. Small instances are notoriously slow to boot — the entire operation completed in a little over ten minutes.
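Once the replica is up, the dump can be taken from it instead of the master, so any locking stalls the replica rather than production traffic. A minimal sketch, with a placeholder standing in for the replica's endpoint:

# point mysqldump at the read replica's endpoint instead of the master's
# (the hostname below is a placeholder for whatever the console shows)
$ mysqldump -h myreplica.example1234.us-east-1.rds.amazonaws.com -u root -p -A > my_backup.sql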

On a side note: read-replicas allow you to scale RDS beyond a single availability zone (AZ). But you should be aware that traffic across different AZs is billed to the customer.

Costs

A small instance costs roughly 76 USD/month (excluding storage, I/O rate and bandwidth), which by itself is not bad for a fully managed server which I basically set up with two or three clicks. Since we plan to do backups on a regular basis, we will purchase a reservation for the instance, which cuts down costs tremendously and generally makes the AWS cloud very affordable.

Amazon RDS, quo vadis?

I mentioned the bit of vendor lock-in that comes with the service and the limited visibility from the outside.

In theory, this should not matter — however, there are more than a few issues you should be aware of. I don't mention them to stomp on Amazon (RDS is still in beta after all), but they belong in the complete picture.

Pretty questionable is the way some of these issues are handled: not at all, or in private messages. AWS is not always at fault here; I imagine the customer often forgets to update the ticket when the issue turns out to be only temporary, because their focus shifts to other areas.

But one of the core problems with customer service all over AWS is that customers have to resort to posting on a forum with no guaranteed response, or have to buy a support contract which includes answers like "we fixed it". The first response is usually that more details are needed (maybe customer accounts on the forum are not linked to AWS accounts on the inside?), and off it goes into private mode.

My wish is that these situations are handled more transparently across all AWS services in the future, so people can see the development and evolution of the service; that is how a trustworthy platform gets built.

Fin

I've been thinking about my final statement for a while. If anything, right now I lean in favour of Amazon RDS.

Amazon RDS is an extremely interesting product — even more impressive considering it still carries a beta tag. It'll be interesting to see what it offers once Amazon pronounces it stable.

As for the future of our RDS setups: they are not gonna go away soon. One of our objectives for 2012 is stabilizing across all products and the infrastructure underneath. I think this will include some sort of integration with our external monitoring system, to at least try to make sense of things like CloudWatch and to be able to correlate them with other events from all over production.

Some thoughts on outages

Cloud: everybody wants it, some actually use it. So what's my takeaway from AWS' recent outage?

Background

So first off, we had two pieces of our infrastructure failing (three if we include our Multi-AZ RDS) — all of which involve EBS.

Numero uno

One of those pieces within my immediate reach was a MySQL server, which we use to keep sessions. And, to say the least and in AWS' defense, the instance had run for almost 550 days and had never given us much, or any, reason to doubt it.

In almost two years with AWS I have not magically lost a single instance. I had to reboot two or three once because the host had issues; Amazon sent us an email and asked us to make sure the instances survive a reboot, but that's about it.

Recovering the service, or at least launching a replacement in a different region, would have been possible had we not, by coincidence, hit several limits on our AWS account (instances, IPs and EBS volumes), which apparently take multiple days to lift. We contacted AWS immediately and the autoresponder told us to email back if it was urgent, but I guess they had their hands full, and apparently we are not high enough up the chain to express how urgent it really was.

I also tried to reach out to some of the AWS evangelists on Twitter, which didn't work since they went silent almost all the way through this outage.

All in all, it took roughly five hours to get the volume back and another four to five to recover the database. As far as I can tell, nothing was lost.

And in our defense — we were well aware of this SPOF and already had plans to move to a more redundant approach — I have another blog post in draft about evaluating alternatives (Membase).

Numero due

The second critical piece of infrastructure that failed on us was our hosted BigCouch cluster with Cloudant.

We managed to (manually) fail over to their cluster in us-west-1 later in the day and brought the service back up. We would have done this earlier, but AWS suggested the outage would only last a few hours, which is why we wanted to avoid the hassle of having to sync up clusters later on.

Side note: Cloudant is still (day three of the downtime) trying to get all the pieces back online. Kudos to everyone at Cloudant for their hard work and patience with us.

Lesson learned for myself: when things fail which are not within your reach, it's pretty hard to do anything and stay calm. It helps to keep everyone busy, so our team tried to reach out to all customers (we average about 200,000 users per day) via Twitter and Facebook, and it looks like we've tackled that well.

Un trio!

Well, I don't have much to say about Amazon RDS (hosted MySQL in the cloud), except that it didn't live up to our expectations: it costs a lot of money, but we learned that apparently that doesn't buy us anything.

Looking at the CloudWatch statistics associated with our RDS setup (or EBS in general), I'm rather wary and don't even know if they can be trusted. In the end, I can't really say how long RDS was down or failed to fail over, but it must have been back before I got a handle on our own MySQL server.

The rest?

The rest of our infrastructure seems fine on AWS — most of the servers are stateless (Shared nothing, anyone?) and no EBS is involved.

And in the absence of EBS, there are no issues to begin with. Everything continued to work just as expected.

Design for failure.

This is not really a take away, but a no-brainer. It's also not limited to AWS or cloud computing in general.

You should design for failure when you build any type of application.

In terms of cloud computing it's really not as bad as ZOMG MY INSTANCE IS GONE!!!11, but it certainly can happen. I've heard people claim that EC2 instances constantly disappear, and with my background of almost two years on AWS, I know that's just not true.

Designing for failure should take the following into account:

Services become unavailable

These services don't have to run in the cloud; they can become unavailable on bare metal too. For example, a service could crash, or there could be a network partition or maintenance involved. The bottom line: how does your application deal with it?
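As a tiny illustration (a sketch only, with a placeholder URL standing in for whichever service your application depends on), a client can retry with a short back-off instead of giving up on the first error:

# try the request up to three times, pausing a little longer after each failure
for i in 1 2 3; do
    curl --fail --max-time 5 http://example.com/health && break
    sleep $((i * 2))
done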

Services run degraded

For example, higher latency between the services, slower response time from disk — you name it.

The unexpected

Sometimes, everything is green, but your application still chokes.

While testing for the unexpected is of course impossible, validating what comes in is not.

Recovery

I'm not sure if a fire drill is necessary, but it helps to have a plan for troubleshooting issues, to be able to recover from an outage.

In our case, we log almost anything and everything to syslog and use loggly to centralize logs. loggly's nifty console provides us with great insight into the state of our application at any time.
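Anything that isn't syslog-aware on its own can be fed into the same pipeline with the standard logger utility. A minimal sketch, with a made-up tag and message:

# send a tagged message to the local syslog daemon, which then travels
# along with everything else to the centralized logging service
$ logger -t myapp -p user.err "session store unreachable, retrying"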

In addition to the centralized logging, we have a lot of monitoring in place using Ganglia and Munin. Monitoring is an ongoing project for us, since it seems like once you start, you just can't stop. ;-)

And last but not least: we can launch a new, fully configured EC2 instance with a couple of mouse clicks using Scalarium.

I value all these things equally — without them, troubleshooting and recovery would be impossible or at least a royal PITA.

Don't buy hype

So, to get to the bottom of this (still ongoing) event: I'm not particularly pissed that there was downtime. Of course I could live without it, but what I mean is: things are bound to fail.

The truth, though, is that Amazon's product descriptions are not exactly honest, or at the very least leave everyone a lot of room for interpretation. How much interpretation, you ask? I'm sure you could put ten cloud experts into one room and come away with 15 different opinions.

For example, attaching an attribute like highly available to a service creates different expectations for different people.

Let me break it down for you: highly available, in AWS-speak, means "it works most of the time".

What about an SLA?

Highly available is not an SLA with Erlang's infamous nine nines.

On paper, a Multi-AZ deployment of Amazon RDS gets pretty close to what people generally expect from highly available: MySQL master-master replication and backups — in multiple datacenters. As of today we all know: even these things can fail.

And speaking of SLAs: it looks like none of the failing services are covered by one, so AWS' track record remains clean. EBS is not explicitly named in the SLA, and by the way, neither is RDS. Amazon's SLA — as far as EC2 is concerned — covers some but not all network outages. Since I was able to access the instance the entire time, none of this applies here.

Multi-Zone

On Twitter, people are quick to suggest that everyone who complains now should have had a setup spanning multiple availability zones.

Here's what I think about it:

  • I find it rather amusing that apparently everyone on bare metal runs in multiple datacenters.

  • When people suggest multi-zone (or multi-AZ in Amazon-speak), I think they really mean multi-region, because a zone is effectively us-east-1a, us-east-1b, us-east-1c or us-east-1d. Since all datacenters (= availability zones) in the us-east-1 region failed on 2011/04/21, your multi-zone setup would not have covered your butt. Even Amazon's Multi-AZ RDS failed.

  • Little do people know, but the zone identifiers — e.g. us-east-1a, us-east-1b — are tied to customer accounts. So, for example, Cloudant's us-east-1a may be my us-east-1c; it may or may not be the same. This is why, in many cases, AWS never calls out explicit zones in outages. It also makes it somewhat hard to plan ahead within a single region.

  • AWS sells customers on the idea that an actual Multi-AZ setup is plenty. I don't know too many companies who do multi-region (maybe SimpleGeo?). Not even Netflix does multi-region, but I guess they managed to sail around this disaster because they don't use EBS.

  • In the end it shouldn't be necessary to do a multi-region setup (and deal with its caveats), since according to AWS the different zones inside a region are different physical locations (let's call them datacenters) to begin with. Correct me if I'm wrong, but the description says different physical location, not just another rack in the same building or another port on the core switch.

Communication

Which brings me to the most important point of my blog entry.

In a nutshell, when you build for the AWS platform, you're building for a black box. There are a couple of papers and blog posts where people try to reverse-engineer the platform and write about its behavior. The problem with these is that most people are guessing, though often (depending on the person writing, of course) it seems to be a very well educated guess.

Roman Stanek blogged about communication between AWS and its customers, so head on over; I pretty much agree with everything he has to say.

Fin

So what exactly is my takeaway? In terms of technical details, and as far as redundancy is concerned: not so much.

Whatever you do to run redundantly on AWS applies to setups in your local colocation or POP as well. And in theory, AWS makes it easier to leverage multiple datacenters (availability zones) and even achieve something of a global footprint by distributing across different regions.

The real question for anyone to ask is: is AWS fit to host anything which requires permanent storage? (I'm inclined to say no.)

That's all.