Thursday, May 21, 2009

The Second Law of Automated Testing

I was invited by Paul Moore and Paul Hodgetts to give a presentation at the Agile/XPSoCal monthly evening meeting, which happened last night in Irvine, at the Capital Group offices. The topic of my presentation was 'How to Get to "Done" - Agile and Automated Testing Techniques and Tools'. I think it went pretty well: there were 30+ people in attendance, and I got a lot of questions at the end, which is always a good sign. Here are my slides in PDF format. I presented a lot of tools as live demos outside of the slides, but I hope that the points I made in the slides will still be useful to some people.

In particular, I want to present here what I claim to be...

The Second Law of Automated Testing

"If you ship versioned software, you need automated tests."

At the talk last night I was waiting to be asked about the first law of automated testing, but nobody ventured to ask that question ;-) (for the record, my answer would have been 'you need to buy me a beer to find that out').

But I strongly believe that if you have software that SHIPS and that is VERSIONED, then you need automated tests for it. Why? Because otherwise, how would you know that version 1.4 didn't break things horribly compared to version 1.3? You either employ an army of testers to manually re-test each and every 1.3 feature that is present in 1.4, or you use a strong suite of automated regression tests that covers all major features of 1.3 and shows you right away if any of them were broken in 1.4. Your choice.

Notice that I also qualify the software as 'software that ships'. This implies that you hopefully use sound software engineering processes and techniques to build that software. I am not referring to toy projects, or 1-page Web sites for temporary events, or even academic projects that are never shipped widely to users. All these can probably survive with no automated tests.

If you think you have some software that ships and is versioned, but you found that you're doing very well with no automated tests, I'd like to hear about it, so please leave a comment.

Friday, May 15, 2009

MySQL fault-tolerance and disaster recovery techniques

Any non-trivial MySQL installation needs to be protected against failures, and especially so in a 'cloud' environment, where failure should be expected. I've had bad experiences with MySQL clustering (I tenderly refer to it as MySQL clusterf**k), so I'm going to talk about MySQL replication in this post.

The most common fault-tolerance scenario in a MySQL environment is to have a master database server and a pool of load-balanced slave database servers. Hopefully your application is configurable so it can write to the master DB and read from the slave DB pool. If it is not, you can still use this technique (with some limitations) by going through MySQL Proxy, as detailed in another blog post of mine.
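
As a rough illustration (the setting names and hostnames below are hypothetical, not taken from any particular framework), the application's database settings might be split along these lines:

# writes go to the master, reads are load-balanced across the slave pool
DB_WRITE_HOST = master-db.example.com
DB_READ_HOSTS = slave-db1.example.com,slave-db2.example.com,slave-db3.example.com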

There is plenty of documentation available on setting up MySQL replication. I will jot down here some notes on things I find myself doing over and over again, in a condensed format that hopefully will benefit others too.

Step 0 is to enable binary logging on the master database. That's all you need to do for a MySQL DB server to be able to function as a master. To achieve this, you can add lines like these in /etc/my.cnf and restart mysqld:
 
server-id = 1
log-bin = /var/lib/mysql/mysql-bin

Another option you might want to set is the binlog format. For recent MySQL versions, the default is STATEMENT. For some types of updates to the master, I found it is better to specify ROW as the binlog format (for an explanation of the differences between the two formats, and for more information than you ever wanted about binary logging, see the official documentation):
binlog_format = ROW
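
You can also check or change the binlog format at runtime from a mysql prompt (SET GLOBAL only affects new sessions, so keep the my.cnf setting for anything permanent):

SHOW VARIABLES LIKE 'binlog_format';
SET GLOBAL binlog_format = 'ROW';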
You also need to create a MySQL user on the master DB and grant it REPLICATION SLAVE rights. You can use a statement like this:
GRANT REPLICATION SLAVE ON *.* TO 'replicant'@'IP_of_slave_DB' IDENTIFIED BY 'somepassword';
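
To double-check that the replication user was created with the right privileges, you can run:

SHOW GRANTS FOR 'replicant'@'IP_of_slave_DB';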


Setting up a MySQL slave when you can lock tables on the master

This is the recommended way of setting up a MySQL slave DB machine. It requires locking the tables against writes on the master DB, which is something you may or may not be able to afford. Here are the steps you need to go through:

1) Connect to the master DB server and issue this command:

FLUSH TABLES WITH READ LOCK;

2) Note the binlog file name and position on the master by running this command:

SHOW MASTER STATUS;
+------------------+----------+--------------+------------------+
| File             | Position | Binlog_Do_DB | Binlog_Ignore_DB |
+------------------+----------+--------------+------------------+
| mysql-bin.000004 | 87547369 |              |                  |
+------------------+----------+--------------+------------------+
1 row in set (0.01 sec)


3) Leave the current mysql session open so that the tables are still locked on the master, and in a different session take a database dump of the mysql database and of the application database on the master. You can use a command line such as:


mysqldump -u root -p$MY_ROOT_PW --databases mysql \
--lock-all-tables | /bin/gzip > mysql.sql.gz

mysqldump -u root -p$MY_ROOT_PW --databases $MYDB \
--lock-all-tables | /bin/gzip > $MYDB.sql.gz

4) Once the dump is done (a process which on a very large database can take hours), go ahead and unlock the tables in the first MySQL session:

UNLOCK TABLES;

5) Now you're ready to set up a MySQL slave database. It's a good idea to set up binary logging on all your slaves, so that if your master DB fails, any slave can be promoted to a master. If you do turn binary logging on, do NOT also enable log-slave-updates (because if you do, and if you promote a slave to a master, then the other slaves might receive some updates twice -- complete explanation available here).

The DB machine you want to set up as a slave should have lines similar to these in its /etc/my.cnf file (server-id needs to be different from the master ID and any other slave IDs that talk to the same master):

server-id = 2
log-bin = /var/lib/mysql/mysql-bin
binlog_format = ROW

6) On the machine you want to set up as a slave, load the mysql dumps of the mysql DB and of your application database (the ones you took in step 3). Note that you may need to create the application database before you can load the application DB dump into it.
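
A minimal sketch of this step, assuming the gzipped dump files from step 3 were copied over to the slave (and the root password is in $MY_ROOT_PW, as before):

# create the application database in case the dump doesn't already do it
mysql -u root -p$MY_ROOT_PW -e "CREATE DATABASE IF NOT EXISTS $MYDB;"
# load the mysql DB dump, then the application DB dump
gunzip < mysql.sql.gz | mysql -u root -p$MY_ROOT_PW
gunzip < $MYDB.sql.gz | mysql -u root -p$MY_ROOT_PW
# reload the grant tables picked up from the mysql DB dump
mysql -u root -p$MY_ROOT_PW -e "FLUSH PRIVILEGES;"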

7) On the slave, fire up a mysql prompt and use the 'CHANGE MASTER TO' command to specify the master DB, the binlog file and the binlog position (you need to use the values from step 2):

STOP SLAVE;
RESET SLAVE;
CHANGE MASTER TO
MASTER_HOST='master_database_server_name',
MASTER_USER='replicant',
MASTER_PASSWORD='somepassword',
MASTER_LOG_FILE='mysql-bin.000004',
MASTER_LOG_POS=87547369;
START SLAVE;

8) Run the 'SHOW SLAVE STATUS \G' command on the newly created slave DB and make sure that the values for both Slave_IO_Running and Slave_SQL_Running show as YES, and that Seconds_Behind_Master is 0 (it can take a while initially for this value to converge to 0, but it should do so). Here is an example of the output of this command:

*************************** 1. row ***************************
Slave_IO_State: Waiting for master to send event
Master_Host: my_master_host
Master_User: replicant
Master_Port: 3306
Connect_Retry: 60
Master_Log_File: mysql-bin.000004
Read_Master_Log_Pos: 157767054
Relay_Log_File: crt-relay-bin.000012
Relay_Log_Pos: 112340434
Relay_Master_Log_File: mysql-bin.000004
Slave_IO_Running: Yes
Slave_SQL_Running: Yes
Replicate_Do_DB:
Replicate_Ignore_DB:
Replicate_Do_Table:
Replicate_Ignore_Table:
Replicate_Wild_Do_Table:
Replicate_Wild_Ignore_Table: MYDB.tmp\_%
Last_Errno: 0
Last_Error:
Skip_Counter: 0
Exec_Master_Log_Pos: 112340289
Relay_Log_Space: 112340630
Until_Condition: None
Until_Log_File:
Until_Log_Pos: 0
Master_SSL_Allowed: Yes
Master_SSL_CA_File: /etc/pki/tls/cert.pem
Master_SSL_CA_Path:
Master_SSL_Cert:
Master_SSL_Cipher:
Master_SSL_Key:
Seconds_Behind_Master: 0
Master_SSL_Verify_Server_Cert: No
Last_IO_Errno: 0
Last_IO_Error:
Last_SQL_Errno: 0
Last_SQL_Error:
1 row in set (0.00 sec)

Note that I am explicitly excluding from replication any tables whose names start with tmp; in my case these are temporary tables created by certain operations on the master DB and not needed on the slaves. To do this, I added this line to /etc/my.cnf on the slaves (all replication filtering is done at the slave level):
replicate-wild-ignore-table = MYDB.tmp\_%

Promoting a slave database to master

Let's say disaster strikes and your master DB goes down. At this point, if you have replication set up as above, you can easily promote one of the slave DB machines to master, and reconfigure the other slaves to use the newly promoted machine as their master. The official documentation for this scenario is here and it's very good. Let's say you have master M01 and slaves S01, S02 and S03. M01 dies. You want to promote slave S01 to master, and set up S02 and S03 to replicate from S01.

On S01, run these commands at the MySQL prompt:
STOP SLAVE;
RESET MASTER;
CHANGE MASTER TO MASTER_HOST='';
On S02 and S03, run these commands at the MySQL prompt:
STOP SLAVE;
RESET SLAVE;
CHANGE MASTER TO MASTER_HOST='S01';
START SLAVE;

Now if you run 'SHOW SLAVE STATUS\G' on the slaves, you should see no errors, and you should also see the master DB hostname shown as 'S01' instead of 'M01'.
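
A quick way to spot-check this on each remaining slave:

mysql -u root -p$MY_ROOT_PW -e 'SHOW SLAVE STATUS\G' | egrep 'Master_Host|Running|Errno|Seconds_Behind_Master'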

While we're on the subject of switching the master DB, it can happen that the slave DBs will get some updates from the newly promoted master that will conflict with their current view of the database. For example, they can receive from the master a duplicate insert, or a delete on a row that doesn't exist in their database. In these cases, to bring the slave to a sane state, you can issue commands like this one, where N is 1 or 2 (see full explanation here):

STOP SLAVE;
SET GLOBAL SQL_SLAVE_SKIP_COUNTER = N;
START SLAVE;

You can try running the skip command repeatedly until the slave goes back to a successful replication state.
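
If there are many such conflicting statements, a quick-and-dirty shell loop can do the skipping for you. This is just a sketch I'm adding here, not something from the official docs, and every skipped statement is data you're choosing to ignore:

# keep skipping one statement at a time while SHOW SLAVE STATUS reports a SQL error
while mysql -u root -p$MY_ROOT_PW -e 'SHOW SLAVE STATUS\G' | grep -q 'Last_SQL_Errno: [1-9]'; do
mysql -u root -p$MY_ROOT_PW -e "STOP SLAVE; SET GLOBAL SQL_SLAVE_SKIP_COUNTER = 1; START SLAVE;"
sleep 5
done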

Setting up a slave database from a hot backup of the master

Let's say you have your master database up and running, and you want to set up a new slave without locking the tables for writes on the master. In this case, you can use a product such as InnoDB Hot Backup, which is very much worth its $500/year/host price. What's more, they provide a 30-day free evaluation binary tied to the host name of your DB machine, which is nice if you need something in a critical situation, or if you want to test it before committing to pay.

Here's a procedure for setting up a new slave DB from a hot backup on the master. The InnoDB Hot Backup documentation is very good, and what follows is a subset I used from that documentation.

1) On the master, create two mini configuration files which are tiny subsets of my.cnf. Call one for example my.cnf.source and the other one my.cnf.destination. The source file needs to contain lines similar to these referring to the location of your live MySQL installation:

# cat /etc/my.cnf.source
[mysqld]
datadir = /var/lib/mysql/
innodb_data_home_dir = /var/lib/mysql/
innodb_data_file_path = ibdata1:10M:autoextend
innodb_log_group_home_dir = /var/lib/mysql/
set-variable = innodb_log_files_in_group=2
set-variable = innodb_log_file_size=512M
The destination file needs to contain similar lines, but pointing to a directory where the backup files will be created (that directory needs to be empty). For example:

# cat /etc/my.cnf.destination
[mysqld]
datadir = /var/hot-backups
innodb_data_home_dir = /var/hot-backups
innodb_data_file_path = ibdata1:10M:autoextend
innodb_log_group_home_dir = /var/hot-backups
set-variable = innodb_log_files_in_group=2
set-variable = innodb_log_file_size=512M

2) On the master, run the ibbackup binary and point it to the 2 configuration files:

# /path/to/ibbackup /etc/my.cnf.source /etc/my.cnf.destination 
 
This step can be quite lengthy, depending on the size of your database, but note that you don't need to lock any tables on the master during this time. Upon the completion of this step, you should see an InnoDB data file (its name is the one you specified in the innodb_data_file_path variable in the config files), and an InnoDB transaction log called ibbackup_logfile. Note that this is not identical to the InnoDB logs on the master. To create those logs, you need to go to the next step.

3) On the master, apply the transaction logs created by the hot backup process by running this command:

# /path/to/ibbackup --apply-log /etc/my.cnf.destination

When this is done (again it can take a while), you should see N log files called ib_logfile1, ib_logfile2, ..., ib_logfileN in the destination directory -- where N is the value of the variable innodb_log_files_in_group that you set in the configuration file.

4) On the master, create a tar.gz of all directories in the MySQL datadir that contain MyISAM tables or .frm files belonging to InnoDB tables (the main one being of course the mysql directory, which contains the MyISAM tables for the mysql database -- assuming you've kept the default of MyISAM for the mysql DB).
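
As a rough example, assuming the default datadir and an application database called $MYDB (adjust the directory list to whatever actually contains MyISAM or .frm files in your setup):

cd /var/lib/mysql
# the mysql DB directory (MyISAM grant tables), plus the .frm files of the application DB
tar czf /tmp/mysql-dir.tar.gz mysql
tar czf /tmp/$MYDB-frm.tar.gz $MYDB/*.frm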

5) Now you're ready to transfer the data file created in step 2, the log files created in step 3, and the archives created in step 4 to a new machine running MySQL, which you intend to set up as a slave DB. Simply scp the files over. On the target machine, stop mysql, move /var/lib/mysql (or wherever your datadir is) to /var/lib/mysql.bak, create a brand new /var/lib/mysql directory and drop all the files you transferred into that directory (un-tar-ing the tar.gz files appropriately). Also run 'chown -R mysql:mysql /var/lib/mysql' so the files are owned by the mysql user. Finally, make sure the my.cnf file on the slave has binlog enabled (in case you ever need to promote this slave to a master).
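
Roughly, the sequence on the target machine looks like this (paths assume the default datadir, and the init script name may differ on your distribution):

/etc/init.d/mysqld stop
mv /var/lib/mysql /var/lib/mysql.bak
mkdir /var/lib/mysql
# copy in the ibdata file, the ib_logfile* files, and un-tar the .tar.gz archives here
chown -R mysql:mysql /var/lib/mysql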

6) Restart the mysqld process on the target machine, and make note of the binlog file and position, which are captured in the mysql log file. You should see a line similar to this:
InnoDB: Last MySQL binlog file position 0 6199825, file name /var/lib/mysql/mysql-bin.000008
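On CentOS the MySQL log usually ends up in /var/log/mysqld.log (your log location may differ), so something like this will pull out the line you need:

grep 'Last MySQL binlog file position' /var/log/mysqld.log | tail -1
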
Now go to the mysql prompt on the target machine and run:

STOP SLAVE;
RESET SLAVE;
CHANGE MASTER TO
MASTER_HOST='master_database_server_name',
MASTER_USER='replicant',
MASTER_PASSWORD='somepassword',
MASTER_LOG_FILE='mysql-bin.000008',
MASTER_LOG_POS=6199825;
START SLAVE;

At this point, 'SHOW SLAVE STATUS\G' should show no errors, and the new slave should be replicating correctly from the master DB server. It may take a while for the slave to catch up, depending on when you took the hot backup on the master.

Before I finish this post, one word of advice when it comes to mounting EBS volumes in EC2: do not mount /var by itself on an EBS, because if for some reason the EBS becomes unavailable or fails, you won't be able to ssh back into your instance. Why is that? Because sshd (at least in CentOS) needs /var/empty to be available for privilege separation purposes.

If you want to take advantage of an EBS on an EC2 instance functioning as a MySQL database server, it's better to either mount /var/lib/mysql on an EBS, or specify a non-default data directory for MySQL, which you then mount from an EBS.
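
Concretely, either of these approaches works; the device name /dev/sdf and the mount point /ebs below are just examples:

# option 1: mount the EBS volume directly at the MySQL datadir
mount /dev/sdf /var/lib/mysql

# option 2: mount the EBS volume elsewhere, e.g. /ebs, and point MySQL at it
mount /dev/sdf /ebs
# then in /etc/my.cnf:
# datadir = /ebs/mysql-data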

UPDATE: EC2 backup strategies

An anonymous comment reminded me that I also need to discuss backups. Doh. In an EC2 environment, it's very easy to back up a whole EBS volume by means of a snapshot.

Of course, if you do a snapshot with no other backups, the database files will be 'live', but I managed in one case to
1) detach an EBS containing /var/lib/mysql from an instance that was failing, and
2) attach the EBS to another instance and mount it in /var/lib/mysql

I then restarted mysqld on the new instance and everything worked as expected. This is NOT the recommended strategy however. What is recommended is to do a database dump (either a hot backup if you can afford it, or a simple mysqldump) to an EBS, and snapshot the EBS periodically.
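
As a sketch, assuming the dumps go to an EBS-backed directory such as /ebs-backups and that the EC2 API tools are installed (the volume id below is made up):

# dump the application DB to the EBS volume, then snapshot that volume
mysqldump -u root -p$MY_ROOT_PW --databases $MYDB | /bin/gzip > /ebs-backups/$MYDB-$(date +%Y%m%d).sql.gz
ec2-create-snapshot vol-12345678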

Alternatively, you can use various S3 utilities to capture the backups directly to S3. The EBS snapshot solution is better IMO because you can quickly recreate an EBS volume from a snapshot, then mount it to either the original instance, or to a new instance.

However, EBS volumes DO sometimes fail, so another thing to think about is to run your EC2 instances (especially your slave DBs) in different availability zones. We had an issue with 2 of our database servers failing at the same time in zone US-East-1a due to EBS issues, and the thing that saved us is that we had slaves in other availability zones that weren't affected.
