History with time stamps!

When reviewing the history file in bash, it's frustrating not to know when a command was executed.  Setting the HISTTIMEFORMAT variable in ~/.bashrc adds a timestamp to every command.

# ~/.bashrc
HISTTIMEFORMAT="%m/%d/%y %I:%M:%S %p "
Sample output:

525  05/21/09 07:56:46 PM tail -f /var/log/messages  /var/log/secure

As you can see, the command is preceded by the line number and a timestamp.
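The effect can be seen without opening a new terminal; the sketch below uses `history -s`, which simply appends an entry to the history list, so it also works in a non-interactive shell.

```shell
# enable history in a non-interactive shell so the demo works in scripts too
set -o history
HISTTIMEFORMAT="%m/%d/%y %I:%M:%S %p "
history -s "tail -f /var/log/messages"   # record a sample command
history                                  # each entry now shows a timestamp
```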

Software RAID and GRUB

When building out a system with a boot partition on software RAID, it is critical to install GRUB on both drives so that if one fails, the other can be used to boot the system.
1. Make sure that the RAID volume is synchronized (assuming /dev/md0 for /boot):

mdadm -D /dev/md0
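The array state can also be checked quickly via /proc/mdstat; a resync in progress shows a progress bar, while a clean two-disk mirror shows `[UU]`. This is a sketch and device names will vary.

```shell
cat /proc/mdstat                      # look for [UU] rather than [U_] or a resync line
mdadm -D /dev/md0 | grep -i state     # should report "clean" or "active", not "degraded"
```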

2. Install grub on the first drive:

# grub
Probing devices to guess BIOS drives. This may take a long time.

grub> root (hd0,0)
Filesystem type is ext2fs, partition type 0xfd

grub> setup (hd0)
Checking if "/boot/grub/stage1" exists... no
Checking if "/grub/stage1" exists... yes
Checking if "/grub/stage2" exists... yes
Checking if "/grub/e2fs_stage1_5" exists... yes
Running "embed /grub/e2fs_stage1_5 (hd0)"...  15 sectors are embedded.
Running "install /grub/stage1 (hd0) (hd0)1+15 p (hd0,0)/grub/stage2 /grub/menu.lst"... succeeded

3. Install grub on the second drive:

grub> root (hd1,0)
Filesystem type is ext2fs, partition type 0xfd

grub> setup (hd1)
Checking if "/boot/grub/stage1" exists... no
Checking if "/grub/stage1" exists... yes
Checking if "/grub/stage2" exists... yes
Checking if "/grub/e2fs_stage1_5" exists... yes
Running "embed /grub/e2fs_stage1_5 (hd1)"...  15 sectors are embedded.
Running "install /grub/stage1 (hd1) (hd1)1+15 p (hd1,0)/grub/stage2 /grub/menu.lst"... succeeded

grub> quit
That should allow booting from either drive without modification of grub.conf or /etc/fstab.
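The interactive steps above can also be scripted; a sketch using the GRUB legacy shell in batch mode, assuming the same (hd0)/(hd1) device mapping:

```shell
# non-interactive version of the two root/setup sequences above
grub --batch <<'EOF'
root (hd0,0)
setup (hd0)
root (hd1,0)
setup (hd1)
quit
EOF
```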

Persistent Debian Daemons

As a long-time Red Hat / Fedora user, I found starting daemons at boot on Debian to be a mystery.  I recently took the time to search for the answer rather than placing the start command in rc.local, and it's not that bad.  As long as the init script exists in /etc/init.d, run the following command to enable it at boot:

update-rc.d <daemon> defaults
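As a concrete sketch (using a hypothetical "mydaemon" init script), enabling, verifying, and later disabling look like:

```shell
update-rc.d mydaemon defaults     # create the S/K symlinks in /etc/rc?.d
ls /etc/rc2.d/ | grep mydaemon    # verify the start link exists for runlevel 2
update-rc.d -f mydaemon remove    # undo: remove the symlinks again
```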

This handy tip was taken from the official Debian docs at:


Replacing a MySQL Master Node

I recently had to build out a new MySQL node and replace an existing replication master.  Here is the basic procedure that I followed.

1. Build out the new server
2. Install MySQL.  Place the data directory on a logical volume with at least 10% free space in the volume group (for snapshot backups).
3. Take a good backup of the database(s) from an existing slave.
4. Restore backup to newly built server/mysql instance.
5. Set master to current master.
6. Lock tables on master.
7. Cut over to new master when replication is caught up.

I won’t belabor the issue of building out a server or installing MySQL.  I used CentOS 5.3 and the Percona 5.0.77 binaries for this server.

Taking a Restorable Backup from an Existing Slave

To create a restorable point-in-time backup, all writes to the database must be stopped.  On the slave, I simply stopped replication with the 'stop slave;' command.

mysql> stop slave;
Query OK, 0 rows affected (0.12 sec)

Also, issue a 'show slave status\G' and note the master log file name and position.  These will be used to set up replication on the new master, which will allow it to sync with the current master.

To perform the backup, I used a combination of mysqlhotcopy and mysqldump.  In this case, I had less than 1 MB of data in InnoDB tables and 40 GB of data in MyISAM tables.  mysqlhotcopy is used to back up the MyISAM data while mysqldump handles the InnoDB data.  Note that unless you stop the slave, this does not give a perfect point-in-time backup, as the InnoDB tables might change between the time mysqlhotcopy finishes and the time mysqldump finishes.
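A sketch of the two backup commands; the database name "appdb", the InnoDB table "sessions", the backup paths, and the credentials are all hypothetical placeholders.

```shell
# MyISAM tables: copy the raw table files while the slave is stopped
mysqlhotcopy --user=backup --password='<password>' appdb /backups/myisam/

# InnoDB tables: dump them separately (here a hypothetical table "sessions")
mysqldump --user=backup --password='<password>' appdb sessions > /backups/innodb.sql
```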

After the backup is complete, start the slave thread on the MySQL instance where the backup was taken.

mysql> start slave;
Query OK, 0 rows affected (0.10 sec)

Restore Backup to New Server

To restore the backup, I copied the MyISAM files to the data directory on the destination host and changed their ownership to the mysql user and group.  I then started the MySQL server instance and imported the mysqldump data using the mysql command.
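The restore amounts to a file copy plus a dump import; a sketch, with hypothetical paths, a default data directory, and the same placeholder database name "appdb":

```shell
# copy the MyISAM table files into place and fix ownership
cp -a /backups/myisam/appdb /var/lib/mysql/
chown -R mysql:mysql /var/lib/mysql/appdb

# start the server, then re-import the InnoDB dump
service mysqld start
mysql appdb < /backups/innodb.sql
```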

Set (new) Master to Current Master

First, verify that the replication user can connect to the current master from the new master.  Once this is verified, point the new master at the current master by issuing the following command (or similar) on the new master, filling in the log file name and position noted earlier from the slave.  This will allow the new master to sync data with the current master.

mysql> CHANGE MASTER TO
    ->   MASTER_HOST='<IP of current master>',
    ->   MASTER_USER='<replication user>',
    ->   MASTER_PASSWORD='<password>',
    ->   MASTER_LOG_FILE='<log file from slave status>',
    ->   MASTER_LOG_POS=<position from slave status>;

You should be able to issue a ‘show slave status\G’ command and see that replication is behind and catching up.

Once replication has caught up, it is safe to cut over to the new master.  To do this, all writes to the current master must be stopped and the new master must be allowed to sync completely as far as replication is concerned.

Old master node:

mysql> flush tables with read lock;

Once this occurs, point any existing slaves at the new master and stop using the old master.  Also, stop the slave thread on the new master and issue 'reset slave;' to remove all "master" variables.

New master node:

mysql> stop slave; reset slave;

Current slave nodes (use 'show master status' on the new master to get the binary log file name and position):

mysql> CHANGE MASTER TO MASTER_HOST='<IP of new master>',