Do-It-Yourself Backup System Using Rsync
Written by Kevin Korb
as a presentation for GOLUG
This document is available at http://www.sanitarium.net/golug/rsync_backups_2010.html
- What is rsync?
Rsync is a program for synchronizing two directory trees, even if they are on different computers. It can run its host-to-host communication over ssh to keep things secure and to provide key-based authentication. If a file is already present on the target and is identical to the one on the source, the file is not transmitted at all. If the file on the target differs from the one on the source, only the parts that differ are transferred. These features greatly improve rsync's performance over a network.
- What are hard links?
Hard links are often compared to symlinks. They are normally created using the ln command, but without the -s switch. A hard link exists when two directory entries point to the same inode and disk blocks. Unlike with symlinks there isn't a file and a pointer to the file, but rather two names for the same file. If you delete either entry the other will remain and will still contain the data. Here is an example of both:
------------- Symbolic Link Demo -------
% echo foo > x
% ln -s x y
% ls -li ?
38062 -rw-r--r-- 1 kmk users 4 Jul 25 14:28 x
38066 lrwxrwxrwx 1 kmk users 1 Jul 25 14:28 y -> x
-- As you can see, y is only a pointer to x.
% grep . ?
x:foo
y:foo
-- They contain the same data.
% rm x
% ls -li ?
38066 lrwxrwxrwx 1 kmk users 1 Jul 25 14:28 y -> x
% grep . ?
grep: y: No such file or directory
-- Now that x is gone y is simply broken.
------------ Hard Link Demo ------------
% echo foo > x
% ln x y
% ls -li ?
38062 -rw-r--r-- 2 kmk users 4 Jul 25 14:28 x
38062 -rw-r--r-- 2 kmk users 4 Jul 25 14:28 y
-- They are the same file occupying the same disk space.
% grep . ?
x:foo
y:foo
-- They contain the same data.
% rm x
% ls -li ?
38062 -rw-r--r-- 1 kmk users 4 Jul 25 14:28 y
% grep . ?
foo
-- Now y is simply an ordinary file.
---------- Breaking a Hard Link ----------
% echo foo > x
% ln x y
% ls -li ?
38062 -rw-r--r-- 2 kmk users 4 Jul 25 14:34 x
38062 -rw-r--r-- 2 kmk users 4 Jul 25 14:34 y
% grep . ?
x:foo
y:foo
% rm y ; echo bar > y
% ls -li ?
38062 -rw-r--r-- 1 kmk users 4 Jul 25 14:34 x
38066 -rw-r--r-- 1 kmk users 4 Jul 25 14:34 y
% grep . ?
x:foo
y:bar
Why backup with rsync instead of something else?
- Disk based: Rsync is a disk based backup system. It doesn't use tapes, which are too slow to back up (and more importantly restore) modern systems with large hard drives. Also, disk based backup solutions are much cheaper than equivalently sized tape backup systems.
- Fast: Rsync only backs up what has changed since the last backup. It NEVER has to repeat the full backup, unlike most other systems that have monthly/weekly/daily differential configurations.
- Less work for the backup client: Most of the work in rsync backups, including the rotation process, is done on the backup server, which is usually dedicated to doing backups. This means that the client system being backed up is not hit with as much load as with some other backup programs. The load can also be tailored to your particular needs through several rsync options and backup system design decisions.
- Fastest restores possible: If you just need to restore a single file or set of files it is as simple as a cp or scp command. Restoring an entire file system is just the reverse of the backup procedure. Restoring an entire system takes a while, but it is less work than backup systems that require you to reinstall your OS first and about the same as other manual backup systems like dump or tar.
- Only one restore needed: Even though each backup is an incremental, they are all accessible as full backups. This means you only restore the backup you want instead of restoring a full and an incremental, or a monthly followed by a weekly followed by a daily.
- Cross platform: Rsync can back up and recover anything that can run rsync. I have used it to backup Linux, Windows, DOS, OpenBSD, Solaris, and even ancient SunOS 4 systems. The only limitation is that the file system the backups are stored on must support all of the file metadata that the file systems being backed up support. In other words, if you were to use a vfat file system for your backups you would not be able to preserve file ownership when backing up an ext3 file system. If this is a problem for you try looking into rdiff-backup.
- Cheap: It doesn't seem like it would be cheap to have enough disk space for 2 copies of everything and then some, but it is. With tape drives you have to choose between a cheap drive with expensive tapes or an expensive drive with cheap tapes. In a hard drive based system you just buy cheap hard drives and use RAID to tie them together. My current backup server uses two 500GB IDE drives in a software RAID-0 configuration for a total of 1TB for about $100, which is about 1/6th what I paid for the DDS3 tape drive that I used to use, and that doesn't even include the tapes that cost about $10/12GB.
- Internet: Since rsync can run over ssh and only transfers what has changed, it is perfect for backing up things across the internet, such as a web site at a web hosting company or even a co-located server. Internet based backup services are also becoming more and more popular, and rsync is the perfect tool to back up to such services.
- Do-it-yourself: There are FOSS backup packages out now that use rsync as their back end, but the nice thing here is that you are using standard command line tools (rsync, ssh, rm) so you can engineer your own backup system that will do EXACTLY what you want and you don't need a special tool to restore.
Why/When wouldn't you want to use rsync for backups?
- Databases: Rsync is a file level backup, so it is not suitable for databases. If your primary data is databases then you should look somewhere else. If you have databases but they are not your primary data then there are procedures below to integrate database backups into the rsync backups.
- Windows: If you plan to back up Windows boxes then rsync probably isn't for you. It is possible to back up Windows boxes with rsync, but the system recovery process is UGLY, and if you want a complete backup of the OS you will have to boot the computer into Linux or use Shadow Copy to be able to read some of the files. I have yet to find a simple comprehensive procedure for restoring a complete Windows system based on a copy of all files from C:. If someone has such a procedure I would love to see it.
- Compression: Since rsync doesn't put the files into any kind of archive there is no compression at all. In most cases it is still more cost effective to store uncompressed data on a hard drive than it is to store compressed data on a tape or some other media, but this might not be true for everyone. Also, most modern file formats are already compressed, so in many cases the compression wouldn't help anyway.
- Commercial support: Like most of the stuff I talk about, there is no real commercial support for this. If you want a backup software vendor that you can call and beg for help from then go buy some big commercial backup system, but expect to pay a ton of money for something that isn't anywhere near as flexible as rsync.
- Security: Since rsync runs over ssh, you would normally set it up so that root on your backup server can ssh into all of your other machines as root without a password. This means that the security of your backup server becomes very important, as anyone who roots it can root any other server with one command. There are ways you could design around this, or you could simply require the person running the backup to type in the root passwords as it goes, but those solutions all over-complicate things. Giving your backup server all of the keys isn't really as bad as it sounds when you consider that in any other backup system the backup server would still have some kind of root access to the other servers, as well as a complete copy of them that a hacker could use to find vulnerabilities. Note that it is possible to restrict the ssh key used by the backups to only work from the backup server using the from= parameter in the authorized_keys file.
- Do-it-yourself: Again, this is a do-it-yourself system. You have to decide how you want your backups to work and how you want them organized. If you don't want to write/modify shell scripts then look for something else, or look at the available backup systems that use rsync as their back end. Examples of less do-it-yourself oriented backup systems include rsnapshot, dirvish, and hopefully the one I created based on rsync.
Why not just use RAID / Is this like using RAID-1? / Is this like DRBD?
I don't think I can ever say this enough times.... RAID is NOT a backup system! RAID (other than level 0) does a wonderful job of protecting your data from disk failures. However, it provides absolutely NO protection against file corruption, files destroyed by a virus or a hacker, or the "oops, I deleted the wrong file" problem which most of us have encountered. There is a time and a place for RAID, and RAID is not always needed, however data should ALWAYS be backed up regardless of what media it is stored on or how redundant that media may be. Networked mirroring solutions like DRBD have the same drawbacks as RAID, as they are a simple real-time mirror. My general rule of thumb is that if you can't restore your data to the way it was last Monday or last night using a storage device other than the one the data was on last Monday, then you don't have a backup system.
Do I need to backup the OS or just the data?
In my opinion yes, you need to back up the OS as well as the data. Many people feel that the OS is easily recreated by doing a re-install plus loading a list of applications that was saved during the backup run. While this is true in theory, it isn't so easy in practice. If you ever have the catastrophic loss of a server you will find out very quickly that every minute counts. If you have a backup of the OS and an established and practiced procedure for restoring it, the recovery will go very quickly and it will probably work the first time. If your recovery procedure includes "install the OS" and "install all the applications", expect to add a full day of listening to users complain while you do those steps. Also, in terms of gigabytes, the OS is usually tiny compared to the data it supports. The extra disk space required to back up the OS will probably not even make a difference in the choice of how big to make the backup system. With the typical ratio of OS vs data it is just silly to not back up the OS. Finally, the worst case scenario is that your data-only backup system misses some configuration or data file that was assumed to be part of the OS but had been modified. If you aren't backing it up you will simply lose it.
Why all this talk of a backup server? Why not just use an external hard drive?
While it is completely possible to do the backups this way using these procedures (and I have done it this way myself) there are a couple of drawbacks...
- Security: One of the reasons we have backups is the possibility of malicious activity (hackers, worms, trojans, etc). If your backup device is plugged into the computer being backed up, then any malicious user or software that can destroy your data can also destroy your backups. Keeping your backups on a separate isolated server protects them from this possibility. Note that this is also why I prefer to pull backups from a script running on the backup server rather than pushing backups from a script running on the backup clients.
- Performance: Rsync's ability to transfer only the parts of a file that have changed does not work on local transfers. This is because the feature would actually be counter-productive on a local transfer. Rsync would have to read and hash both versions of the file and then write out the new version, instead of simply reading from the source and writing to the target. Also, most external hard drives are USB, which is a pretty slow interface. Note that this is also true if you use a network mount (such as NFS, Samba, or CIFS) to access the remote data instead of a network transport like ssh or rsyncd. Read the rsync man page section on --whole-file for more information.
What about a Network Attached Storage (NAS) device instead of a server?
This depends on the quality of the NAS device. Rsync is designed to reduce network IO at the expense of disk IO and CPU cycles. It can do this because normally you have two instances of rsync running on two systems with a network protocol in between. Both instances of rsync have local disk access to one side of the transfer, so they can do calculations to reduce the amount of data that needs to be transmitted across the network. If rsync is only running at one end of the network connection, then disk IO is really network IO and those features are automatically disabled. Some of the higher end NAS devices support rsync directly and can be treated exactly like a standard rsync backup server. Unfortunately not all NAS devices are this smart. Some will only provide access via network mounts, and some only support CIFS instead of NFS. If yours doesn't at least support NFS I would not suggest using it for rsync backups. If your best choice is NFS then note that --whole-file will be forced by rsync to reduce the performance impact of NFS.
How do you do off-site/off-line backups with rsync?
The best way to do an off-site or off-line backup is to do the rsync backup like normal and then back up the backup to tape or whatever media you want to use for your off-line/off-site backups. This gives you all the speed advantages of rsync during the actual backups and restores, while allowing you to do the slower tape backups during the day when the backup server would otherwise be idle. Note that I do not recommend using removable hard drives for off-site rsync backups. Hard drives have very fragile moving parts, and if you are constantly transporting them they will not last long and will probably fail when you need them most, as that is when they are being transported.
How do you handle databases?
Databases can't just be backed up like files. This is because database engines are constantly making changes to the database files at the block level. If you backed them up with a file based tool like rsync, the backup would be inconsistent and possibly even unusable. The best way to back up most databases is to take an LVM snapshot of the file system holding the database, then back up the snapshot with rsync. This gives you all the advantages of an rsync backup with as little impact on the running database as possible. If you can't use LVM snapshots then your next best bet is to use the database specific tools (mysqldump, pg_dump, etc) to dump the database contents to files that can be backed up. If all else fails you can lock or shut down the database engine so the files are not changing during the backup, but this will be a high-impact outage.
How much space does it take to do rsync backups while keeping old copies?
This completely depends on how much change there is between each backup and how many backups you retain. I have seen it as low as 5% and as high as 40%, but it is completely dependent on your data and your retention policy.
How do I organize the backups?
Since this is a do-it-yourself system this is totally up to you to design. I have my backup storage mounted under /backup and put all of my rsync backups under /backup/rsync. Within that directory I make a directory for each host that gets backed up. Then for each backup of each file system I change '/' to '_' in the mount point name and time stamp the file system, so my backup of /home/asylum done at 17:47 on 2005-07-25 would be stored in /backup/rsync/asylum/_home_asylum.2005-07-25.17-47-42. When the backup is done I create a symlink from that directory to /backup/rsync/asylum/_home_asylum.current to make it easier to find, especially from scripts. As the backup is running I replace the date and time in the filename with "incomplete" so that if a backup is aborted or fails it does not appear to be a complete one and does not count against the number of old backups to keep.
Rsync does the incremental backups using "hard links" and the --link-dest parameter. However, it has no mechanism for purging old backups when they reach a predefined age. The purging can be done with a simple rm -rf of the oldest backup(s) as needed. However, the deletion of a large directory tree can take a significant amount of time and probably isn't something you want to waste time on during your backup window. Therefore, instead of doing an rm -rf I suggest just doing an mv to a deletion pool directory within the same file system. This will allow you to easily do all of the deletions later, after the backups have finished.
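The naming scheme described above can be sketched in shell; the host and mount point here are just examples:

```shell
# Build the backup directory name from a host, a mount point,
# and a timestamp: '/' becomes '_' and the date is appended.
host=asylum
fs=/home/asylum
stamp=$(date +%Y-%m-%d.%H-%M-%S)
name="$(echo "$fs" | tr / _).$stamp"
echo "/backup/rsync/$host/$name"
```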
Here is how the organization with the hard links looks:
You can determine the current backup with:
# readlink _home_asylum.current
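The .current symlink mentioned in the organization section can be updated after each successful backup along these lines (the backup name is illustrative):

```shell
# Point the .current name at the newest backup.  -n keeps ln from
# descending into an existing symlink and -f replaces it.
ln -nsf _home_asylum.2005-07-25.17-47-42 _home_asylum.current
readlink _home_asylum.current
```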
Here is an example of 10 backups of my home directory:
# du -shc _home_asylum.2*
# foreach f (_home_asylum.2*)
foreach? du -sh $f
foreach? end
Note that each backup individually is complete, but taken together they add only a small amount of disk usage. This concept is the key to rsync incremental backups.
To purge old backups, simply count how many there are, and if there are too many just move the oldest one to your deletion pool; repeat until there are no longer too many old backups.
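That count-and-move purge can be sketched as follows; the repository layout and retention count here are a throw-away example:

```shell
# Toy repository: five dated backups, keep the newest three.
mkdir -p repo/deleted
cd repo
for d in 2005-07-01 2005-07-05 2005-07-13 2005-07-18 2005-07-25; do
    mkdir "_home_asylum.$d.12-00-00"
done

keep=3
# The timestamped names sort chronologically, so everything except
# the last $keep entries is old enough to move to the deletion pool.
for old in $(ls -d _home_asylum.2* | head -n -$keep); do
    mv "$old" deleted/
done
```

Because the deletion pool is on the same file system, each mv is instant; the slow rm -rf of the pool can run after the backup window.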
Actually backing up
Now we get to actually look at rsync. When you run rsync you tell it to back up the live file system into a new empty directory and to look to the previous backup for files that have already been backed up. Whenever rsync finds a new file it copies that file over. Whenever it finds a modified file it copies over the differences, creating a new file in the new backup directory while leaving the old version as it was in the old backup directory. When rsync finds a file that has not changed since the last backup, it is simply hard linked into the new backup directory, requiring almost no additional disk space. There is a wide variety of options that can be used to tailor rsync to your specific needs, but here is what my system uses by default:
# rsync --archive --one-file-system --hard-links \
--human-readable --inplace --numeric-ids --delete \
--delete-excluded --exclude-from=excludes.txt \
--link-dest=/backup/rsync/asylum/_home_asylum.2005-07-25.15-32-42 \
asylum:/home/asylum/ \
/backup/rsync/asylum/_home_asylum.incomplete/
I also add --verbose --progress --itemize-changes when I am watching the backup run instead of using it from a cron job.
Now I will explain the components of that rather long command...
- rsync: Duh, the rsync command ;)
- --archive: This causes rsync to backup (they call it "preserve") things like file permissions, ownerships, and timestamps.
- --one-file-system: This causes rsync to NOT recurse into other file systems. If you use this like I do then you must back up each file system (mount point) one at a time. The alternative is to simply back up / and exclude things you don't want backed up (like /proc, /sys, /tmp, and any network or removable media mounts).
- --hard-links: This causes rsync to maintain hard links that are on the server being backed up. This has nothing to do with the hard links used during the rotation.
- --human-readable: This tells rsync to output numbers of bytes with K, M, G, or T suffixes instead of just long strings of digits.
- --inplace: This tells rsync to update files on the target at the block level instead of building a temporary replacement file. It is a significant performance improvement, however it should not be used for things other than backups, or if your version of rsync is old enough that --inplace is incompatible with --link-dest.
- --numeric-ids: This tells rsync not to attempt to translate UID <> userid or GID <> groupid. This is very important when doing backups and restores. If you are doing a restore from a live CD such as SystemRescueCD or Knoppix, your file ownerships will be completely screwed up if you leave this out.
- --delete: This tells rsync to delete files from the backup that are no longer on the server. This is less important when using --link-dest, because you should be backing up to an empty directory so there would be nothing to delete; however, I include it because of the possibility that the *.incomplete directory I am backing up to is actually left over from a previous failed run and may have things to delete.
- --delete-excluded: This tells rsync that it can delete stuff from a previous backup that is now within the excluded list.
- --exclude-from=excludes.txt: This is a plain text file with a list of paths that I do not want backed up. The format of the file is simply one path per line. I tend to add things that will always be changing but are unimportant, such as log and temp files. If you have a ~/.gvfs entry you should add it too, as it will cause a non-fatal error.
- --link-dest=/backup/rsync/asylum/_home_asylum.2005-07-25.15-32-42: This is the most recent complete backup that was current when we started. We are telling rsync to link to this backup for any files that have not changed.
- asylum:: This is the host name that rsync will ssh to.
- /home/asylum/: This is the path on the server that is to be backed up. Note that the trailing slash IS significant.
- /backup/rsync/asylum/_home_asylum.incomplete/: This is the empty directory we are going to backup to. It should be created with mkdir -p first. If the directory exists from a previous failed or aborted backup it will simply be completed. This trailing slash is not significant but I prefer to have it.
- --verbose: This causes rsync to list each file that it touches.
- --progress: This adds to the verbosity and tells rsync to print a % completion and transfer speed while transferring each file.
- --itemize-changes: This adds to the file list a string of characters that explains why rsync believes each file needs to be touched. See the man page for the explanation of the characters.
Recovering files from backups
Because rsync doesn't put the backed up files into any kind of archive, this is as simple as copying a file. Just find the file you need on the backup server and copy it to where you need it to be. If you are restoring it to another server just use rsync or scp to get it there. Here are 2 examples of files that can be restored from my home directory:
# ls -li _home_asylum.2*/kmk/bin/encode
3605946 5 kmk users 2223 Jul 2 11:34 _home_asylum.2005-07-05.11-05-34/kmk/bin/encode
3605946 5 kmk users 2223 Jul 2 11:34 _home_asylum.2005-07-07.13-43-22/kmk/bin/encode
3605946 5 kmk users 2223 Jul 2 11:34 _home_asylum.2005-07-07.17-22-09/kmk/bin/encode
3605946 5 kmk users 2223 Jul 2 11:34 _home_asylum.2005-07-13.11-14-32/kmk/bin/encode
3605946 5 kmk users 2223 Jul 2 11:34 _home_asylum.2005-07-18.16-32-54/kmk/bin/encode
4853134 1 kmk users 4012 Jul 21 19:31 _home_asylum.2005-07-25.15-32-42/kmk/bin/encode
# ls -li _home_asylum.2*/kmk/bin/mp3db
4074469 1 kmk users 29598 Jun 19 16:01 _home_asylum.2005-06-21.15-29-25/kmk/bin/mp3db
4082467 1 kmk users 29943 Jun 22 19:10 _home_asylum.2005-06-22.20-12-01/kmk/bin/mp3db
4124342 1 kmk users 30570 Jun 30 17:22 _home_asylum.2005-06-30.18-36-21/kmk/bin/mp3db
2617551 1 kmk users 30701 Jul 1 12:17 _home_asylum.2005-07-01.12-15-05/kmk/bin/mp3db
3605948 1 kmk users 35604 Jul 1 16:50 _home_asylum.2005-07-05.11-05-34/kmk/bin/mp3db
4411207 2 kmk users 35668 Jul 6 11:06 _home_asylum.2005-07-07.13-43-22/kmk/bin/mp3db
4411207 2 kmk users 35668 Jul 6 11:06 _home_asylum.2005-07-07.17-22-09/kmk/bin/mp3db
4523360 1 kmk users 37041 Jul 9 17:28 _home_asylum.2005-07-13.11-14-32/kmk/bin/mp3db
4675812 1 kmk users 37201 Jul 18 09:50 _home_asylum.2005-07-18.16-32-54/kmk/bin/mp3db
4853138 1 kmk users 37200 Jul 19 16:46 _home_asylum.2005-07-25.15-32-42/kmk/bin/mp3db
As you can see my encode script has been fairly constant while my mp3db script has changed almost every time I have run a backup. I can choose to restore whichever version I want as they are all just plain files.
Recovering entire file systems from backups
This is a simple reverse of the backup procedure. Just format the new file system and rsync the files back to it, making sure you use the same rsync options, especially --archive and --numeric-ids.
Recovering entire systems from backups
This is where things get a little ugly. Of course this is for times that are already ugly because you probably just lost your boot drive and have a brand new one installed that is completely blank. This procedure varies a bit depending on what OS you are restoring but here is the general idea:
- Boot from some media that gives you an OS, networking, rsync, and ssh. SystemRescueCD, Knoppix, or most other live distributions can do the job for Linux systems. In the case of OpenBSD I boot their install disc and then use ftp to transfer a tarball of the rsync backup instead of using rsync. The same thing will work in Solaris, although it is usually easier to NFS mount the backup repository.
- Partition the new drive with fdisk or whatever you usually use. If you follow my advice in the advanced section you will have an .sfdisk file and you can duplicate the original partition table with 'sfdisk /dev/whatever < file.sfdisk'.
- Format the new partitions. Linux choices are mke2fs, mkfs.ext4, mkfs.xfs, and mkswap. For most other operating systems it is simply newfs.
- Mount up the new partitions in a convenient location with something like:
# mkdir /s
# mount -vt [fstype] /dev/[root partition] /s
# mkdir /s/usr /s/var /s/proc /s/dev /s/tmp
# chmod 1777 /s/tmp
# mount -vt [fstype] /dev/[var partition] /s/var
# mount -vt [fstype] /dev/[usr partition] /s/usr
- Now run your file system level restores just like you would if you weren't recovering the entire system. You will need to restore each file system that was on the old boot disk.
- If you have made any changes such as device names, mount points, or partition layouts you should now update /s/etc/fstab and /s/boot/grub/menu.lst.
- Fix up /dev if needed
# cp -av /dev/console /dev/null /s/dev/
- Now you have to make the disk bootable again. This totally varies by operating system and boot loader...
For Linux systems using grub:
# grub-install --root-directory=/s /dev/sda
Or, if that doesn't work, use the grub shell:
# grub
grub> root (hd0,0)    # or whatever partition matches your boot disk
grub> setup (hd0)
For Linux systems using lilo (why are you still using lilo?):
# mount -vo bind /dev /s/dev
# mount -vo bind /proc /s/proc
# chroot /s /bin/bash
# lilo -v
For OpenBSD systems:
# cd /s/usr/mdec
# ./installboot /s/boot ./biosboot /dev/rwd0c (or /dev/rsd0c if using SCSI)
For Solaris systems:
# installboot /s/usr/platform/`uname -i`/lib/fs/ufs/bootblk /dev/rdsk/c0t0d0s0
- System Cloning: Once you have an rsync backup system and restore procedure in place it is easy to use your restore procedure as a method of cloning existing OS installs onto new systems. This has the added benefit of forcing you to practice your restore procedure so it will be well known and tested when the day comes that you need to restore something.
- Format of backup repository: Assuming you are using a Linux box as your backup server you have multiple choices for the file system type that you want to format the backup drive with. I generally use ext4 because it is the fastest well established file system available currently and it does a good job as long as there aren't too many files for fsck to handle in a reasonable amount of time. However, XFS is also a good choice because it is better at dealing with large files and it is much better at doing the delete portion of the backup rotation. XFS also eliminates the need for the periodic off-line fsck which may make it your only choice if you have many millions of files to deal with. You may want to play with these 2 choices a bit before you make your final decision. I do not recommend using JFS as it has horrible performance or reiserfs as it has horrible reliability.
- ZFS: If you have many millions of files to deal with you may discover that this system simply takes too long to delete old backups and if you ever need an fsck you may even be down for days waiting for it to finish. ZFS on OpenSolaris is the answer to your prayers. ZFS can handle multiple LVM-like snapshots. The benefit here is that you can run rsync backups without the --link-dest parameter and simply overwrite the previous backup each run. Then you use the ZFS snapshots to retain the old backups. Each old backup becomes a snapshot mount. The snapshots are created and deleted in less than a second removing the need for the long rm operations to purge old backups and allowing rsync to just sync files without bothering to create hard links. Hopefully soon btrfs will give us this capability in Linux but until then ZFS on OpenSolaris is the thing that completes large scale rsync backups.
- RAIDed backup repository: This is a somewhat interesting topic. There are many opinions out there about whether or not a backup repository should be made redundant using RAID. For many people (including me for personal use) the backups are an additional redundant copy of something that is already stored on a redundant RAID array and therefore the backups do not need to be redundant. That is why my personal backups are on a RAID-0 volume for pure speed and large capacity. For others (including me for professional use) the most important part of the backups is the old backups which contain data that no longer exists on the other systems. This means that the backups should be on redundant storage here. At work I use either RAID-1 or RAID-10 depending on the size of the backup system. It is of course also possible to use RAID-5 but depending on your hardware you may not like the performance. If you do use RAID other than RAID-1 you should set the stripe size fairly small as most of the work for rsync backups is done at the file system metadata level not the large file level.
- Cross-platform handling of /dev and other device files: Since different operating systems handle major and minor numbers differently I suggest excluding /dev from the rsync backups. I keep a /dev.tar tarball on all of my boxes with a backup of /dev in it just in case I ever need to restore that. The tarball will be very small since there are no actual data in it. Note that this is completely unimportant on Linux systems that use udev for /dev.
- What is different between 2 backups: I wrote a perl script that scans 2 backups of the same directory and lists what has changed between them. I have published that script at http://www.sanitarium.net/unix_stuff/Kevin%27s%20Rsync%20Backups/diff_backup.pl.txt
- Storing data that isn't kept in a file: I wrote a perl script that does backups of data that isn't stored in files such as partition tables. My main backup script runs this "getinfo" script at the start of a backup if it detects an infotab file telling it what to backup. The script is published at http://www.sanitarium.net/unix_stuff/Kevin%27s%20Rsync%20Backups/getinfo.pl.txt. I also have an example of its tab file format published at http://www.sanitarium.net/unix_stuff/Kevin%27s%20Rsync%20Backups/asylum/infotab
- rsync --dry-run: This is rsync's testing mode. You can use this on any other rsync command to have rsync tell you what it would have done without actually doing anything.
- rsync --whole-file: This tells rsync to transfer entire files instead of using its block level comparison system. If you have a nice fast link (like a LAN) this can make things faster since rsync doesn't have to checksum files at all but if you are transferring across the internet you don't want this.
- rsync --checksum: Normally rsync compares the timestamp and the size of a file to determine if it has changed since the last backup. If you use --checksum rsync will ignore the time stamps and checksum any files that are the same size to determine if they are different. Obviously this adds a significant slowdown to the backup process. You wouldn't normally use this option however it is good to have if you believe your backup data has become corrupted in a way that doesn't affect the information you see in an ls -l output.
- rsync --size-only: This is the opposite extreme from --checksum. It tells rsync to consider files unchanged whenever their sizes match, ignoring timestamps entirely. It is useful when the timestamps on one side are unreliable, for example after seeding the target with a tool that didn't preserve modification times.
- rsync --sparse: This tells rsync to turn files with large chunks of null characters into sparse files as it transfers them. This is common for things like virtual machine images that have free space inside of them. Without this option such files will be larger on the backup than on the source.
- rsync --delete-*: There are several options that control when rsync does the deletion process. Normally you would just use --delete and let rsync pick the fastest method available in your version, but there are times when you want to force it to behave differently. As of version 3.0.6, --delete-during is the default for --delete. See the man page for more information.
- rsync --temp-dir: If you have a tmpfs mount you can get a very small speed boost by using this parameter. It causes the partial files used during the delta transfers to be stored in an alternate (faster) location until the file is complete. This will only help if you are doing delta transfers and if the directory you specify is on a tmpfs mount. Note that your tmpfs mount must be big enough to hold any single file or rsync will fail with an insufficient disk space error. Also, if your tmpfs mount goes into swap you will completely kill your performance. In other words, don't use this unless you are sure it is going to help. Also, note that the --inplace parameter is even better than this.
- rsync --bwlimit: Allows you to limit how much bandwidth rsync uses in its network communications. It is measured in KB per second.
- rsync --ignore-errors: This overrides one of rsync's built-in safety features. Normally, if there is a problem during the backup, rsync will NOT run its delete pass. If you use --ignore-errors the delete pass will run regardless of any other errors. This isn't as dangerous as it sounds: rsync with --link-dest should be operating on an empty directory with nothing to delete anyway, and even if it does delete something it would not delete from the previous backup directories.
- rsync --max-delete: This allows you to re-implement the safety feature above with a threshold. You can tell rsync how many files it can delete before it decides that something must be wrong and stops.
- rsync --compress: This tells rsync to use zlib compression on its communications. This is good if you are backing up over the internet but usually counterproductive on a LAN. You can also do compression at the ssh level, but rsync's is a little more efficient. Of course you should not do both.
- rsync --acls: This tells rsync to transfer ACLs in addition to permissions. Note that this is a compile-time option.
- rsync --xattrs: This tells rsync to transfer extended attributes (the ones you set with setfattr) in addition to permissions. Note that this is a compile-time option.
- push instead of pull: Rsync can push data just as well as it can pull it. It is possible to have all servers push their backups to the backup server instead of the backup server pulling the data from them. I personally don't like this approach for two reasons: it means that all of your servers have a key to your backup server instead of the other way around, and you have to engineer a much more complicated way of doing the rotations while making sure you don't have 20 servers trying to back themselves up at the same time, which would flood the backup server.
- Buddy backups: If you don't want to dedicate a box to running backups you could pair off your boxes and have them back up each other. You could also do this in a ring layout.
- LVM Snapshots: It is often wise to use LVM to take an instantaneous snapshot of a file system and then back up that snapshot. This removes any chance of the file system changing during the backup. As mentioned before, this is the preferred method of backing up a database, but it is also good for things like email servers.
- Squashfs for archives: If you want to make a permanent archive of a particular backup (perhaps to burn it) squashfs is a great way to do it. Squashfs creates a compressed mountable archive of a directory tree. You create a squashfs archive with mksquashfs which works much like mkisofs and then you can mount the resulting file as a loopback device.
- FAT: I do not recommend backing up to a FAT file system using rsync; however, rsync is perfectly capable of backing up a FAT file system. There are issues with how FAT stores timestamps. FAT can only store timestamps with a 2-second resolution; the easy fix for that is to use rsync with --modify-window=2. FAT also handles daylight saving time changes differently: when the time changes, FAT file systems will appear to be 1 hour off. The easiest solution for that is to use rsync with --modify-window=3602.
- Sudo: It is possible to use rsync under sudo, even on both ends. I personally believe that this is ugly, insecure, and an abuse of the sudo system, but it can be done. Run rsync under sudo, add --rsync-path='/usr/bin/sudo /usr/bin/rsync', and configure sudo on the remote end to not prompt for a password when that user runs rsync --server.