Simon Willison put together an excellent short-and-simple backup script over a year ago now, and I've used it intermittently to make backups. It takes files, whole directories, or the output of shell commands, and wraps it all up into a datestamped gzip file, before sending the whole thing up to an Amazon S3 account. It's a nice little piece of kit, in other words.
I found that it worked mostly fine, although it was (a) fiddly to run as a cron job and (b) threw occasional errors that I wanted to fix. These two things together meant I wanted to do some proper development on the script and keep track of it.
I wrote a wrapper to help with setting up cronjobs, which means the script can be driven by config files in the user's home directory. I then decided on a whim to commit all of this to github, which is the online home for a community that's been set up around the git versioning tool. As befits a DVCS tool, git has spawned a community which is multithreaded and agile and most of all very friendly, with all free repositories available to the public and everyone able to fork off everyone else.
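To give a flavour of the idea, here's a minimal sketch of such a wrapper. The config filename (`~/.backup_to_s3.cfg`), its section and option names, and the command-line flags passed to `backup_to_s3.py` are all illustrative assumptions, not the actual format used by the repository.

```python
# Hypothetical sketch: read per-user backup jobs from a config file in the
# home directory and invoke backup_to_s3.py once per job. All names and
# flags here are assumptions for illustration only.
import configparser
import os
import subprocess

CONFIG_PATH = os.path.expanduser("~/.backup_to_s3.cfg")


def load_jobs(path=CONFIG_PATH):
    """Return a list of (name, paths, bucket) tuples, one per config section."""
    parser = configparser.ConfigParser()
    parser.read(path)
    jobs = []
    for section in parser.sections():
        jobs.append((
            section,
            parser.get(section, "paths").split(),  # whitespace-separated paths
            parser.get(section, "bucket"),
        ))
    return jobs


def run_jobs(jobs):
    # Each job becomes one invocation of the original script, so a single
    # crontab line (e.g. "@daily backup_wrapper.py") covers every job the
    # user has configured, instead of one fiddly crontab entry per backup.
    for name, paths, bucket in jobs:
        subprocess.call(["backup_to_s3.py", "--bucket", bucket] + paths)
```

A config file for this sketch might contain a `[photos]` section with `paths` and `bucket` options; adding another backup then means adding another section rather than editing the crontab.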
The repository was still a bit empty, as out of respect for the original coder I didn't commit backup_to_s3.py with everything else. Then, very kindly, Simon agreed to let me bundle his script with everything else, so you can now check out (clone) the "Configured S3 Backups" repository and follow the README to get scripted backups up and running fairly speedily.
I'd love for people to give it a try and offer some feedback. I definitely want to squash bugs and would also welcome new functionality. But the joy of git and github is that if I don't want to implement any extras, then other programmers can go ahead and fork the code to add them themselves.