Question about a MySQL backup script

Nalco

New Member
I am looking for a way to back up a couple of databases automatically on a daily basis. I came across this script, which looks decent:

http://www.mysqldumper.de/en/

I was wondering if anyone has any experience with it. If not, does anyone know of another script you like to use for this purpose?


Thanks!
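For reference, the core of most such scripts is just mysqldump driven by cron. A minimal sketch (this is not MySQLDumper itself; the database names and backup path are placeholders, and credentials are assumed to come from ~/.my.cnf rather than the command line):

```shell
#!/bin/sh
# Nightly dump sketch: write each database to a dated, compressed file.
BACKUP_DIR=/var/backups/mysql        # assumed path
DATE=$(date +%Y-%m-%d)
mkdir -p "$BACKUP_DIR"
for DB in forum_db shop_db; do       # placeholder database names
    mysqldump --single-transaction "$DB" | gzip > "$BACKUP_DIR/$DB-$DATE.sql.gz"
done
```

Dropped into cron, e.g. `30 3 * * * /usr/local/bin/mysql-backup.sh`, this runs unattended every night.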
 
Thanks for the replies!

I am going to try that script, ppc. At first I was leery, as I thought I would have to install PEAR; luckily, PEAR is already installed on my server.

Thanks!!
 
I would rather not use mysqldump; in my experience it sometimes needs too many resources. Even when it doesn't, creating one huge file to back up/download every time is something I prefer not to do.
 
So you want an incremental database backup? That would be a lot more effort than it's worth.
 
True, that's why I am looking at mysqlhotcopy with rsync, but first I have to make sure it works and then set up the proper rsync command and cron job.

I believe these reasons make it worthwhile:

1. One big advantage is that it creates a backup by hot-copying all database files, with exactly the same file structure as your database. In the event of serious database corruption, you merely have to overwrite the original database directory with the backup directory.
2. mysqlhotcopy runs very quickly, even on large forums.
3. When you have to restore from a backup, you don't have to spend hours re-creating tables from a huge dump file.
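The approach above can be sketched in a few lines; the staging path, database name, and offsite host are placeholders, and note that mysqlhotcopy locks the tables while copying and only works for MyISAM and ARCHIVE tables:

```shell
#!/bin/sh
# Sketch: hot-copy the raw table files locally, then push only changed
# files to an offsite host with rsync.
STAGE=/var/backups/mysql-hotcopy            # assumed local staging dir
mkdir -p "$STAGE"
mysqlhotcopy --allowold forum_db "$STAGE"   # placeholder database name
rsync -az --delete "$STAGE/" backup@offsite.example.com:/backups/mysql/
```

Because rsync only transfers files that differ, unchanged tables cost almost nothing on subsequent runs.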
 
So instead of a huge dump file you'll have an even larger zipped directory. I see the advantages of the methodology, but you're not going to avoid a large download.
 
But how about setting up rsync to an offsite location, so only changed tables (files) are transferred/downloaded? Shouldn't that at least cut down on size a bit, or even a lot if some individual tables haven't changed?
That is my understanding; obviously I'm not much of a Unix expert. Any thoughts?
 
I'm assuming you'd be relying on file modification dates to tell you what's changed and what hasn't, and unfortunately they're not the most reliable things in the world. It'd probably work for the most part, but if it's truly important data, I'd rather have it all copied every time myself.
 