

Incremental MySQL Backup




Posted by WishIwasntH1, 04-14-2009, 09:26 AM
Does anyone have experience backing up MySQL incrementally? From what I've read in the docs it's possible using the binary logs, but I haven't been able to find a good resource on how to make it work. I have a database that is over 5GB. There are a few MyISAM tables that are insert/select only and one InnoDB table that receives updates/deletes/selects/inserts. Ideally I wouldn't have to back up the whole 5GB every night; I'd prefer to grab only the items that have changed. If I could make this work, I could also take backups more often than once a night.
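
A rough sketch of the binary-log approach being asked about here, for reference. It assumes binary logging is already enabled (log-bin in my.cnf), that a full dump was taken at some point as the baseline to replay against, and that the paths, datadir layout, and PASSWORD below are placeholders for your own setup:

    # incremental_binlog_backup.py -- copy closed binary logs to a backup dir.
    import os
    import shutil
    import subprocess

    DATADIR = "/var/lib/mysql"                       # assumed datadir
    BINLOG_INDEX = os.path.join(DATADIR, "mysql-bin.index")
    BACKUP_DIR = "/backup/mysql-binlogs"

    # Rotate to a fresh binlog so the current one is closed and safe to copy.
    subprocess.check_call(["mysqladmin", "-u", "root", "-pPASSWORD", "flush-logs"])

    with open(BINLOG_INDEX) as f:
        logs = [line.strip() for line in f if line.strip()]

    # Copy every closed log; the last entry is the newly created, active one.
    for path in logs[:-1]:
        src = path if os.path.isabs(path) else os.path.join(DATADIR, path)
        dest = os.path.join(BACKUP_DIR, os.path.basename(src))
        if not os.path.exists(dest):
            shutil.copy2(src, dest)

To restore, you'd load the baseline dump and then replay the copied logs with mysqlbinlog piped into mysql; logs that are safely off the box can be pruned with PURGE BINARY LOGS.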

Posted by vpsville, 04-14-2009, 09:51 AM
If you are using MyISAM table types, an rsync of the MySQL data directory should work fine. That will give you incremental backups, which will be a lot faster than copying 5GB every night! If you are using InnoDB tables you'll have to use a tool built for the job, and then possibly rsync its output.
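
The usual shape of that rsync approach, sketched in Python; it assumes the MySQLdb module is installed, and the datadir, remote target, and PASSWORD are placeholders. The read lock keeps the .MYD/.MYI files consistent on disk while rsync runs:

    # rsync_myisam_backup.py -- consistent rsync of a (mostly MyISAM) datadir.
    import subprocess
    import MySQLdb

    conn = MySQLdb.connect(user="root", passwd="PASSWORD")
    cur = conn.cursor()

    # Close and lock all tables so nothing changes under rsync.
    cur.execute("FLUSH TABLES WITH READ LOCK")
    try:
        # rsync only transfers changed data, so nightly runs stay small.
        subprocess.check_call([
            "rsync", "-a", "--delete",
            "/var/lib/mysql/", "backuphost:/backup/mysql/",  # assumes ssh keys
        ])
    finally:
        cur.execute("UNLOCK TABLES")
        conn.close()

To shrink the lock window, you can run the same rsync once without the lock first, then repeat it under the lock so only the last few minutes of changes are copied while tables are locked.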

Posted by WishIwasntH1, 04-14-2009, 10:15 AM
Thanks! Any specific ideas for tools that work on InnoDB?

Posted by vpsville, 04-14-2009, 12:21 PM
It's complex: because InnoDB keeps transactional (ACID) data around for rollback, you can't just snapshot the files like you can with plain MyISAM tables. I think there are some tools on the MySQL site for making 'hot' backups ('cold' would mean shutting down the DB server).
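
A 'cold' backup really is as blunt as it sounds; roughly this, assuming an init script at /etc/init.d/mysql and the stock /var/lib/mysql datadir:

    # cold_backup.py -- stop mysqld, copy the datadir, restart.
    import subprocess

    subprocess.check_call(["/etc/init.d/mysql", "stop"])
    try:
        # With the server down, the InnoDB tablespace (ibdata*) and
        # ib_logfile* files are guaranteed consistent on disk.
        subprocess.check_call(
            ["rsync", "-a", "/var/lib/mysql/", "/backup/mysql-cold/"])
    finally:
        subprocess.check_call(["/etc/init.d/mysql", "start"])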

Posted by mchristen85, 04-14-2009, 12:43 PM
I prefer using a master-slave replication setup for InnoDB-based databases and doing my backups on the slave server. It's easy to get a 'hot' backup by doing a 'cold' backup of the slave server. Even if the master server fails during the backup, once the slave finishes backing up it will play back the logs (it was still receiving them, just not applying them) and you lose no data.
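
In script form that slave-side backup might look roughly like this (assuming MySQLdb and mysqldump are available; credentials are placeholders). Stopping only the SQL thread is what gives the "still receiving them, just not applying them" behaviour: the I/O thread keeps pulling the master's binlog into the relay log the whole time:

    # slave_backup.py -- pause replication apply, dump, resume.
    import subprocess
    import MySQLdb

    conn = MySQLdb.connect(user="root", passwd="PASSWORD")
    cur = conn.cursor()

    cur.execute("STOP SLAVE SQL_THREAD")  # events keep queuing in the relay log
    try:
        with open("/backup/slave-dump.sql", "w") as out:
            subprocess.check_call(
                ["mysqldump", "-u", "root", "-pPASSWORD", "--all-databases"],
                stdout=out)
    finally:
        cur.execute("START SLAVE SQL_THREAD")  # replay the queued relay log
        conn.close()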

Posted by WishIwasntH1, 04-14-2009, 02:29 PM
Awesome, thanks guys. The master/slave solution is the one I was looking at, but it still means I have to download a multi-gig file for every backup instead of pulling just a few MB at a time. The other thing I was considering: once the master/slave is set up, I could have a script export only the rows that have changed (since most of the tables have timestamps on each row), then back up those files and combine them on the backup server. That does seem like a few extra (possibly error-prone) steps, so if there's a tool that automates it better, that'd be much easier!
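
For what it's worth, that timestamp idea could be sketched like this; the table names, updated_at column, paths, and credentials are all hypothetical. Note that this scheme never sees deletes, so the InnoDB table that receives deletes would still need a replication-based backup:

    # timestamp_export.py -- export only rows changed since the last run.
    import datetime
    import MySQLdb

    TABLES = ["orders", "events"]          # hypothetical table names
    STATE_FILE = "/backup/last_run.txt"    # holds the previous run's cutoff

    with open(STATE_FILE) as f:
        last_run = f.read().strip()        # e.g. "2009-04-13 03:00:00"
    now = datetime.datetime.now().strftime("%Y-%m-%d %H:%M:%S")

    conn = MySQLdb.connect(user="root", passwd="PASSWORD", db="mydb")
    cur = conn.cursor()

    for table in TABLES:
        cur.execute(
            "SELECT * FROM %s WHERE updated_at >= %%s AND updated_at < %%s" % table,
            (last_run, now))
        stamp = now.replace(" ", "_").replace(":", "-")
        with open("/backup/%s-%s.tsv" % (table, stamp), "w") as out:
            for row in cur.fetchall():
                out.write("\t".join(str(col) for col in row) + "\n")

    # Only advance the cutoff once every table exported cleanly.
    with open(STATE_FILE, "w") as f:
        f.write(now)
    conn.close()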

Posted by mchristen85, 04-14-2009, 02:33 PM
Just host your slave server locally if you're worried about that. You can have multiple slaves per master too, so you can host one slave externally for the reliability of always having replication, but also replicate locally and do your backups there. I run a master-master cluster to sync up data centers and have a slave server local in our office that runs our backups to tape. What you can do really depends on how your current setup is designed. Our databases are rather small, so our backups only take a few minutes from start to finish before they're ready for the tape.


