Data backup is a critical part of an organization’s overall disaster recovery plan. The concept of data backup is simple: you make copies of your data and store them in a different location in case data is lost or destroyed.
Implementing a data backup plan, however, is not as easy. There are multiple factors to consider depending on your organization's needs. Here are some key questions to ask:
- What are your total costs per hour of operation?
- What is your "opportunity cost"? How much revenue do you bring in per hour?
- What type of data do you primarily produce? Is it mostly file-based (office documents, images, publications, audio, video), or more data-driven (email, CRM data, web data)?
- Where does your data reside?
- What is your recovery time objective (RTO)?
- What is your recovery point objective (RPO)?
Answering these six simple questions can help you determine which type of data backup strategy best suits your organization.
There are a ton of backup solutions on the market, which is fantastic. However, there are really only three main methodologies of data protection employed today:
Image Backups
Think of this as a picture of the entire dataset at a moment in time. This is useful in the event of a catastrophic failure, like a fire, a server crash, or some other major event that requires rebuilding the infrastructure. This method offers quick recovery but can be more expensive to deploy.
File and Folder Backups
This is what people normally think of when they hear "backup": the one-by-one copying of files from their original location to another location. It is the most common way to do backups and is good for scenarios where files are lost, deleted, or infected with a virus, or where you otherwise just need to retrieve a copy.
This method is often the least expensive, but recovery is time-consuming if a lot of data needs to be restored. However, it is very efficient for restoring small sets of files, which is the most common type of recovery needed.
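As a minimal sketch of the file-and-folder approach (the function name and paths here are hypothetical, not from any particular backup product), copying files one by one to a second location can look like this:

```python
import shutil
from pathlib import Path

def backup_files(source: str, destination: str) -> list[str]:
    """Copy files from source to destination, skipping files whose
    backup copy is already up to date. Returns the copied file paths."""
    src, dst = Path(source), Path(destination)
    copied = []
    for file in src.rglob("*"):
        if not file.is_file():
            continue
        target = dst / file.relative_to(src)
        # Only copy if the backup is missing or older than the original.
        if not target.exists() or target.stat().st_mtime < file.stat().st_mtime:
            target.parent.mkdir(parents=True, exist_ok=True)
            shutil.copy2(file, target)  # copy2 preserves timestamps
            copied.append(str(file.relative_to(src)))
    return copied
```

Run on a schedule, this gives you exactly the retrieval scenario described above: a second copy of each file you can pull back individually.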
Full Replication
This is a more complex combination of the previous two methods, where the entire dataset, server operating systems and all, is copied to multiple locations in a way that ensures full redundancy of the data and your access to it. It is certainly the most expensive method, but the best for ensuring maximum business continuity.
Every solution on the market is some version, or combination, of these three main methods.
To determine the best strategy for your business or organization, you need to answer those first six questions as accurately as possible.
Just an Example
Let’s say you’re a small organization of 10 employees. Your organization brings in $150,000 a month and your costs are $100,000 a month. Your team works an average week in terms of hours, and most of your work is done in spreadsheets and word documents. You have a “machine” in your office where you store the bulk of your data. That device is shared, but everyone still stores the files they work on regularly in their My Documents folder.
Sound like you?
OK, now that we've answered the questions from above, let's do some basic math:
(Monthly Costs/Average hours per month) + (Monthly Revenue/Average hours per month) = Hourly Downtime Cost
The above scenario would look like this: ($100,000 / 180) + ($150,000 / 180) ≈ $1,389 per hour.
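Expressed in code (the function name is hypothetical; the figures come from the example scenario above), the calculation is:

```python
def hourly_downtime_cost(monthly_costs: float, monthly_revenue: float,
                         hours_per_month: float) -> float:
    """Estimate what one hour of downtime costs: the expenses you still
    pay plus the revenue you can no longer earn."""
    return (monthly_costs / hours_per_month) + (monthly_revenue / hours_per_month)

# The 10-person shop from the example: $100,000 in costs, $150,000 in
# revenue, and roughly 180 working hours in a month.
cost = hourly_downtime_cost(100_000, 150_000, 180)
print(f"${cost:,.0f} per hour")  # roughly $1,389 per hour
```

Swap in your own numbers from the six questions to get a figure for your organization.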
This is the hourly dollar cost of a disaster. And while going without a backup is a gamble (to say the least), the consequences can be even more severe than a dollar amount.
Hardware can be replaced. Software can be replaced. Data cannot, not without some form of backup and disaster recovery/business continuity strategy. Businesses have literally closed, or been severely hampered, because they put no focus or investment into business continuity, and no one wants to redo something they spent all day on. This is one part of your IT budget on which you want to spare no expense.
Project Manager, ITonDemand