It's that time of year again; time for the annual "have you done your disaster recovery planning" post.
"Disaster recovery" and "disaster planning" are terms that have fallen out of favor in my lexicon, actually, because of a tendency I have noticed among small business owners to discount the concepts because, unless they have themselves recently been victims of one, they envision "disasters" as rare occurrences that mostly happen to other people. As far as it goes, this may well be true. What is not true is that the contingency planning that is produced in the process of formulating a DR response is otherwise worthless.
This year, at least in the Pacific Northwest, it's easier to convey that reality than in times past. A spate of windstorms screaming through the region, a few days of sub-freezing temperatures and unexpectedly heavy snowfall, and the chronic inability of local governments and utilities to maintain essential services have left a lot of businesses shut down or operating at lower capacity than normal during the already shortened Thanksgiving week.
These things are not disasters as most people would classify them, but they certainly have impacted bottom lines. Electricity has gone out, staff cannot get in to work (or home again, if they did make it in!), freezing and flooding have trashed office spaces and equipment. If these are adverse conditions outside your control that impact your business, then they are things you should have a plan in place to deal with.
This brings up the dirty little secret of disaster recovery planning, which is that it does not need to be as big a project as a lot of consultants will tell you. Some very basic steps can be taken that will cover ninety percent of the uncontrollable impacts your business might suffer, and they are not especially costly to implement. Fancy things like disaster hot-sites, live backups, and duplicate data centers all have their places, but they aren't necessary for everyone and they don't represent your most effective solutions to the most common problems.
The things I am going to tell you to do to prepare for problems are not going to look, then, like typical disaster recovery preparation. Instead, they are different ways of running your business from day to day that also just happen to increase your viability in disruptive scenarios.
Don't run your own server, or if you do, don't run it on-site
Servers in small businesses are slowly decreasing as people realize there are better alternatives, but consultants and integrators still push them hard, because they are good guaranteed long-term revenue. Many organizations flat-out don't need them anymore; networked printers, network-attached storage, or combination appliances like the Buffalo TeraStation handle the basics of file and print sharing at less expense and with less hassle. E-mail is better outsourced, as are many server-based productivity applications. If you have to run one (usually because you need a server-based productivity application that is only available that way), stick it in a data center somewhere and connect remotely. A good data center is going to be safer, more secure, and better protected against floods, fire, and electrical problems than your office. And if you can run remotely from your office, then you can run remotely from anywhere... making your disaster "hot site" any place you and your laptop happen to end up.
Allow staff to work off-site
Although this is usually seen as a controversial cost-saving move, it is also a terrific preparation for disruptive events. If you can structure your operations so that staff can work from home, then you have effectively enabled your staff to work from anywhere. If your office is destroyed or inaccessible, you're still in business. And by making this a regular business practice instead of just dropping in an emergency terminal server or installing GoToMyPC and hoping they will be useful when the time comes, you will find yourself forced to work out all the other operational kinks that can come with off-site workers... kinks that you don't want to discover for the first time in an emergency.
Integrate your recovery plans into your regular operations
There's a saying in the military: train like you fight, fight like you train. It means to make the plans you practice as close to the reality of a disruption as you can, and make your actions handling a disruption as close to your plans as possible. The best way to do this is to simply conduct your daily operations using the same systems and techniques that you might use to handle a disruption. There are a hundred ways to do this, from splitting production services between your main and "backup" server, to relying on the backup copies of your data when you run reports. This does two things for you: it eliminates the common problem of broken backup devices or services going unnoticed, because you use them regularly, and it forces you to be familiar and comfortable with the steps required to fail over to a backup if it should be necessary. Conventional DR plans call for testing restore procedures and devices. That never actually happens. You have to make it a necessity by integrating those procedures and devices into your operations.
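As one concrete sketch of the idea (the file path, table name, and freshness threshold here are all my own invented examples, not a prescription): a morning report script can read from last night's restored backup copy rather than the live data, so a silently broken backup job surfaces the next business day instead of during a crisis.

```python
import os
import sqlite3
import time

BACKUP_DB = "restored_backup.db"  # hypothetical path to last night's restored copy
MAX_AGE_SECONDS = 36 * 3600       # a restore older than ~1.5 days is a red flag


def daily_report(path=BACKUP_DB, max_age=MAX_AGE_SECONDS):
    """Run the morning sales total from the restored backup, not the live DB.

    If the restore job failed or stalled, this raises immediately,
    turning a silent backup failure into a loud operational one.
    """
    if not os.path.exists(path):
        raise RuntimeError(f"backup missing: {path} -- did the restore job run?")
    age = time.time() - os.path.getmtime(path)
    if age > max_age:
        raise RuntimeError(f"backup is {age / 3600:.0f} hours old -- check the restore job")
    with sqlite3.connect(path) as db:
        (total,) = db.execute("SELECT COALESCE(SUM(amount), 0) FROM sales").fetchone()
    return total
```

The point is not the report itself; it is that the backup copy sits on the critical path of a routine task, so nobody has to remember to "test the backups."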
Diversify your Internet services
You may have read some of the above suggestions and said "You're nuts... if a backhoe cuts my phone line, my staff can't get in, I can't reach my data-center-based server, etc., etc." All true... which is why you need redundant Internet services. Once upon a time, the instability and low bandwidth of Internet connections was a good argument for having servers on-site. But improvements in service have done something curious; while people still tend to keep servers on-site, they have increasingly come to rely just as heavily on other Internet-based services. When the connection drops, they are no less out-of-business just because their e-mail server is under their desk... they still ain't getting the messages. This is true of all sorts of business functions, which means that having servers and staff on-site is no guarantee of productivity in the event of an outage. The truth is, you need that backup Internet connection anyway. A 4G MiFi to back up your DSL, a TeraBeam dish to fortify your cable modem... portable solutions are the best. Buy two. Let people use them. Or get a simple load-balancing router, like the Duolinks SW24 2PORT Dual Wan Load Balancing Router, and use it.
Now, for my own dirty little secret; making all this happen is, in its own way, just as complicated and difficult as all those complicated and difficult looking disaster recovery plans everyone else pushes on you. It's not going to be easy to do all this, and it might even cost you more in absolute terms.
The big difference is that it's not an ancillary cost when you do it this way; you are actually working on real, productive improvements to your business. You get benefits from these things every day, not just in an emergency. And unlike a set of dusty plans sitting in a binder somewhere, if there is an emergency, you know that your solutions work, because they work for you every day.
Earlier this year, Google CEO Eric Schmidt somewhat infamously said "If you have something that you don't want anyone to know, maybe you shouldn't be doing it in the first place." This morning, we find out that Google didn't want anyone to know that they had given out an across-the-board 10% raise and $1000 bonus to all employees (why this is a bad thing escapes me) and have terminated the engineer who let the information slip out.
Of course, while it's funny, the hypocrisy is either over-simplified or flat-out unclear; Schmidt might argue that there are still consequences to be had and you should be prepared to accept them, or you could turn the statement around to apply it to the hapless engineer who somehow got caught making the leak.
It does, however, certainly serve to highlight the fact that Google is enormously secretive in certain respects, and that power hath its privileges... you can keep and take advantage of your secrets if you are Eric Schmidt, with your thumb on the throttle of the great Internet information vacuum cleaner. The rest of us unwashed schmucks had better watch our step.
I think I just single-handedly revolutionized programming languages for all time. I have invented... the "IF AND" statement!
You're probably familiar with the standard IF THEN ELSE block that is a staple in one form or another in just about any programming language. As I was jotting off a quick e-mail to a friend today, using the pseudo-code format that geeks such as myself often do in correspondence with others of our kind (or even less fortunate non-coding acquaintances, to their great chagrin), I wanted to request that he return to me one piece of information that I was sure he had, and another that he might. I was going to write "send me x and/or y" to indicate that the second item was optional, but I realized that in fact that logical statement can be satisfied by either x or y alone; returning both is merely optional for satisfying it as TRUE.
Then it came to me; I wanted x AND (IF NOT ISNULL(y) THEN y). But that's a lot to type. I figured I could get the idea across with a simpler x IF AND y.
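In Python, the semantics I had in mind might be sketched like this (the `if_and` helper and the example values are my own invention, purely for illustration):

```python
def if_and(x, y=None):
    """Hypothetical 'IF AND' operator: x is required, y is optional.

    Unlike a plain AND, a missing y does not make the request fail;
    y is simply included in the result only when it is available.
    """
    if x is None:
        raise ValueError("x is required")  # plain AND would also fail here
    if y is None:
        return (x,)      # y was optional, so its absence is fine
    return (x, y)        # both present: include both


# "Send me x IF AND y": x must come back, y only if he has it.
print(if_and("serial number"))                 # ('serial number',)
print(if_and("serial number", "license key"))  # ('serial number', 'license key')
```

Which is, of course, just "x, and y if you've got it" with extra steps... but where's the fun in saying that?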
Of course, I have not yet gotten a reply so I don't know if it worked. If it does, I can think of a whole host of situations in which it would be useful. If it doesn't, well, I guess I will just have to fall back on LOLCODE.