College data backup in the age of the cloud

Best practices for backing up data and retrieving it quickly when necessary

In just three years, enrollment at Lone Star Community College grew by about 50 percent. The six-campus system, located in the north Houston metro area, now has more than 95,000 students and has experienced explosive data growth, as well—from 40 terabytes to 1.6 petabytes.

A data collection that big is hard to imagine. But as of April 2011, the entire U.S. Library of Congress had amassed 235 terabytes of data, and a petabyte is more than four times that, according to Michael Chui, a principal at consulting giant McKinsey & Company.

The growth prompted IT standardization in the sprawling, decentralized system and an overhaul of Lone Star’s data backup process and technology.

Reliably managing backups on any campus is complicated by the need to offer a high-availability IT environment while accommodating a BYOD student body whose devices and expectations greatly expand data volumes. Like other utilities, these services tend to be noticed only when something goes wrong.

Navigating these demands can be made easier by keeping in mind some of the following evolving best practices.

Defining recovery objectives

Officials must decide how far back recovery should reach and how quickly recovery must happen. They may also have to choose how much data they’re willing to lose in a backup failure.

“Whenever we look at backup recovery, every application, every chunk of data, gets treated that way,” says Link Alander, Lone Star’s chief information officer. “But everything we leave out there … can be recovered.”

At Oklahoma City Community College, business continuity and disaster recovery policies are established in consultation with individual academic divisions and operational business units.

They set service-level agreement-like parameters for recovery point objectives and pre-identify the applications and data containers that need to be brought back up, and in what order, says Rob Greggs, director of IT infrastructure. There are also plans for maintaining IT services even if, say, the campus were destroyed by a tornado.
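
To make that planning concrete, here is a minimal sketch, in Python, of how such SLA-like parameters might be recorded in machine-readable form. The application names, recovery tiers and RPO/RTO values are illustrative assumptions, not OCCC's actual plan.

from dataclasses import dataclass

@dataclass
class RecoveryTarget:
    name: str           # application or data container
    rpo_hours: float    # recovery point objective: maximum tolerable data loss
    rto_hours: float    # recovery time objective: maximum tolerable downtime
    restore_order: int  # lower numbers are brought back up first

# Hypothetical inventory; a real plan would come from the academic divisions and business units.
TARGETS = [
    RecoveryTarget("student-information-system", rpo_hours=1, rto_hours=4, restore_order=1),
    RecoveryTarget("learning-management-system", rpo_hours=4, rto_hours=8, restore_order=2),
    RecoveryTarget("department-file-shares", rpo_hours=24, rto_hours=48, restore_order=3),
]

def restore_plan(targets):
    """Return targets in the order they should be brought back up after a disaster."""
    return sorted(targets, key=lambda t: t.restore_order)

for target in restore_plan(TARGETS):
    print(f"{target.restore_order}. {target.name}: RPO {target.rpo_hours}h, RTO {target.rto_hours}h")

The point of writing the parameters down this way is simply that they can be reviewed with each unit and consulted in order when systems must be brought back up.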

Data backup terms to know

Backup policy: An institution’s rules and procedures to ensure adequate backups of files and databases so that they are preserved in case of equipment failure or other catastrophe. This should include frequent testing of the restoration process.

Backup site: A place where operations can continue after a data-loss event. Such a site may have ready access to the backups or possibly even a continuously updated mirror.

Backup window: The period during which a backup procedure can run. Backups can slow system and network performance, sometimes interrupting the primary use of the system—hence the need to schedule them for times when they will be least disruptive.

Disaster recovery: The process of restoring or recreating data. One of the main purposes of creating backups is to facilitate a successful recovery, a process that should be planned in advance and occasionally tested.

Full backup: The most comprehensive procedure, this backs up all files on the system.

Hot backup: This process allows users to make changes to the data while it is being backed up.

Incremental backup: Contains only the files that have changed since the most recent backup, enabling a quicker process (a minimal sketch of the idea follows this glossary).

Recovery point objective (RPO): How far back data recovery should reach; in effect, how much recent data an institution can afford to lose.

Recovery time objective (RTO): The target time for getting a system or network running again after a data-loss event.

Storage Area Networks (SANs): Dedicated networks that provide access to consolidated data storage.
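
To illustrate the incremental-backup entry above, here is a minimal sketch in Python that copies only files modified since the previous run. The directory paths and the timestamp-file convention are assumptions made for the example, not any vendor's product.

import os
import shutil
import time

SOURCE = "/srv/college-data"         # hypothetical data directory
DEST = "/backups/incremental"        # hypothetical backup target
STAMP = "/backups/last_backup_time"  # records when the previous backup ran

def last_backup_time():
    try:
        with open(STAMP) as f:
            return float(f.read().strip())
    except FileNotFoundError:
        return 0.0  # no prior backup, so this run behaves like a full backup

def incremental_backup():
    cutoff = last_backup_time()
    for root, _dirs, files in os.walk(SOURCE):
        for name in files:
            src = os.path.join(root, name)
            if os.path.getmtime(src) > cutoff:  # changed since the last run
                rel = os.path.relpath(src, SOURCE)
                dst = os.path.join(DEST, rel)
                os.makedirs(os.path.dirname(dst), exist_ok=True)
                shutil.copy2(src, dst)  # copy2 preserves file timestamps
    with open(STAMP, "w") as f:
        f.write(str(time.time()))

incremental_backup()

How often a job like this runs is, in effect, the recovery point objective: an hourly schedule means no more than about an hour of work can be lost.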

Rethinking data infrastructure

As policies and standards change, IT leaders must adjust not only their procedures but also infrastructure design.

“It’s the evolution of the business application, the evolution of and the development of new business processes,” says Greggs. “IT is about doing more with less and doing everything about the business model in a more efficient way. But that evolution requires constant attention and review.”

Accelerating change in technology and user demands means no solution lasts long. Until recently, investments in IT architectures and platforms could be expected to last at least three years or, with luck, up to seven. Now, there’s no such extended shelf life. “It’s not a set-it-and-forget-it type of thing; it’s a definitely dynamic and evolving thing,” Greggs says.

And, of course, these technology upgrades must be done within budget.

Moving to the cloud

Keeping up with increased traffic volume and shifting user demands requires maintaining operational flexibility, putting a premium on reducing complexity. This makes the idea of moving a portion of backup off-premises more attractive—to public, private or hybrid clouds, some of which are hosted and managed by outside vendors.

Georgetown University’s McDonough School of Business has managed to slash on-premises data backup by 70 percent, with most of that moving to a university-managed private cloud. Chief Technology Officer John Carpenter expects there will be no on-campus backup within two or three years.

In his view, backup is now so commoditized—and reliable—that outsourcing is a logical choice to free up in-house IT talent.

“Backup has become plumbing, just like wireless and wired communications,” he says. “I’m basically farming out all of the aspects that are plumbing and keeping in-house the things that are either critically important to us or a critical application, or things that are a little bit more cutting-edge, or in development or enhancements.”

The university’s central IT department stores the business school’s critical data, including student records. And though this is done in a private cloud, the business school can perform its own backups whenever necessary. This maintains the higher standards of service its students, future business leaders, have come to expect, including 12-hour restorations of their own laptops if needed.

The university as a whole is moving in the same direction, relying even more on hosted backup services. Carpenter says they are more robust than anything he, or any single institution, can reasonably afford to do in-house.

“I can’t think of a systemic reason data would be more secure on-site than in a cloud service, unless the idea is to completely cut it off from the internet as many intelligence services try to do,” says Carpenter, who has a naval intelligence background.

Redundancy is a critical component of a reliable cloud, whoever manages it. Lone Star operates a private cloud built on two data centers with identical IT guts—the same number of servers, the same backup systems and the same amount of storage, processing transactions simultaneously in real time.

One has greater built-in redundancy: more commercial power sources and internet pathways than the older data center. That newer center can withstand Category 4 hurricanes and is equipped with five generators and 10,000 gallons of fuel stored underground. Both are far enough inland from the coast—and 35 miles apart—that the risk of hurricane damage is not considered great.

Internal fire is a greater threat. Major nearby road construction prompted multiple scares last year when fiber or power lines were cut, taking a data center offline. Still, says Alander, “there was no impact to service, no blips to customers.”

Avoiding regulation and policy pitfalls

Not surprisingly, laws and regulations, and sometimes university policies, are out of sync with fast-developing advances in backup. A Texas law, for instance, requires Lone Star to retain its ERP data on physical tape, which in turn is moved to a facility run by storage and information management provider Iron Mountain; however, this data, and everything else the college needs, is also stored on disk.

“The idea of backups was always—even in the worst-case scenario—that you could fire everything back up again, [but] these tapes contain only data. They don’t contain server info or server configuration or anything else like that,” says Alander.

Physical tape has some value, he believes, because it’s cheaper for long-retention data, such as student records, which must be kept for 99 years under state law. When it comes to recovering data, there is no comparison: Disk recovery is nearly instantaneous; just locating data on tape can take hours.

Defining retention periods

Many institutions need clearer regulations on how long certain data has to be kept.

“I think this is something a lot of universities struggle with because we don’t have good categorization of data, so it’s difficult to figure out how long we should necessarily keep data,” says Alex Lawrence, senior systems administrator at Pacific Lutheran University in Washington. “You’ve got not only storage concerns but also legal concerns, because if you keep something too long that becomes a legal liability.”

Such issues are addressed at Pacific Lutheran by a committee that tries to keep up with best practices through Educause and by examining policies at other universities.

Roughly one-third of its most critical data and applications are retained on the campus, located just outside of Tacoma. That includes file servers, business databases, and student records stored on the Banner system—all of which is also backed up off-site.

Lawrence is focused on shoring up what have been weaknesses in the recent past, including backup verification. Officials recently decided to switch to a new vendor, in good part because of the automatic backup verification it offered. That eliminates the need for a separate verification license—and its added cost—and generates verification reports for every backup, every night, rather than the quarterly test restores performed previously.
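
The article does not name the vendor or describe its verification mechanics, but the basic idea can be sketched in a few lines of Python: recompute checksums of the backed-up copies each night and compare them against the originals. The paths are assumptions carried over from the earlier sketch; a commercial product’s verification is considerably more thorough.

import hashlib
import os

SOURCE = "/srv/college-data"     # hypothetical live data directory
BACKUP = "/backups/incremental"  # hypothetical backup target

def sha256(path, chunk_size=1 << 20):
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        while block := f.read(chunk_size):
            digest.update(block)
    return digest.hexdigest()

def verify():
    """Yield (relative path, status) for every file in the backup set."""
    for root, _dirs, files in os.walk(BACKUP):
        for name in files:
            copy = os.path.join(root, name)
            rel = os.path.relpath(copy, BACKUP)
            original = os.path.join(SOURCE, rel)
            if not os.path.exists(original):
                yield rel, "original missing (deleted since backup?)"
            elif sha256(copy) == sha256(original):
                yield rel, "ok"
            else:
                # a mismatch may also simply mean the file changed after the backup ran
                yield rel, "MISMATCH"

for rel, status in verify():
    if status != "ok":
        print(f"{rel}: {status}")

Running a report like this every night, rather than testing restores quarterly, is what closes the gap Lawrence describes.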

The operational simplicity makes it easier to quickly deliver flexible, scalable IT services. Supporting new classes, fundraising efforts or other critical systems used to take weeks of setup; now such services are expected almost on demand.

It helps that next-generation storage area networks are more flexible than ever. Increasingly, they include features for automated replication, deduplication, compression, indexing and verification.

This has allowed Lone Star’s high-availability architecture to run without a dedicated backup administrator, relying on just one full-time and one half-time SAN administrator. The school can dispense with local storage altogether.

It’s all further evidence of how rapidly IT technology changes. Just how individual IT administrators keep up is a matter requiring individual solutions.

Ken Stier is a Brooklyn, New York-based business and technology writer.
