Staying Relevant to the Research Power User

Ensuring an institution’s IT infrastructure meets the needs of researchers both today and in the future

Higher education institutions consistently face pressure to satisfy the computing, storage and network requirements of their campus power users, the research scientists. A variety of technological trends are also putting added pressure on an institution's infrastructure. With the advent of cloud computing, and public infrastructure-as-a-service offerings available with the swipe of a credit card, how do higher ed IT departments keep up with the demands of these power users, stay relevant and provide high-value services on demand? In this web seminar, originally broadcast on May 19, 2015, a leader from Arizona State University described how ASU is meeting the needs of its power users both today and in the future, and the key considerations for doing the same at any institution.

Director, Research Computing
Arizona State University

Arizona State University is ranked No. 4 in the world for U.S. patents among universities without a medical school. Our research expenditures are about $700 million a year; they continue to trend upward, and we have nearly tripled them in almost 10 years. Furthermore, one of our primary goals as an institution is to make higher education available to all Arizonans, which, to us, also means making research computing available to all Arizonans.

The challenges we faced in building a software-defined data center are the same challenges that most universities face: We're trying to increase operational efficiency. We want to be able to leverage commodity hardware. We want to be vendor-agnostic—to have similar devices and network devices that we can unify and control through one model that is programmable and modular. And, of course, we want to increase the amount of time available for research, and to prevent the Wild West atmosphere that could exist by opening up our network. When we looked at our initial workloads that were defined in the "Condo of Condos" model that is so popular, we saw problems with utilization. Essentially, users come in and they buy cores for a particular job, but it all goes against one cluster; that's a one-size-fits-all model that doesn't necessarily deliver the operational efficiency we're aiming for.

So we've taken all of these resources, combined them into a research-as-a-service product, and also moved some of those workloads to the cloud to better utilize the resources that we do have. We're now creating a robust SDN firewall called FlowGuard. We're also building in a mechanism to take action after a policy violation—even after the fact—and essentially attach a response to it. Most of the network researchers find it appealing to be able to determine that there's been a policy violation and then reroute that traffic to another set of hosts. We found this is far superior to our previous remediation solution. With the choice of Brocade, we got a few bonuses. One is the hybrid mode to minimize the cabling, which is a big plus for us. Also handy is that sFlow provided some traffic monitoring while we worked out the components of our DDoS mitigation. And one thing we didn't expect is that Brocade had the best cost per 100 GbE and 40 GbE port among all the vendors we looked at.
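The reroute-on-violation idea described above can be sketched in a few lines. This is not FlowGuard's actual implementation; the flow-table model, field names and the quarantine port are illustrative assumptions about how an SDN controller might steer a violating flow to an analysis host set instead of dropping it.

```python
# Minimal sketch of rerouting policy-violating flows (hypothetical model,
# not FlowGuard's real code). A flow table maps (src, dst) matches to a
# forwarding action; violations are steered to a quarantine port.

QUARANTINE_PORT = 99  # assumed port leading to the analysis host set


def build_flow_table():
    """Each entry maps a (src, dst) pair to a forwarding action."""
    return {
        ("10.0.0.5", "10.0.1.7"): {"action": "output", "port": 3},
        ("10.0.0.8", "10.0.2.2"): {"action": "output", "port": 4},
    }


def violates_policy(src, dst, denylist):
    """A 'violation' here is simply a denylisted (src, dst) pair."""
    return (src, dst) in denylist


def reroute_violations(flow_table, denylist):
    """Steer violating flows to quarantine instead of dropping them.

    Returns the list of (src, dst) pairs that were rerouted.
    """
    rerouted = []
    for (src, dst), entry in flow_table.items():
        if violates_policy(src, dst, denylist):
            entry["action"] = "output"
            entry["port"] = QUARANTINE_PORT  # send to analysis hosts
            rerouted.append((src, dst))
    return rerouted
```

In a real deployment the controller would push the modified entry back to the switch (e.g., as an OpenFlow flow-mod) rather than mutate a local dictionary, but the control logic is the same.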

We go around campus and tend to find a lot of HPC users who have been doing something a certain way for some time and they don’t know why they’ve been doing it that way. It’s just the way things have been done. If you can break that mold, you can do some fantastic things and serve your community in a better way. You do have to think out of the box a little bit to supply these kinds of solutions. Brocade is a great partner to do it with because they can give you a solution that works right out of the box—you don’t have to bolt on too many pieces to get it up and running.

Director, Solutions Marketing

One of the main issues that CIOs from different universities around the country face is satisfying the insatiable appetite of the researchers on campus. It's a usual refrain that we hear—stories about researchers saying, "Why does it take a week for me to transfer my data set to a collaborator across the world? Why am I having to ship disks from my office to another location across the country or across town because it takes so long for me to transfer my data?"

The old IP served our enterprises well for almost 20 years, but the rate of technology change over that time period has been staggering, with smartphones, cloud computing, the internet of things, social media, and the explosion of the amount of data that needs to be managed, analyzed or stored. But the underlying technologies that support these new ways of doing business haven't evolved at the same pace. The growth of cloud computing (and the data storage needs that go along with it) and the rise of the mobile workforce are profoundly affecting data centers and IT organizations everywhere. The consumerization of IT has conditioned faculty and students to expect resources on demand, always on, and in a self-service manner, regardless of where the user is or where the data resides. These expectations have resulted in an IT relevance gap—that is, a gap between the users of technology and IT's ability to quickly and cost-effectively deliver those easily consumable services. In order for IT organizations to close that relevance gap, we need to offer services using the same flexible management model that external service providers are using. The new IP, which Arizona State is embracing, is enabled by software-defined networking, network virtualization, process reengineering and cultural shifts in the way universities think about IT. This enables a more agile, flexible and configurable network, and it leads to closer alignment between the IT department and the users at the university.

The new IP is open, but it’s more than just open. It’s open with a purpose. The openness accelerates the rate of innovation, reduces vendor lock-in, and reduces cost and complexity. The new IP is also integration-centric and software-enabled to improve time to value for the customer experience. For the past 20 years, any networking innovation has been extremely limited and at the discretion of the dominant vendors, but the new IP provides a platform for innovations to be developed by the end user. The new IP goes beyond single-vendor limitations to allow universities to keep pace with innovation by tapping into and building upon a vast pool of resources. It’s an ecosystem, not just a vendor-driven feature set. And the new IP is evolutionary. You are not forced into a disruptive rip-and-replace model or greenfield solutions. With the new IP, you evolve at a pace and approach to match your timeline, your budget, and what your users demand.

The new IP also helps your organization move from a world of static, constrained resources to a place where IT departments position themselves as a trusted provider of services who can quickly deploy and manage those services wherever and whenever to best meet your business objectives. Brocade's Intelligent Flow Management Solution addresses two significant problems that are commonly seen at research universities. One is large data transfers between known, trusted sources and destinations that take too long. There is a lot of collaboration that goes on between researchers across the world, and these data transfers can take days, if not weeks, to perform. Speeding those data transfers is a good goal. The second is operationalizing a science DMZ with commercially supported solutions. The benefits of the Intelligent Flow Management Solution are that it's modular, it's open, and it's software-driven. You can extend flow optimization down to individual requesters or users on the campus. Ultimately, it reduces the transfer time of these large data flows from days to minutes.
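The core of the flow-management idea above—spotting large transfers from trusted sources and steering them onto a fast path that bypasses deep inspection, as a science DMZ does—can be sketched as follows. The threshold, the trusted-source list and the data model are assumptions for illustration, not Brocade's actual API.

```python
# Illustrative sketch of science-DMZ-style flow offload: classify sampled
# flows so that large transfers from trusted sources take a fast path
# that skips stateful firewall inspection. All names and numbers are
# hypothetical.

ELEPHANT_THRESHOLD_BYTES = 10 * 1024**3  # treat > 10 GiB as a large transfer

TRUSTED_SOURCES = {"dtn.example.edu"}  # assumed data transfer node


def classify_flows(flow_samples):
    """Split sampled flows into fast-path candidates and normal traffic.

    flow_samples: iterable of (source_host, bytes_observed) tuples,
    e.g., aggregated from sFlow-style samples.
    """
    fast_path, normal = [], []
    for source, nbytes in flow_samples:
        if source in TRUSTED_SOURCES and nbytes > ELEPHANT_THRESHOLD_BYTES:
            fast_path.append(source)   # offload: bypass deep inspection
        else:
            normal.append(source)      # keep on the default, inspected path
    return fast_path, normal
```

In practice the "fast path" decision would be realized by installing a forwarding rule on the border switch for the matching flow, which is how a science DMZ keeps trusted bulk transfers off the inspected enterprise path.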

To watch this web seminar in its entirety, please go to

