April 14th, 2010 by Larissa Fair, Online Marketing Manager
Who is the man behind InteropNet? Aside from the service providers, show manager, and participants who provide the data for the network – the man behind it all is Glenn Evans, InteropNet Chief Engineer. Any other time of year, Glenn can be found as a Principal at Acrux Consulting in London. But twice a year, he oversees the technical design and infrastructure of the InteropNet NOC. This is Glenn’s fourteenth (yes, 14!) year with InteropNet.
His previous roles include volunteer, team lead, and lead engineer. Glenn is excited that this is the first year the data centers will be up and running year round. I had the opportunity to chat with Glenn on the phone all the way from the U.K., before he heads back to Sin City next week.
Q: What’s cool about InteropNet this year?
A: The way we do it is cool – but in terms of individual technologies, the remote data centers are something different this year. We're using those facilities to provide services year round, not just at show time.
One of the basic goals we’re looking at this year is to engage the attendee base in a more consistent manner. Being able to host the systems off the show floor year round is a big benefit to us.
There is an architecture change around this: we've separated out the network into a border/service provider edge network in those locations. What you see on the show floor is essentially an enterprise core/edge network.
Q: This is the first year the data center will be running year round. Is it done by using more agile or cloud computing technologies? What are the benefits and challenges?
A: EM7 (for example) sits in the data center and is available year round. So we're gathering statistics across all of the data centers – around firewalls, upstream connections, and of course the show itself when it goes live. We'll get to see what's happening on the network year round, which is something we've never done before: statistics on what happens when we bring the network up for the show, and a comparison of the noise on the network before and after it. We'll be able to track attacks and see how they differ across areas, and we'll be able to look at available bandwidth and at various threats on the network.
The other benefit is working with so many providers. We’ll be able to show different demonstrations of multiple products, all working together to manage multiple data centers.
Q: Speaking of multiple providers, InteropNet is always about interoperability. One of the challenges with InteropNet is the short timeframe available to set up the data center(s). How does this time constraint differ from a corporate data center build?
A: When enterprises design data centers they use equipment from maybe half a dozen companies, while we use 15-20. The number of participants we have in InteropNet is certainly a challenge, and that's slightly different from the standard corporate world. Corporations and enterprises are often constrained by budgets: you might have equipment A for one thing and equipment B for another, and the choice is often based on cost.
We have the advantage of being able to pull all of the equipment together and use the best solution available. We look at management, we look at troubleshooting, we look at cabling, and the amount of control we have over the data center.
Compared to an enterprise, which has the luxury of laying it out over a period of time, we have to evaluate the logistical challenges in a matter of weeks.
We have equipment on the show floor, and spread out across the conference space, plus the co-located data space. Managing all that alone is a challenge. Sometimes equipment doesn’t arrive when you expect it, or it’s missing parts, so you have to account for those issues. The advantage is we get to do everything at once to get it up and running.
Q: Is there distributed architecture for InteropNet? How does it work?
A: We have centralized some of the core infrastructure, and we have distributed the Border Network to the Qwest CyberCenters. At show time, we'll have some other things coming up around the show floor – management and collection of data across systems. Our server platform is spread across five locations, and we are minimizing what we put on the show floor. This is a cost-saving effort, and it also lessens the number of trucks that come to the show, so it's a little more eco-friendly. This is the first time we've really focused on doing some of that; we can't totally quantify the ROI or the benefits yet, but we hope to see some advantages.
We'll be using the same co-location facilities for New York. For Vegas, we're primarily out of Sunnyvale, CA, with Denver, CO as the secondary. Interop New York's data centers will be primarily in New York with Denver as the secondary. Denver is our secondary facility overall. From a services perspective we use Denver as the primary location; from a connectivity perspective we pick the closest location.
Q: Are there any new technologies being used for InteropNet this year that haven't been used in the past?
A: It's more new concepts than new technologies. We are using different platforms to manage the network and its systems. The trick is to provide an umbrella platform that correlates and displays all the data.
ScienceLogic is the core management platform. We try to design for consistency; management is important in how the data center is run. It's not a new concept – we've done it in the past – but we want to make it better. We're trying Unified Communications for the first time, and we're looking at mobility and other aspects that are new this year. We want true collaboration and internal communication.
Q: What are you looking forward to the most with this year’s InteropNet?
A: For it to be over.
I want to see how all the concepts that we’ve come up with actually work in a real environment. Have we made the right decision? Do we need to make changes? We want to work with the providing companies, the attendees, and the show manager to make sure this is the best data center possible. And then we want to take that information to make it even better next year.