Our Channel Alliance Manager, John Willsey, headed up our participation in InteropNet this year along with our Online Marketing Administrator, Will Boyd. In the name of interoperability, we added a number of new integrations and enhancements to our network management platform, and a wealth of new events was generated through Trap and Syslog monitoring and other new data sources like F5. Here are a few examples of what we did, along with some actual screenshots.
Hacker notifications via email: The email is automatically triggered and includes the information below along with a link to the event.
First Occurred: 2013-05-14 19:27:28 PDT
Last Occurred: 2013-05-14 19:27:27 PDT
Organization: DEN Colo
Message: [multiple] Invalid user oracle from 184.108.40.206
Sent by Automation Action: SILO Only
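Our platform handles this with its own automation actions, but the underlying pattern is simple enough to sketch. Below is a minimal, hypothetical Python example of the same idea, watching a syslog file for failed-login messages and emailing an alert; the file path, pattern, SMTP host, and addresses are all illustrative placeholders, not our actual configuration.

```python
import re
import smtplib
from email.message import EmailMessage

# Hypothetical values -- substitute your own syslog file, SMTP relay, and addresses.
SYSLOG_PATH = "/var/log/auth.log"
ALERT_PATTERN = re.compile(r"Invalid user (\S+) from (\S+)")

def send_alert(message: str) -> None:
    """Email a single alert line to the on-call address."""
    msg = EmailMessage()
    msg["Subject"] = "Possible intrusion attempt"
    msg["From"] = "noc-alerts@example.com"
    msg["To"] = "oncall@example.com"
    msg.set_content(message)
    with smtplib.SMTP("smtp.example.com") as smtp:
        smtp.send_message(msg)

def scan_syslog(path: str = SYSLOG_PATH) -> None:
    """Scan a syslog file and alert on each failed-login line found."""
    with open(path) as log:
        for line in log:
            match = ALERT_PATTERN.search(line)
            if match:
                user, source_ip = match.groups()
                send_alert(f"Invalid user {user} from {source_ip}: {line.strip()}")

if __name__ == "__main__":
    scan_syslog()
```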
Proactive notifications for any WAN routing issues that would impact our customers (in this case, show attendees). All of these messages are routed immediately to our Routing Engineering specialist in the NOC:
WAN routing dashboard
A new integration with F5: Dave already discussed this integration in his Interop blog post, but it was an incredibly robust integration done completely on the fly at HotStage, so here it is again.
DCM view of F5 LTM Logical Structure
Node performance characteristics, such as traffic to the Interop.com web site(s) broken down by IPv4 and IPv6:
Node Performance dashboard
A live map of the Show Network’s status, built by Will, that was a major crowd-pleaser:
Show Network dashboard
A big thanks to NOAA, which supplied the background for another popular dashboard from our network management system. This one shows the status of our WAN/Internet connections:
WAN/Internet dashboard
Our fifth time managing InteropNet was another success, and we look forward to doing it again in the years to come!
Tagged with: Interop, Network Management
A couple weeks ago we sent a large team of “ScienceLogicians” to Interop Las Vegas. Everyone had a fantastic time, and as Dave already described, it was an extremely successful show for us (particularly because our network management platform won Best of Interop!).
Here is a roundup of memorable moments from a few of the team members that attended the show:
Rick Larson, one of our newest Account Executives, said, “Interop was amazing!” It was Rick’s first show with us, and as a remote employee based in Denver, he said it was great to spend time with ScienceLogic team members, who made him feel very welcome. He learned a lot at the show and met tons of new people!
Our Senior Marketing Events Manager said, “What stood out the most for me was meeting one of our star customers who LOVES us and raved about how fast we were up and running making him money before the ink on our contract was dry.”
Rob White, our Director of Federal Sales, had this to say: “I’ve been at ScienceLogic for four and a half years now and I’ve worked many trade shows during that period. This year was completely different from those in the past. We have so much swagger, and there was a tremendous amount of interest from attendees. I even had someone walk up and say ‘we can probably never afford something like this…’ I like the fact that our perception has gone from Kia to Lexus in a few short years even though we’re still a cost-effective solution.” (No offense to Kia, of course!)
John Proctor, one of our Senior Product Managers, commented, “One of the things that stood out to me was that the ‘network’ is not getting any smaller. In fact it is growing. Whether it is the ‘internet of things’ that was discussed in one of the keynotes, the BYOD initiatives, or the visibility of IPv6, it is clear that there will be many more devices connected to the network.”
Erik Rudin, our VP of Business Development, stated, “Interoperability is at the core of what we do, as proved by our network management platform supporting InteropNet five years running. I had several conversations at the booth about interoperability among scores of vendors to build and support the largest temporary network in the world. That kind of requirement puts a lot of demand on your tool set: it must be configurable, adaptable, quick to deploy new devices, and interoperable with new technologies and APIs. Our ‘Best of Interop’ award was a validation of our approach to interoperability.”
Erik also said, “People were excited to see the ‘Best Network Monitoring System on Earth’ and they were astounded when they saw the breadth of device coverage and platform components (e.g., ticketing, event management, portal, Run Book, topology, asset).”
We clearly had a fun and productive time at Interop! We look forward to repeating our success at Interop New York in just a few short months.
Tagged with: Best of Interop, Network Management, network monitoring system
As I recap ScienceLogic’s efforts and ingenuity delivered for Interop 2013, the list of items I could talk about is so large that this topic will likely find its way into multiple posts over the next few weeks. First of all, I have to congratulate the entire ScienceLogic team. As the great college basketball coach John Wooden often said courtside, “We play to win, but if you want to be ‘the star’ then you will have to go and play for another team.” At UCLA the star was Coach Wooden’s team. That was his system. The team was the star. I feel the same way about winning the 2013 Best of Interop award for Management & Monitoring.

It takes so many facets of our business to get it right: with the product, with our customers, with our marketing and positioning, and with the vision of the company. It is rarely about one person doing something spectacular without the accretive assistance of a broader team. We are honored to have been selected as this year’s winner, and I could not be more proud of the entire ScienceLogic team. I am very grateful for the staff’s hard work and dedication. The current technology environment demands that ScienceLogic continue to redefine itself by investing at five times the industry average in R&D and applying our engineering strengths in innovative, high-impact ways. That is how you make a difference and, in time, from my perspective, that is how you win these kinds of awards: with a principled vision, intense focus, and the persistence to keep working on complex problems until you create novel solutions.
So that brings us back to the title. Interop is all about interoperability, and our mission from the first day we started the company was to provide a platform that could examine the heterogeneous set of technologies you find humming along, often in multiple interconnected data centers, so the context of their current health could be leveraged to understand real-time IT operations service delivery. For five years we have arrived at Interop and delivered our network management product, plus our engineering resources, to rapidly manage InteropNet, often described as “the largest temporary network in the world.” Each year we face hundreds of operational challenges, including interfacing with brand-new equipment from new vendors (sometimes running early-release code) that has to work together with the fluidity needed to carry the demanding service levels these events require. One new integration we did on the fly this year was with F5, another InteropNet sponsor (see one of our F5 dashboards below). What happens in the NOC stays in the NOC, but we can self-assuredly say that what we learned this year has made our product more resilient and smarter about the latest revisions of the technologies in the network backbone, and has made us more confident in our approach to proactively finding problems before they impact availability.
DCM view of F5 LTM Logical Structure
Over 1,000 show attendees visited our booth; it was the most packed our booth has ever been in our seven years at Interop. The company brand is truly reaching a global audience, with more than 20% of badge scans coming from international IT directors and organizations.
Interop 2013 was a great show for us. Given the success of the show for the entire team, we eagerly anticipate what is in store at Interop NY this fall.
Tagged with: Best of Interop, Dave Link, Network Management
NoVA-Python is a local group for enthusiasts of the Python programming language. On the third Thursday of each month the NoVA-Python user group meets here at ScienceLogic to exchange ideas, catch up on each other’s work, and often to listen to a presentation about an interesting Python technology given by one of the group members. The meetings are a way for local “Pythonistas” to network and continue learning about Python in a relaxed setting.
At a recent NoVA-Py meetup, our own Brendan Mannix, Manager of Test Automation, gave a presentation on some popular test automation frameworks. He demonstrated four separate frameworks designed to automate application testing from the unit-test level all the way through stress testing, with an extended look at automated user acceptance testing. While some meetups follow a formal presentation style, others are open discussions, group hack sessions, or even just grabbing a burger and cracking a few jokes about other popular programming languages.
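The specific frameworks from Brendan’s talk aren’t listed here, but as a taste of the unit-test end of that spectrum, here is a minimal example using Python’s built-in unittest module; the function under test is an invented stand-in, not code from the talk.

```python
import unittest

def normalize_hostname(name: str) -> str:
    """Lower-case a hostname and strip surrounding whitespace."""
    return name.strip().lower()

class NormalizeHostnameTests(unittest.TestCase):
    def test_strips_whitespace(self):
        self.assertEqual(normalize_hostname("  router1  "), "router1")

    def test_lowercases(self):
        self.assertEqual(normalize_hostname("ROUTER1"), "router1")

if __name__ == "__main__":
    unittest.main()
```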
Python is a core technology at ScienceLogic and we are proud to host the local user group meetings. It’s in our interest to promote the continued growth of these technologies and hosting the NoVA-Py meetups is a great way to encourage that growth while connecting us to the community.
The next meetup is tonight, May 16th, at 7pm, and the topic will be “Side Projects”.
To join the NoVA-Python community, sign up at http://www.meetup.com/NOVA-Python/ or follow them on Twitter at @NOVAPython.
Python is an interpreted, object-oriented, high-level programming language with dynamic semantics. Its high-level built-in data structures, combined with dynamic typing and dynamic binding, make it very attractive for Rapid Application Development, as well as for use as a scripting or glue language to connect existing components together. Python’s simple, easy-to-learn syntax emphasizes readability and therefore reduces the cost of program maintenance. Python supports modules and packages, which encourages program modularity and code reuse. The Python interpreter and the extensive standard library are available in source or binary form without charge for all major platforms, and can be freely distributed.
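As a quick taste of those high-level data structures and that dynamic typing in action:

```python
from collections import Counter

# High-level built-in data structures: one line to tally occurrences,
# with no type declarations required.
devices = ["router", "switch", "router", "firewall", "router"]
print(Counter(devices).most_common(1))  # [('router', 3)]

# Dynamic typing: the same code works on any iterable, including strings.
def most_common_item(items):
    return Counter(items).most_common(1)[0][0]

print(most_common_item("abracadabra"))  # 'a'
```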
Tagged with: Northern Virginia Python Users Group, Side Projects, test automation frameworks
Here are the rest of my thoughts from the recent Hosting & Cloud Transformation Summit (HCTS) with The 451 Group in London. Be sure to read Part 1.
Do Less with Less
Recent 451 addition Tony Bishop hosted Digital Realty in a panel that saw the wholesale data center behemoth admit to seeing smaller kW pools from customers, intensified expectations of one-stop shopping from their data center providers, and a need to be more in touch with the needs of agile apps. The BBC validated the trend, describing its use of Salesforce.com while simultaneously refreshing its SAP footprint and classifying the data in that system for future manipulation. At the same time, the company leverages Google (one billion video users per month) to deliver video content to its online customers. The move to a hybrid model frees up internal data center space by leveraging things like SaaS to replace legacy email and CRM platforms. However, as the company adds suppliers and complexity, it has demanded a service catalogue and single sign-on from its vendors in order to reduce complexity for its end-users.
So does that mean everyone is trying to tie together more of the stack with facilities and app intelligence in order to do more? On the contrary, they’re trying to do less with less, an interesting concept from CohesiveFT, which dismissed the legacy mantra of doing more with less as passé. The reasoning is that people want to connect fixed assets (large data center capacity) to variable assets (i.e., flexible cloud), enabling them to do less (less burdensome day-to-day work) with a lot less (less responsibility for, and cost of, the underlying infrastructure over the long term). In sum, data centers were designed for operations, while enterprise IT, leveraging the variable assets, is being designed for purpose. As the HSBC representative emphasized, enterprises today must have a strategy for classifying services and their fulfillment models, and that orientation needs to start from the business strategy down in order to get a better view of cloud and data center requirements.
How much to invest in future data centers?
As has often been discussed, the future role of the Operations Manager will inevitably shift to that of Service Orchestrator. Data center infrastructure continues to commoditize, and open source technologies are the proof point. Take, for example, the Open Compute initiative, which many believe signals the end of the line for traditional IT servers. As Chris Swan pointed out, with the massive technical debt organizations carry nowadays, it is becoming highly efficient to reduce that debt through things like Open Compute technology. Even Google, which created MapReduce (the inspiration for Hadoop) for internal infrastructure efficiency, is now seeing tremendous take-up of that approach by its friends at Microsoft and Apple. And if Google is setting the stage for emulation, bear in mind that its self-built data centers make Google what Gartner considers the world’s 4th-biggest server builder. The focus for operators (whether enterprise or MSP), then, is to orchestrate deployment and drive higher utilization further up the stack.
Best Execution Venue
For the IT/Ops manager acting as Service Orchestrator, or broker, it can get incredibly confusing to choose from over 270 cloud infrastructure (IaaS and PaaS) providers and a myriad of SaaS providers growing faster than any other segment of technology. This has created a new breed of service providers, such as Gravitant. The company had a customer that tried AWS as its foray into the cloud but was disappointed by the lack of a meaningful SLA. In leveraging Gravitant to choose among five different providers, including AWS, the customer found that Terremark and Savvis ended up more utilized, due especially to backup and DR requirements. The broker helped navigate the differences in charges and SLAs that can hinder, and sometimes fool, enterprise users.
It’s all about chatty apps
We live in an increasingly collaborative business place, and these apps are no longer seen as friendly social media outlets for one’s random thoughts. Rather, the surge in popularity of apps like Lync in the workplace has brought additional demands with it. One look at the number of PowerShell cmdlets available for Lync gives a rapid understanding of how central such an app can be to all business communications and collaboration. More critically, these collaborative apps are driving many of today’s architectures and seeing much faster life cycles than previous disruptive technologies.
To illustrate the point, 451’s Alan Pelz-Sharpe took a look at the history of the collaboration marketplace. The market evolution started with document management and knowledge management systems pre-2004. Around 2004, email systems such as Exchange and Outlook exploded globally as pivot points for collaboration, shared documentation, calendaring, and so on. These were followed by departmental shared network drives, which quickly became overwhelmed. After 2008 came the viral growth of SharePoint, followed in 2011/12 by cloud-based file sharing and mobile sync. The point is that nothing goes away, but most industries are seeing disruption cycles get shorter and shorter, so tools must adapt to that pace; teams whose tools support agility and can pivot faster than competitors will be the winning operations teams of the future.
Tagged with: apps, Hosting & Cloud Transformation Summit, hybrid cloud, Service Providers, storage-defined networks, The 451 Group
Say goodbye to hosting as we know it. This was one of the messages that sprang from my panel at the recent Hosting & Cloud Transformation Summit (HCTS) with The 451 Group in London. My dramatic point was intended to illustrate the mammoth changes afoot in the managed service space in response to a rapidly changing and consolidating cloud landscape. This was hammered home by the unsurprising uptick in rollups of managed hosting providers by telcos and ISPs looking for a leg up in the impending cloud wars. As Peter Hopper from DH Capital calculated, there were 56 M&A transactions and 47 private capital placements in the hosting and service provider space in 2012, amounting to $6.4bn changing hands. No wonder, then, that so many enterprises, bankers, technologists, and service providers attended the annual event, all looking to ensure they didn’t miss the next big investment or cloud trend among service providers.
Are you Chicken Farming?
As Joe Baguley from VMware boldly explained it: IT departments tend to look after their kittens, giving them names, taking them to the vet when they’re sick, and looking after their little workloads. AWS, on the other hand, is cultivating a culture of chicken farming: if one is sick, kill the chicken and get on with the rest. The issue with this approach is that whoever denounces the cloud as broken is either not trying it or has put their kitten in the chicken farm. The tension is that people are deploying kittens but want chicken-farm pricing. As enterprises become serious about the cloud, they seek the resiliency and fault tolerance that are critical to modern workloads, which is why VMware feels AWS will go the way of the IBM AS/400, which also started with a bang in the marketplace. However, the race is on, and demand will invariably drive AWS to attend to those same market needs. In the meantime, modern-day brokers are helping people choose execution venues based on internal policies that best suit different workloads.
Other companies, such as CloudSoft, enable users to move apps around at the software/app layer when, as the CTO put it, the wind blows the wrong way and the chicken farm smells. But it is the decision-making prior to the portability and motility of those workloads that is evidently becoming key to any execution platform. It is also why we at ScienceLogic are aggressively building the platform for these hybrid cloud executions; many of our MSP customers themselves leverage alternate venues on occasion rather than force workloads onto their own infrastructure when it is unnecessary. The decision to execute workloads in one cloud may be a long-term one, but where inside that cloud (i.e., which geographic region, and which best-performing or most cost-efficient zone) is just as imperative. As the Director of ICT for the UK Parliament pointed out, the innumerable metrics now available from any cloud platform to validate the performance of a workload require a modern kind of monitoring and management tool in order to create any assurances.
Drivers of change in our cloud environment?
So what are the workloads driving all of the cloud discussions today? The tremendous growth of collaborative apps and of storage-defined networks are the pre-eminent drivers of cloud usage today. From a storage perspective, a number of deficiencies in existing systems were pointed out. For example, most legacy storage systems were not designed for multi-tenancy, nor were they developed for virtual environments. Similarly, there has been a lack of QoS in this sector, since it is so difficult to predict or guarantee performance. Finally, storage is hard to scale: once you fill it up, you either end up with multiple systems as you scale, or you upgrade and go through a painful migration. While physical storage costs are declining, the real cost comes in the form of managing this process as the complexity continues to climb.
So how have MSPs dealt with these issues to date? Often ring-fencing is used, with an advance reservation of compute and storage alongside application profiling to ensure the availability of resources. This can be effective, but it is costly due to the high overheads associated with the approach. What has instead emerged in the last 3-6 months is a software-defined approach to storage. Software-defined storage in itself is not new; RAID and cache management have long been software-driven, and vendors in this space have been making large margins off storage management software. However, there is a set of new disruptive technologies entering the market, such as flash-optimized storage, object-based scale-out technologies, and SSD providers like our partners SolidFire and Intel, all driven by software-defined approaches.
Some of the attributes of a software-defined approach, per The 451 Group:
- The software/storage runs on commodity (x86) hardware
- A software approach allows easier scale-out of storage, such as CloudFounders’ ability to detach the storage from the physical hardware in order to move data around for scalability and DR purposes
- A unified storage layer: we are starting to replace the legacy storage silos speaking different protocols (such as NAS, SAN, etc.) with a multi-protocol layer
- Leveraging open source, which until now has had little impact on storage compared to the services space
- API-based provisioning, management, and integration, so that in an age of hybrid environments and toe-dipping in the cloud, some data sets can be pointed to public sources and others to internal IT storage
Ultimately, things like de-duplication, tiered arrays, and flash drives are fast becoming the norm. The real challenges, however, center on scalability, common standards for the portability of highly fluid applications, and the QoS currently missing from cloud storage plays, where IOPS is fast becoming the true measure of a cloud platform’s effectiveness. To these points, there are increasing examples of very chatty, rapidly up- and down-scaling apps leveraging AWS S3 to such a degree that it is fast becoming the default standard, with the market emulating its API in lieu of a formal interoperable standard. All the more need, then, for a higher-level control plane from which to manage all of these technologies.
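That de facto standardization is easy to see in practice: most S3-compatible object stores can be driven by the same client code simply by pointing it at a different endpoint. Here is a minimal, hypothetical sketch using the boto3 library; the endpoint URL, bucket name, and credentials are placeholders, not a real provider.

```python
import boto3

# The same S3 client code works against any S3-compatible store;
# only the endpoint (and credentials) change. All values below are placeholders.
s3 = boto3.client(
    "s3",
    endpoint_url="https://objects.example-provider.com",  # hypothetical S3-compatible endpoint
    aws_access_key_id="EXAMPLE_KEY",
    aws_secret_access_key="EXAMPLE_SECRET",
)

# Standard S3 API calls, regardless of which vendor implements the backend.
s3.create_bucket(Bucket="demo-bucket")
s3.put_object(Bucket="demo-bucket", Key="hello.txt", Body=b"hello, object storage")
print(s3.get_object(Bucket="demo-bucket", Key="hello.txt")["Body"].read())
```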
Check back tomorrow for part two of my experiences at this year’s HCTS.
Tagged with: apps, Hosting & Cloud Transformation Summit, hybrid cloud, Service Providers, storage-defined networks, The 451 Group
The data center has seen dramatic changes over the last several years. Virtualization, high-speed internet links, and increased processor capability have completely changed the business model as well as the underlying technology, enabling new business models such as cloud computing, computing on demand, and SaaS. These models in turn put ever more pressure on the underlying network infrastructure. The following are some of the major trends in the data center, along with the underlying technologies Cisco has deployed on the Nexus platform to support them:
1) Server Virtualization – Server virtualization, along with increased processor capacity, has led to very dense compute models with hundreds of virtual machines on a single server, which is what enabled these new business models. Dense compute models increase the need for redundancy and additional bandwidth (BW) in the uplinks from the server to the switch. To address this, Cisco created virtual PortChannels (vPC), which not only increase the number of uplinks connected to a server but also let those uplinks be split across two different physical switches. All uplinks can be active simultaneously, increasing network capacity while adding a level of redundancy.
2) VM Mobility – VMware’s vMotion introduced the ability to move a virtual machine from one physical server to another. It is this capability that becomes really important for managing workloads and providing on-demand computing. There were initially a lot of restrictions on the concept due to network and storage constraints, but vMotion set the stage for VM mobility, and several technologies have since been created to support it:
- Cisco OTV – Overlay Transport Virtualization – OTV encapsulates Layer 2 Ethernet traffic within IP packets, extending the LAN across data centers. This enables vMotion across data centers, which can be used to move workloads to servers in other data centers with spare capacity, as well as for data center maintenance, disaster avoidance, and so on.
- LISP – Locator/ID Separation Protocol – After a vMotion occurs across data centers, LISP enables more efficient routing of packets by separating location information from Endpoint Identifier information.
- TRILL – Transparent Interconnection of Lots of Links – TRILL brings routing capabilities to Layer 2, which enables large Layer 2 networks to be created. Rather than having vMotion constrained to a pod within your data center, TRILL allows for very large Layer 2 networks, so vMotion can operate anywhere within the data center.
- Cisco FabricPath – a suite of technologies based on TRILL. FabricPath uses TRILL for its L2MP capabilities and also supports non-FabricPath switches by leveraging vPCs.
- L2MP – Layer 2 Multipathing – a generic term for technologies, such as TRILL, that enable multiple active paths between devices at Layer 2. Today the Spanning Tree Protocol (STP) limits the connections between devices to a single active link; L2MP overcomes that limitation.
- VPLS – Virtual Private LAN Service – another mechanism for providing LAN connectivity across an IP/MPLS network. Like OTV, it can be used to extend Layer 2 across data centers, though VPLS requires mesh connections between sites and OTV has some flooding advantages over VPLS.
- SPB – Shortest Path Bridging – an IEEE standard (802.1aq). SPB is an L2MP technology similar to TRILL; in other words, it is a standard competing with TRILL to replace the Spanning Tree Protocol. With Cisco’s support behind TRILL, I would bet on TRILL.
3) Server-to-Server Communications – According to Cisco, 76% of all traffic stays within the data center, meaning a huge portion of traffic is east-west. This is a shift from data centers of the past, where traffic was predominantly north-south. (North-south traffic flows between a client and a server; east-west traffic flows between servers.) Increased east-west traffic stems from application-to-application, application-to-database, and application-to-storage communications, plus of course the large increase due to VM mobility. The same technology that scales the Layer 2 network also provides the communication paths to support east-west traffic: TRILL supports many links between devices, effectively providing mesh connectivity while making it easy to add links to congested areas.
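As an aside, the east-west/north-south distinction is easy to express in code. Here is a small, hypothetical Python sketch that classifies a flow by whether both endpoints sit inside the data center’s address space; the prefixes are illustrative, not from Cisco’s material.

```python
from ipaddress import ip_address, ip_network

# Hypothetical data center prefixes -- substitute your own server subnets.
DATA_CENTER_NETS = [ip_network("10.10.0.0/16"), ip_network("10.20.0.0/16")]

def is_internal(addr: str) -> bool:
    """True if the address falls inside one of the data center prefixes."""
    ip = ip_address(addr)
    return any(ip in net for net in DATA_CENTER_NETS)

def traffic_direction(src: str, dst: str) -> str:
    """Classify a flow as east-west (server to server) or north-south (client to server)."""
    return "east-west" if is_internal(src) and is_internal(dst) else "north-south"

# Example flows: a web server talking to a database vs. an outside client request.
print(traffic_direction("10.10.1.5", "10.20.3.9"))    # east-west
print(traffic_direction("203.0.113.7", "10.10.1.5"))  # north-south
```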
4) Fabric Unification / Network Convergence – Both terms refer to unifying the underlying infrastructure for SANs and LANs. Instead of completely separate networks, cabling, and equipment for SANs and LANs, a converged network uses Ethernet for both, which greatly simplifies the data center. Key technologies introduced to support this are as follows:
- FCoE – Fibre Channel over Ethernet. FCoE is an INCITS T11 standard that specifies how Fibre Channel (FC) traffic can be carried over Ethernet links.
- DCE – Data Center Ethernet. DCE was a term used by Cisco prior to the standard DCB work; it included not only DCB-like capabilities but also L2MP technology.
- DCB – Data Center Bridging. DCB is a set of IEEE standards that enhances Ethernet to support FCoE: FC requires a lossless network, while classic Ethernet is lossy. The key technologies DCB provides are as follows:
- PFC – Priority-based Flow Control. PFC provides a mechanism to enable flow control per traffic class on an Ethernet interface.
- ETS – Enhanced Transmission Selection. ETS provides the ability to specify how much bandwidth each class of traffic may use, ensuring that neither LAN nor SAN traffic consumes the entire link (see the toy illustration after this list).
- Congestion Notification – Provides the ability to notify a traffic source that congestion exists so the source can throttle its traffic.
- DCBX – Data Center Bridging Exchange protocol. DCBX is used to discover DCB peers and exchange DCB configuration information with them.
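To make the ETS guarantee-and-borrow behavior mentioned above concrete, here is a toy Python model. It illustrates the scheduling idea only, with invented numbers; real ETS is implemented in switch hardware per IEEE 802.1Qaz.

```python
# Toy model of ETS-style bandwidth allocation: each traffic class is guaranteed
# a percentage of link bandwidth, and bandwidth left unused by one class can be
# borrowed by the others in proportion to their shares.

def ets_allocate(link_gbps: float, shares: dict, offered: dict) -> dict:
    """Return per-class bandwidth given ETS shares (%) and offered load (Gbps)."""
    alloc = {}
    leftover = 0.0
    hungry = {}  # classes offering more traffic than their guarantee
    for cls, pct in shares.items():
        guarantee = link_gbps * pct / 100.0
        if offered[cls] <= guarantee:
            alloc[cls] = offered[cls]
            leftover += guarantee - offered[cls]
        else:
            alloc[cls] = guarantee
            hungry[cls] = pct
    # Redistribute unused bandwidth to over-subscribed classes by their shares.
    total_hungry = sum(hungry.values())
    for cls, pct in hungry.items():
        extra = leftover * pct / total_hungry if total_hungry else 0.0
        alloc[cls] = min(offered[cls], alloc[cls] + extra)
    return alloc

# A 10 Gbps link: SAN traffic guaranteed 60%, LAN 40%. LAN is quiet, so SAN borrows.
print(ets_allocate(10.0, {"SAN": 60, "LAN": 40}, {"SAN": 9.0, "LAN": 2.0}))
# {'SAN': 8.0, 'LAN': 2.0} -- SAN gets its 6 Gbps guarantee plus the 2 Gbps LAN left idle
```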
Tagged with: Cisco, cloud computing, data center, server virtualization
Michael Barrett, Chief Information Security Officer at PayPal, was the lead-off keynote speaker on day 2½ (if you count the first night as a half day). He spoke on password security in the world of the “Internet of Things,” and he started his keynote focused on the history of passwords. “Passwords really started back in 1961 with the mainframes,” Michael said. It was a time when you would timeshare workloads on those mainframes; getting access was as simple as seeking out the system administrator, signing up for a workload time slot, and being issued access keys at your slot to start your work. Michael went on to paint a picture of passwords today in the life of a user. We all have more usernames and passwords than any one person should, and because of the pain of remembering them all, we reuse the same set of keys over and over, making them more and more useless. I agree with Michael; with the sheer volume of passwords I have myself, it is crazy trying to recall them all, and I have personally used and evaluated tools like LastPass to help address the need. Michael’s keynote then shifted, not away from the password problem, but toward a really unique way to solve it: he introduced the FIDO Alliance as the key to a standard protocol for changing the paradigm of passwords and putting them to rest.
For those of us who weren’t, or still aren’t, familiar with the FIDO Alliance, here is a brief explanation from their website:
“The FIDO (Fast IDentity Online) Alliance was formed in July 2012 to address the lack of interoperability among strong authentication devices as well as the problems users face with creating and remembering multiple usernames and passwords. The FIDO Alliance plans to change the nature of authentication by developing specifications that define an open, scalable, interoperable set of mechanisms that supplant reliance on passwords to securely authenticate users of online services. This new standard for security devices and browser plugins will allow any website or cloud application to interface with a broad variety of existing and future FIDO-enabled devices that the user has for online security.”
Michael continued the story by providing examples of how one standard can be implemented in unique ways, such as this year’s upcoming cell phones that will use fingerprint scanning for authentication instead of passwords. For me, the idea of simplifying my life while maintaining the level of security that lets me sleep at night is a future I can get behind. For more information on FIDO, I encourage you to check out their website at http://www.fidoalliance.org, as well as this article written in Computer Weekly a few months back that does a great job explaining the problem and the solution.
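To make the paradigm shift concrete, here is a toy Python sketch of the public-key challenge-response idea underlying password-less schemes like FIDO’s: the private key never leaves the device, and the server only ever stores a public key and verifies signed challenges. This illustrates the concept only, not the actual FIDO protocol (the real specifications cover registration, attestation, and much more), and it assumes the third-party cryptography library is installed.

```python
import os
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec

# Device side: a key pair is generated at registration; the private key never
# leaves the device. The server stores only the public key.
private_key = ec.generate_private_key(ec.SECP256R1())
public_key = private_key.public_key()

# Server side: issue a fresh random challenge for each login attempt.
challenge = os.urandom(32)

# Device side: sign the challenge. A real authenticator would first verify the
# user locally (e.g. via a fingerprint scan) before signing.
signature = private_key.sign(challenge, ec.ECDSA(hashes.SHA256()))

# Server side: verify the signature against the stored public key.
try:
    public_key.verify(signature, challenge, ec.ECDSA(hashes.SHA256()))
    print("authenticated")
except InvalidSignature:
    print("rejected")
```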
Tagged with: Fast IDentity Online, FIDO Alliance, Internet of Things, Michael Barrett
Fresh off our Best of Interop win in the Management & Monitoring category, we were up bright and early and ready for another day at Interop. Robert Soderbery, SVP & GM, Enterprise Networking Group at Cisco, kicked off the Interop Vegas 2013 keynotes with “Your Enterprise Network: Getting You Where You Want to Go”.
Rob talked about a major transformation happening today for business, IT, and the network. Surprisingly, he wasn’t talking about SDN (that was the second keynote). He was talking about applications that are network-centric, delivered by IT, and impacting the business in new and interesting ways.
And of course, Cisco can help. Rob gave an example of what he means: The Bellagio, part of MGM Resorts, is a Cisco customer and just launched a “connected mobile experience” for guests. Basically, when you as a guest walk onto the Bellagio property, the application automatically detects your mobile device, logs you onto the network, loads an app on your phone (I think), and engages you as a guest, offering different things you can do. Maybe a little Big Brother, but I don’t think people mind that so much these days, as long as it’s actually valuable and targeted. This kind of network-centric application enables a new guest experience, and MGM/Bellagio management talked about it as a “personal concierge for guests.” That sounds much better than Big Brother.
Marketing to customers in real-time based upon their location, their preferences and what they want to hear
If any of you have been to Las Vegas, you know just how in-depth and targeted the marketing can get. These companies track everything, and they don’t just track it; they actually use the info to market to customers based upon history, experience, and preferences. This new MGM/Bellagio app simply lets the hotel market to customers in real time based upon location, preferences, and what they want to hear. This is very interesting and, for this business, a very valuable way of taking the personalized advertising and selling they have always done to the next level, in real time.
Rob went on to talk more about the IT transformation Cisco sees going on now. For the last 25 years, IT has been about the back office. Now, it’s about the front office – customer facing, and making/building/distributing things around the world.
What will these changes and the technologies that enable network centric applications make possible for your business? And what role can you play?
Cisco’s Global IT Survey (some fascinating numbers)
78% – the network is more critical to delivering apps than a year ago
41% – the network is not ready for BYOD
38% – the network is not ready for the cloud (I wish there was more detail on this because “cloud” is certainly not one defined thing)
42% – only vaguely aware of the “network of things” (I think we all could have guessed this one)
Cisco’s vision in the midst of all this change: simple, secure networks with reduced TCO. And how will they get there?
“One plus one plus one”: one network, one management, one policy. Starting with a single data plane, IT can operate the network as a whole and have one place to see the relationship of the network to applications.
This is at the core of the Cisco vision to connect to people, to clouds, and to things.
And then, in an odd leap to the NBA, Kyrie Irving of the Cleveland Cavaliers showed up on stage. After being prompted by Rob, Kyrie said, “I’m always trying to improve my game and I think technology can help me do that. I’m always trying to connect with my fans.” I guess that might happen through network-centric apps…someday.
And then there was a live game of HORSE, which was a fun addition to a keynote, something to talk about, and nothing to do with networking.
More to come from Interop Vegas 2013!
Tagged with: Cisco, IT transformation, Robert Soderbery
In the 4th fight of the UCC (Ultimate Cloud Championship) we see two contenders from different backgrounds. You may ask why I matched these two up: CenturyLink/Savvis comes from the telco and data center cloud space, whereas Citrix comes from the virtualization and software space. The similarity between these two fighters, to me, is all about acquisitions and future investments.
CenturyLink is very interesting to me based on the list of acquisitions it has made over the last handful of years. As a fighter in the Cloud Championship, it certainly doesn’t hurt to start as the third-largest telecommunications company in the United States. To help maintain that position, CenturyLink acquired Embarq back in 2009; Embarq is one of the largest local exchange carriers in the United States, serving customers in 18 states and providing local high-speed data to business and residential customers. CenturyLink then acquired Qwest Communications in 2010 and the Bell Operating Company Qwest Corporation in 2011, which gave it a much bigger network and internet backbone data services. Finally, the acquisition of Savvis in late April really sealed CenturyLink’s cloud play. Savvis has been selling managed hosting, cloud computing, and colocation in more than 50 data centers across North America, Europe, and Asia. The internet backbone, plus all the local exchanges and connections direct into businesses, makes for a great and easy public/hybrid cloud play for CenturyLink.
Citrix, like CenturyLink, has also been swallowing up major pieces of the cloud puzzle to complete its offering portfolio and put itself in a great contender position. The key acquisitions for Citrix really started, for me, back in 2007, when it swallowed XenSource, the developer of the XenServer virtualization product built on the Xen hypervisor. From there, Citrix scooped up Vapps, VMLogix (an automation and management company for virtualization), EMS-Cortex (whose product is now sold as CloudPortal Services Manager), Cloud.com in 2011, and a handful of others in the cloud, VDI, and mobile space. The big question for me is: what is Citrix’s next move with Cloud.com? The site has been saying a “New Cloud is on the Horizon.” If I were a betting man, I would say it is a turnkey CloudStack deployment that enables some flavor of cloud interoperability, letting your private deployment scale out to Cloud.com.
You can see that both Citrix and CenturyLink have a lot going for them as real cloud contenders in the battle for the Ultimate Cloud Championship. I still question whether these providers are a little too late to really contend with heavyweights like AWS. However, the one thing they both have is great technology and company acquisitions that can really propel them, if leveraged correctly.
Read the other UCC match-ups:
VMware vs Azure
Rackspace vs Terremark
AWS vs Google
Tagged with: CenturyLink, Ultimate Cloud Championship