When it comes to IT management tools, industry analysts highlight two buying options: monolithic suites or disparate best-in-class tools. Over the last 20 years, the pendulum has swung back and forth between these two opposing approaches. However, both have major disadvantages.
If you want to drive a nail, get a hammer (don’t use your shoe). This philosophy is at the heart of the best-in-class approach. Each team is different and has different requirements, so let them identify and buy the tool that best fits their needs. Forcing teams to use mediocre tools hurts their productivity and effectiveness.
However, allowing every team to have its own tool can result in a very fractured IT management environment. No one can see the whole picture. The result? Finger-pointing, war rooms, and slow problem resolution. Vertically organized silos are misaligned with the business services that span technology domains. It can also be expensive to maintain all these independent tools and keep everyone fully trained.
Using this approach, it’s not unusual for enterprises to have anywhere from 10 to 40 different management tools.
Single-vendor suites claim the advantage of consistency. A single vendor designs and builds the suite, creating integrated workflows and a common user interface. A single suite also means easier maintenance and a lower training burden. You have “one throat to choke” when there is a problem, and you get everything you need in a single, easy-to-understand quote.
Have you stopped laughing yet?
In reality, this is not what happens. Most mega-vendors (the Big-4) have cobbled their suites together through multiple acquisitions. Often, the integration is only surface deep. These vendors have spent years trying to integrate user interfaces, database structures, and workflows. Their commitment to last year’s hot acquisition quickly cools as they move on to the next big thing. The promised integration frequently requires large consulting engagements, and even though you bought into the whole suite, you still need to buy an additional module to get the latest feature. And then you need to upgrade – how long will that take?
It’s expensive, confusing, and you wind up with mediocre tools that are only slightly integrated. The Big-4 is where good products go to die.
A Better Approach
At ScienceLogic, we believe there is a third (and better) approach. Your teams should be able to access modern, best-in-class specialty tools of their choosing without the resulting operational chaos of independent domain silos. How do you get the advantages of a best-in-class approach with the operational efficiency of a suite approach?
What you need is a flexible, hybrid IT management platform that sits at the heart of your operational environment. An easily configurable and extensible platform that:
Supports all of the major technologies “out-of-the-box” (network, system, storage, cloud, virtualization, SDx, UC, etc.)
Provides role-based dashboards of your IT services and underlying infrastructure, so different teams get specialized views
Provides a wide set of delivered integrations with common best-in-class management tools, so teams get to use the best tools available
Provides a powerful and flexible open interface to add support for any additional specialty tools that your teams require
Helps you automate common administrative actions and operational workflows with adjacent management tools to streamline your operational processes
Allows you to adapt quickly to new technology, tools, and requirements
Can you afford to continue struggling with your chosen “best-in-class” or “suite” approach? Or do you want to learn more about how ScienceLogic’s hybrid IT platform approach is helping hundreds of organizations overcome the challenges of these diametrically opposed approaches?
It’s no surprise to anyone working in IT that 2015 was marked by significant strides and changes in the public and hybrid cloud landscape. What does 2016 have in store? Check out our predictions below.
1. The Electorate Have Spoken – Azure and Office 365 for the People
Microsoft Azure will reach critical mass, gaining enterprise traction and market share from IaaS cloud leader Amazon Web Services (AWS), with the introduction of a slew of new services to compete with the innovation that AWS has displayed. We’re already seeing the shift in mindshare from a number of our managed service provider partners, as well as enterprises. Closely associated with the launch of packaged services around the Azure platform is a rapid uptick in Office 365 adoption. For CSPs who remember the old ASP era and fear being disintermediated in the SaaS boom, Office 365 offers a new opportunity.
In turn, Microsoft is adopting a customer-lifetime-value approach, with recurring revenue over time versus one-time license deals, and with good reason. According to industry analysts, SaaS is already the fastest-growing spending category for enterprise IT departments. That, in turn, is driving a need for greater agility in IT tools, since the infrastructure is not owned internally, and a change in mentality as IT teams become service orchestrators.
2. Bimodal Gets a Common Interface
Dual-track approaches like Bimodal IT will become more and more prevalent as organizations try to become more agile. This will unleash a new wave of innovation within many companies. Not all deployments will be successful, though. Organizations that try to simply bolt on a high-velocity, experimental application track will struggle if it isn’t part of a larger organizational transformation. Bimodal approaches require cultural, structural, and technical evolution. In particular, a tools transformation is required to ensure that the new high-velocity applications don’t simply become new independent silos for operations to manage. Bimodal organizations will need to build a common interface to manage both types of applications.
3. The Year of Software Defined Everything (SDx)
This is the year that Cisco ACI and VMware NSX will take off. These technologies will come out of the test lab and be deployed across many data centers. Cisco will take the early lead here based on the critical mass of Nexus switches deployed as part of the natural refresh cycle. Cisco will also leverage its massive enterprise and channel presence to push ACI. SDx creates a massive opportunity to make IT more agile, but it will require a simultaneous reevaluation of operations tools. Most legacy tools are blind to the new paradigms of ACI and NSX. Modern tools will be required to take maximum advantage of SDx.
4. The Cloud is Dead – Long Live the Federated Cloud
Relying on a single cloud is no longer the norm. More and more enterprises are embracing a mish-mash of SaaS, IaaS, PaaS, legacy infrastructure, private cloud, hosted, and colocated execution environments. Amazon Web Services, Azure, and SoftLayer are quickly becoming the default enterprise multicloud trifecta.
The greatest need we see is the management of these multicloud environments through a single federated pane of glass. Although it is much smaller than the two leaders in the IaaS space, we envision SoftLayer achieving critical mass as IBM invests heavily in the platform this year. We are predicting continued consolidation in the cloud space in 2016, as another significant top-six cloud player exits the business (as HP and AT&T did in 2015). We do not expect Google or VMware to do much upstaging in this area, given the immaturity of their APIs and lack of traction in the space.
5. Cloud Wars – The Empire Fizzles
ScienceLogic is naming 2016 the “Year of the API.” It has become extremely evident that the way forward in the world of interoperability and integrations is through APIs. For those vendors that have not been API-centric in their approach to tools creation – i.e., the Big 4 monitoring framework providers – it will be a race to the bottom, as most traditional players battle to keep up with rapid changes in cloud APIs. For those with inadequate APIs, it will be a case of death by 1,000 APIs, as integrations become a nightmare of services work – something highly unappealing in the era of SaaS and agility. Winners and losers will be defined by the pace of API adoption and the ability to deploy easily in an otherwise complex ecosystem of technologies.
2016 is already shaping up to be a very busy year for the hybrid cloud industry. Do you have any industry predictions for the rest of the year? Tell us in the comments below!
This is a wild time to be in Enterprise IT. Business revenue is becoming more and more dependent on IT. Technology options are proliferating. The Business Units (BUs) want to move faster but often feel limited by internal technology. IT needs to adapt to this new reality. Can we borrow any lessons to accelerate this change?
Many people think of Service Providers and Enterprise IT as very different models. Historically, this was true. However, in a world that is moving toward “Everything as a Service,” Enterprise IT can benefit from some of the lessons that MSPs have learned.
Maybe Enterprise IT should start to think of itself as a true Service Provider.
Why? Consider the following:
Your users are actually customers – MSPs need to fight for each and every client. They continually reevaluate their offerings and services because they know that their clients have other options.
Realize that you now face competition – Historically, Enterprise IT held a monopoly. The business was limited to the technology that IT provided and the pace at which IT was willing to innovate. With cloud services (IaaS, SaaS, and PaaS) available, the BUs now have many alternatives. Shadow IT is an outgrowth of this phenomenon.
How do Enterprises start thinking like an MSP?
Build a compelling portfolio of services – Rather than fight Shadow IT, embrace the concept. Identify the forces that cause users to look outside. Is it technology? Is it speed? Is it simplicity? Is it innovation? Wrap new technologies into a service that you can offer to your clients, but do it better: Shadow IT, with governance, compliance, management, and security.
Be multi-tenant – There isn’t a one-size-fits-all model here. With the emergence of Bimodal IT, treat each application and BU as a different customer. Each client has different needs so you should provide custom dashboards and custom reports to each. Continuously demonstrate your value. Leverage automation and orchestration to minimize the overhead of maintaining many clients.
Drive innovation – Don’t wait for your users to ask for new technology and services (by that time it will be too late). As the technologist, you need to anticipate the demand. Become an entrepreneur and evangelist. Build your business.
Market your services – Don’t rely on a “build it and they will come” attitude. You are competing against huge marketing budgets. You need to advertise and sell the new internal services. Yes – you have a home team advantage but you still need to demonstrate your unique advantage.
Become a trusted advisor – As business revenue becomes more and more dependent on technology, there should be a technologist at the table. This evolution is an ideal opportunity for IT to become the primary source of technical innovation. IT can use this transformation to be viewed as part of the revenue stream rather than just a cost center.
With a long history of working with both service providers and enterprises, ScienceLogic has developed a platform that is well suited to the new reality of “Enterprise as a Service.”
To learn more about our industry-leading monitoring solution, check out this short video.
Isn’t it great when your work is both fun and interesting? NexGen Cloud Conference fits that bill.
On Thursday, December 10th, ScienceLogic’s Founder and CEO will be speaking on a panel at The Channel Company’s NexGen Cloud Conference in San Diego. (If you haven’t registered yet, better get on it!)
The panel, titled Disruptors and Game Changers: Meet Today’s Hottest Emerging Cloud Vendors, couldn’t describe ScienceLogic any better. Our spectacular growth in 2015 solidifies ScienceLogic as a key player in the Cloud Vendor space. However, the ScienceLogic platform can see much more than just cloud-based workloads.
This session at NexGen will focus on new trends in Cloud products. Below is the description of the panel per the NexGen Cloud Conference website:
A new generation of cloud vendors are redefining the way hardware systems and software applications are developed and delivered. They are also creating new market opportunities for solution providers to add value and help their customers achieve their business objectives. This executive roundtable session will discuss the changing dynamics of the vendor/partner relationship, and identify the keys to meeting the changing needs of customers in an increasingly competitive marketplace.
In the latest 7.7 release, the “tagging” capability has been expanded beyond interface tags to include custom attributes. Custom attributes are essentially key-value pairs that can be aligned to our most popular resource types, the top two being the device and interface entities.
Conceptually, this will enable users to either manually or – my personal favorite – automatically “tag” their inventory with data to filter, slice, or dice that inventory in new and exciting ways.
You may ask, “how could tagging inventory be exciting?”
Consider this example: A customer is using auto-scaled EC2 instances for an e-commerce application. Black Friday and Cyber Monday are looming on the horizon, and great care has been taken to configure AWS to automatically scale as web traffic increases. But how do you manage such a dynamic resource in an organized way? This is where using your AWS tags in ScienceLogic becomes a significant enabler.
Your AWS tags, with a small tweak to the Dynamic Application configuration, now become custom attributes in ScienceLogic. These custom attributes can be used in device group rules to dynamically populate groups. These groups display a map visualization of your web application and its relationships – exactly what the web application owner wants to focus on during the busy holiday shopping weekend, while the rest of the country is either shopping or watching football. There are also auto-scaling databases, middleware, etc., and each application owner wants their own view of their world.
As usage increases, AWS automatically fires up more EC2 instances, increasing your capacity to handle load. ScienceLogic constantly updates the EC2 inventory and collects performance data for each EC2 instance. Each EC2 instance created on the fly carries its own unique tags, so as instances are discovered in ScienceLogic, they are dynamically placed in the appropriate group for each business owner. In addition, these groups can also drive membership in ITSM policies.
As a result, users have a near-real-time view of their most critical applications as they auto-scale up and down. Proactive notifications at the element level, along with health and risk indicators, ITSM integration, and optional run book actions, make ScienceLogic customers’ holidays that much better!
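The tag-to-group mechanics described above can be sketched in a few lines. This is a hypothetical illustration, not the actual ScienceLogic device-group rule engine: the instance records, rule format, and group names are all invented for the example.

```python
# Hypothetical sketch: how cloud tags could drive dynamic group membership.
# The rule format (group name -> required tag key/value) is illustrative.

def build_groups(instances, rules):
    """Assign each instance to every group whose tag rule it matches."""
    groups = {name: [] for name in rules}
    for inst in instances:
        for name, (key, value) in rules.items():
            if inst["tags"].get(key) == value:
                groups[name].append(inst["id"])
    return groups

# Auto-scaled EC2 instances, each carrying the tags applied at launch.
instances = [
    {"id": "i-001", "tags": {"App": "ecommerce-web", "Tier": "web"}},
    {"id": "i-002", "tags": {"App": "ecommerce-web", "Tier": "web"}},
    {"id": "i-003", "tags": {"App": "ecommerce-db", "Tier": "db"}},
]

# One rule per application owner's view of the world.
rules = {
    "Web Frontend": ("App", "ecommerce-web"),
    "Databases": ("Tier", "db"),
}

print(build_groups(instances, rules))
# {'Web Frontend': ['i-001', 'i-002'], 'Databases': ['i-003']}
```

When a newly launched instance appears in inventory with matching tags, re-running the rules places it in the right group automatically, which is the behavior the holiday-scaling scenario depends on.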
Greetings from the beautiful Gold Coast in Australia where this week ScienceLogic is participating in Microsoft Ignite Australia 2015.
Microsoft has once again put together a wonderful event that brings together thousands of technical professionals to learn and network with one another. The agenda is packed with 4 days of sessions that cover both new and existing items from across Microsoft’s product and service portfolio.
Two of the major themes at this year’s event are Microsoft’s cloud services, Azure and Office 365, and the upcoming refresh of many of their long-standing products that are due out in the coming year.
Yesterday I attended a session where Ewan MacKellar, a 10-year veteran of Microsoft Services, spent more than an hour demonstrating techniques for troubleshooting the most common issues that impact a user’s Office 365 experience. From latency and DNS to network configurations, it is clear that supporting cloud services requires just as much visibility into the network and supporting systems as ever before.
Tomorrow, I am looking forward to getting my “First Look” at SharePoint Server 2016!
Feels like just last week I was writing about an outage for one of the major players on the web. Oh, that’s right, it was just last week! Well, it’s happened again, but to a different, yet equally powerful and technically savvy, player: Google.
As a self-professed Google Drive and Docs lover, I found myself without a mainstay of my daily life for a bit today, when I couldn’t access and edit a Google Doc I was working on. After searching a bit (ironically enough using Google) online, I quickly realized that the service was down for many.
What’s interesting about this and last week’s outage, is that they happened to tech heavyweights. Maintaining a highly complex and dynamic infrastructure is tough and it takes some real talent and knowledge paired with great tools to do that. But you don’t have to figure it all out alone. If you’re interested in a list of the 20 tools you should consider to make sure your infrastructure stays up and running, check out this whitepaper.
On this “Outage Déjà vu” day I salute the infrastructure heroes who go in every day armed to battle the onslaught of network outages, storage overflows, and corporate users demanding ever more bandwidth to watch kitten videos!
Today, many knowledge workers found a mainstay of their productive lives unavailable – Facebook. Yes, to the horror of over a billion users, the world’s most popular social networking website was down for over 30 minutes despite having one of the most sophisticated IT infrastructures in the industry.
(Image via TechCrunch)
A company so advanced it redesigned its own software to increase energy efficiency should be nearly immune to downtime, or at the very least, should be able to rapidly recover – right? The reality is that this type of occurrence is not unique to Facebook. In fact, a large number of tech firms struggle to succeed at the very difficult task of ensuring availability, resiliency and performance of IT infrastructure.
In 2014 alone, we saw a number of notable IT failures by tech giants:
On January 10 Dropbox… dropped. And it stayed that way for nearly the entire weekend.
On January 24 Gmail became unavailable for nearly 30 minutes. Certainly not the colossal outage of Dropbox, but long enough to cause heart palpitations for Gmail users.
On May 16 designers were left with free time on their hands when Adobe’s Creative Cloud service went down for nearly 28 hours.
On June 19 earnings and productivity for several Fortune 2000 organizations experienced a major boost when Facebook went down. An experience similar to the one today.
On June 29 many corporate workers experienced something very rare – distraction free work. Thanks to Microsoft’s Exchange Online Service going offline for about nine hours.
And that’s only halfway through the year! So, what can we learn from these notable failures? That all is lost and ensuring good IT service is impossible? No, not really. I think it’s more that IT infrastructures are incredibly complex. Understanding how all of the different elements in your IT stack relate is a must when providing good service. It isn’t just a must because users get upset when IT services are down; it’s a must because downtime can cause serious financial impact to the business.
Consider a rather crude and not very accurate cost estimate for Facebook. As of March 2014, Facebook made roughly $15,000 in revenue every minute. If one argued (again, without a lot of accuracy) that all of that revenue were subject to the availability of their service, a 40-minute downtime cost Facebook $600,000.
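The back-of-envelope math is simple enough to check in a couple of lines (the per-minute figure is the rough public estimate cited above, not an exact number):

```python
# Crude downtime cost estimate: revenue per minute times minutes down.
revenue_per_minute = 15_000  # rough public estimate, March 2014
outage_minutes = 40          # approximate length of the outage

cost = outage_minutes * revenue_per_minute
print(cost)  # 600000
```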
To better illustrate the complexity of today’s IT infrastructure, let’s take a simple web application. If a user calls to the help desk complaining that they can’t get to their web application, is it the application itself that is experiencing a hiccup? Or is it the OS? How about the virtual machine it is running on? Could it be the bare metal infrastructure the VM is running on top of? Maybe it’s the network connecting the application to the storage it is using? What about the network connection between the end user and the application? See, it’s complex.
As another VMworld kicks off, many sessions focus on the move to hybrid cloud. The pathway forward for VMware is clearly stated throughout the sessions and the show floor: “Ready for Any.”
The theme of this year’s VMworld comes with numerous enhancements to the vCloud Suite and the rapidly changing vCloud Air service. Some of the key highlights and takeaway thoughts are below:
The virtualization leader launched the VMware vCloud Hybrid Service, later renamed vCloud Air, late last year. VMware’s public cloud crossed the threshold from Niche to Visionary (according to Gartner) in this year’s Magic Quadrant on the strength of VMware’s dominance in the private data center, which provides a large, built-in market for hybrid services. The message was echoed in every session I attended: hybrid workloads are real for VMware. With vCloud Air, VMware has repositioned itself from the trending “lift and shift” message to one of extension.
“Our Data Center is your Data Center with SDDC (Software Defined Data Center).” To be honest, I think it is a brilliant play for VMware. They are still the #1 hypervisor in the world, and ScienceLogic customers have almost half a million VMware workloads under management using our software, so making that extension play into vCloud Air seems like a very logical one. Between the enhancements announced today for the vSphere 6.1 Update (significant scale enhancements, multiplying almost all previously defined limits) coming in the next few weeks and the rapid development cycle of rolling updates to vCloud Air every 15 days, VMware is making a staggering effort to make the extension more of a reality each and every day. It is by far the best play for VMware. The challenge I see for VMware is the inverse of the one facing AWS. AWS is, in technology and services, leaps and bounds ahead of the closest cloud provider. Example: on June 11, 2012, AWS released EC2 auto-scaling features leveraging identity and access management controls. Today, VMware’s vCloud Air road-map sessions revealed that exact same type of feature set, discussed for release in coming builds.
Three years may not seem that far behind, but in the technology world that is an eternity. If VMware wants to continue to win the hearts of the enterprise and get them drinking the vCloud Air SDDC extension story, they really need to deliver, and deliver now. VMware is providing up to $600 of free credits for vCloud Air to get you to try it and see if the Kool-Aid is for you: http://vcloud.vmware.com/
I had the pleasure again this year of attending Cisco Live last week in San Francisco. Last year, I focused on attending ACI sessions and wrote a short blog on what I had learned about ACI from Cisco Live. Back then, very few people at the show seemed to know what ACI was. In contrast, this year everybody knew what ACI was, and many were investigating the technology for a future rollout. ACI sessions were packed; many that I attended had wait lines for those who hadn’t reserved a spot.
This year I thought I would write about our experiences working with ACI over the last two months. As a Cisco Partner, we built a monitoring solution for Cisco ACI. ScienceLogic provides the most comprehensive monitoring tool in the industry, with visibility into the entire IT stack; support for ACI is just one more piece of the complex IT puzzle.
Some of the key items that really helped us develop a solution for ACI are as follows:
dCloud – We started our project using Cisco dCloud, a really cool virtualized lab environment with support for many technologies. For ACI specifically, there are 7 different environments to select from. Full access to all components is provided via a VPN, which enabled us to develop 80-90% of our solution. We needed to go to a physical system only when we needed the actual fabric in place so that the attached endpoints could be discovered. This might also have been possible with the simulator, but not in the dCloud environment, since LLDP from the server to the leaf switch is not supported there, and LLDP is needed for the leaf switches to discover the endpoints.
ACI Simulator – The ACI simulator is a fully functioning ACI system that simulates the APIC with two leaf and two spine switches. The simulator was fully functional and really enabled us to quickly learn the ACI technology as well as the API. The simulator leverages the production APIC software, so what works on the simulator works on production APICs, and we had no trouble moving from the simulated environment to a physical environment. The simulator also provides a mechanism to insert faults and alerts, which really helped in integrating this aspect into our monitoring system.
APIC and the Object Model – The APIC is the brains of ACI: the repository of the very complex but well-documented management information tree, providing centralized access to all fabric- and tenant-related information. The APIC offers very powerful scoping and filtering capabilities that make it easy to get the exact data you need. Scoping lets you specify how much of the tree a query covers: for example, you can query an entire subtree by identifying a class name and requesting all the children under that class, then further reduce the scope by specifying a subtree class. Finally, you can apply a filter to pick only the objects that meet any criteria you specify.
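As a rough sketch of how those scoping and filtering capabilities combine, here is how a class-level query URL could be assembled. It follows the documented APIC REST query parameters (`query-target`, `target-subtree-class`, `query-target-filter`), but the APIC hostname, the tenant name, and the helper function itself are illustrative, not part of any shipped tooling.

```python
# Sketch: assembling an APIC class-level query URL with optional scoping
# and filtering. Hostname and tenant name below are hypothetical.
from urllib.parse import urlencode

def class_query_url(apic, cls, scope=None, subtree_class=None, filt=None):
    """Build an APIC class-level query URL with optional scope and filter."""
    params = {}
    if scope:                     # "self", "children", or "subtree"
        params["query-target"] = scope
    if subtree_class:             # narrow the returned subtree to one class
        params["target-subtree-class"] = subtree_class
    if filt:                      # property filter, e.g. eq(...)
        params["query-target-filter"] = filt
    qs = urlencode(params)
    return f"https://{apic}/api/class/{cls}.json" + (f"?{qs}" if qs else "")

# Query the subtree under tenants, keeping only application profiles for
# a (hypothetical) tenant named "ecommerce":
url = class_query_url(
    "apic.example.com", "fvTenant",
    scope="subtree",
    subtree_class="fvAp",
    filt='eq(fvTenant.name,"ecommerce")',
)
print(url)
```

In practice you would send this URL with an authenticated HTTP client; the point here is just how the scope, subtree class, and filter narrow one query from "everything" down to exactly the objects you need.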
Visore – Visore is an object browser that lets you retrieve objects by class name or class type. This tool was critical in developing the user stories for our development team. It lets you browse the management information tree, moving up and down the tree as well as exploring all the relationships between objects. The screenshot below shows the Visore browser. In this case we were looking for the Client Endpoints object class, and Visore returned the 3 instances of that class. The ? icon instantly brings up detailed documentation about the object, and clicking the green parentheses shows either all the children of the object or its parent.
API Inspector – Since the APIC GUI relies on the APIC API, the API Inspector lets you see exactly what the GUI is querying to display or update data. This was another critical tool that enabled us to quickly figure out which objects were being used to drive the APIC displays. For example, the first screenshot shows the APIC displaying the virtual machines that make up an EPG, while the second shows the API Inspector with the requests and responses that were sent to the APIC to generate that screen. This is incredibly helpful when trying to better understand the overall object model and which objects represent which data in the APIC GUI.
SDKs – Cisco does offer a Python SDK. However, we did not make use of it due to its memory footprint; instead, we used the REST API directly.
In summary, working with ACI was a pleasure. This is one of the most well-done APIs I have worked with, and the API, along with the tools above, made supporting this complex technology relatively easy. I have to give Cisco real credit for not only building what seems to be a fantastic product, but also providing all the tools needed to easily integrate it with other products.
Our blog’s authors aren’t just experts in their fields; they’re also key contributors to our world-class monitoring platform. If you’d like to see how these topics play out in a real-world setting, please register for a free, no-pressure demo: