So, you’ve mastered how to leverage ServiceNow’s actionable data to resolve incidents and become a trusted source for IT assistance and resolution. That’s a respectable status to reach.
But, imagine if you had a real-time feed of performance data to your ServiceNow platform. Your root cause identification and time to resolution stats would be off the charts. You would become the trusted source for IT assistance.
You may even become known around the office as Usain Bolt – because you are just that fast. Even on your worst day, you’re still the best.
We know our customers benefit from having organized, actionable data at their fingertips. With this in mind, ScienceLogic has ramped up support for our customers’ ServiceNow incident, event, and CMDB requirements.
Our integration with ServiceNow is designed to present the information that matters in an easily consumable way. This allows you and your team to spend less time making sense of the data, and more time addressing incidents.
How can our integration help you reach Usain Bolt status? Here are some highlights:
Cut Costs by Reducing Incidents in ServiceNow – Events are automatically correlated and de-duplicated, and then incidents are logged in ServiceNow based on these cleaner events. Cleaner events mean fewer infrastructure-related incidents in ServiceNow. Fewer incidents enable customers to pinpoint problems faster, ultimately leading to cost reduction.
Enhanced Coverage to Support the Cloud – Our monitoring coverage is both broad and deep, making it easier than ever to see your ServiceNow coverage across cloud-based and hybrid IT infrastructures.
Support for Emerging Technologies – Whether you’re adopting new technologies or leveraging highly reliable ones like satellite systems (or both), ScienceLogic’s flexible and easily configurable platform provides the support your infrastructure needs, and the data feed to ServiceNow to ensure consistent performance.
Added Intelligence and Increased Accuracy of ServiceWatch – By automatically logging incidents or events as they happen in the infrastructure, the ScienceLogic integration ensures that ServiceNow’s ServiceWatch more accurately reflects the health of IT services in real-time.
Drive Operational Efficiency with Auto-applied Monitoring Policies to Configuration Items (CIs) – Monitor the performance of every CI added to the ServiceNow CMDB by automatically applying the right monitoring policies to every CI discovered using ScienceLogic’s integration. This ensures that the right metrics and analytics are applied to the correct technologies every time, increasing operational efficiency.
Accelerate and Streamline ServiceNow Implementation and Management – Our single sign-on platform allows users to consolidate and integrate up to 14 different monitoring solutions, drastically reducing implementation complexity with ServiceNow.
Dynamically Enhanced CIs in the ServiceNow CMDB – Now CIs have added configuration, intelligence, and analytics information in the ServiceNow CMDB. That means ScienceLogic updates specific CI attributes in near real-time, as changes happen within the infrastructure. Our platform will intelligently discover and add CIs to the CMDB, a capability that traditional discovery tools simply lack, giving ServiceNow customers a complete representation of their infrastructure in their CMDB.
Expanded Customer Support Investment – We know that even you, the trusted source for IT assistance, may have questions, requests, or need support at times. That’s why we have enhanced our customer support for this integration. We also have a dedicated development and product management team tasked with deepening the functional capabilities of the ScienceLogic platform with ServiceNow.
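To make the first highlight concrete, the correlation and de-duplication step can be sketched in a few lines of Python. This is a hypothetical illustration, not ScienceLogic’s actual implementation: events sharing a device and event type are collapsed into one record with an occurrence count, so only unique problems become ServiceNow incidents.

```python
def deduplicate(raw_events):
    """Collapse duplicate events by (device, event_type) key.

    Keeps one representative event per key with an occurrence count,
    so only unique problems become incidents downstream.
    """
    active = {}
    for ev in raw_events:
        key = (ev["device"], ev["type"])
        if key in active:
            active[key]["count"] += 1          # duplicate: bump the counter
        else:
            active[key] = dict(ev, count=1)    # first occurrence: track it
    return list(active.values())

events = [
    {"device": "router-1", "type": "link_down"},
    {"device": "router-1", "type": "link_down"},   # duplicate burst
    {"device": "db-01", "type": "disk_full"},
]
unique = deduplicate(events)
print(len(unique))  # 2 incidents opened instead of 3
```

In practice the dedup key and correlation rules would be far richer, but the principle is the same: fewer, cleaner events in means fewer incidents out.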
Want to learn more about our integration with ServiceNow? Watch the video below for a complete overview.
Industry pundits have long proclaimed that capacity planning is dead. After all, we can now add capacity on demand by bursting into the cloud when demand rises. Are you worried about the impact of Cyber Monday on your eCommerce site? AWS, Azure, and others make the issue more about budget than available resources. And it’s often a better financial option than investing in full-time capacity planners.
Today, organizations place greater emphasis on real-time capacity analytics.
What’s the difference between capacity planning and capacity analytics?
Traditional capacity planning models the long-term needs of the business. Will I have enough physical space in my data center next year? How will mass adoption of our new application impact annual power and cooling requirements? Capacity planning tends to focus on data center consolidations/relocations or major technology uplifts.
On the other hand, capacity analytics focuses on two critical items:
Avoiding disruption to current applications and services – Do I have enough server, network, and storage resources to meet the demands of my customers today?
Making better use of existing capacity – Can I allocate existing bandwidth or idle VMs to higher priority needs? How can I avoid wasteful spend associated with over-provisioning?
Let’s think of this in terms of personal financial management…
Capacity planning is akin to saving for college tuition, planning to buy a house, or knowing if you can retire at 60 or 70 years old.
Capacity analytics equates to managing a monthly household budget. Can you afford to pay the rent and other household bills on time? Do you have enough left over to splurge on that new 4K television?
Organizations today concern themselves more with understanding real-time demand than managing supply limitations. Capacity analytics takes a modern approach towards tackling this challenge. It puts predictive analytics at the fingertips of operational staff.
Consolidate Capacity Tools
There are hundreds of capacity management tools on the market today. However, IT teams often use different tools for monitoring server, network, and storage resources. This forces them to export that data into a centralized data warehouse, deploy another set of agents to collect data for capacity planning purposes, or both. Once extracted and normalized, capacity teams can analyze the (out-of-date) infrastructure data in spreadsheets or more sophisticated capacity planning tools, and then make decisions about future capacity needs.
This antiquated approach to capacity management is cumbersome and time-consuming. More importantly, it’s not as relevant to the immediate needs of the business.
ScienceLogic has solved the need for centralized, real-time capacity analytics. A single platform provides many regression methods (exponential, linear, logarithmic, seasonal, etc.) for understanding real-time capacity demand. Planners can even establish their own desired algorithm within the system and run it against any data set. There’s no need to go outside the ScienceLogic platform for capacity analytics.
Operations teams don’t need to be capacity experts. Built-in reports show which resources will most likely expire in the next 90 days. They also reveal unused physical, virtualized, converged, and cloud capacity across various platforms. For example, you can see which VMs are sitting idle because they’re no longer serving active IT projects.
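As a rough illustration of the kind of forecasting behind such reports (a sketch of one simple approach, not the platform’s actual algorithm), a least-squares linear regression over recent utilization samples can estimate how many days remain before a resource is exhausted:

```python
def days_until_full(samples, capacity=100.0):
    """Fit a least-squares line to daily utilization samples and
    estimate how many days remain until capacity is exhausted.

    samples: one utilization reading per day (e.g., percent used).
    Returns None when utilization is flat or shrinking.
    """
    n = len(samples)
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(samples) / n
    num = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, samples))
    den = sum((x - mean_x) ** 2 for x in xs)
    slope = num / den
    intercept = mean_y - slope * mean_x
    if slope <= 0:
        return None  # no growth trend, so no projected exhaustion
    full_at = (capacity - intercept) / slope   # day index where the line hits capacity
    return max(0.0, full_at - (n - 1))         # days beyond the most recent sample

# A disk growing about 2% per day from 60% full:
usage = [60, 62, 64, 66, 68, 70]
print(days_until_full(usage))  # 15.0 days of headroom left
```

A production platform would layer exponential, logarithmic, and seasonal models on top of this, but even a simple linear fit turns raw samples into an actionable "expires in N days" signal.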
Many channel partners I know claim to “own” the customer relationship. That is their value add to their vendors. But do they really? Or will the customer drop them in a heartbeat for an extra five points of discount?
I have seen resellers wine and dine their customers until the exhausted IT managers were bleary-eyed from being up all night in Las Vegas. Is that really how technology decisions are made? It would be naïve to say that no technology buying decisions are made that way. But I’d wager that the party hosts are not getting as much bang for the buck as they think they are. Their employers certainly are not getting their money’s worth.
So how do VARs, solution providers, or system integrators lock in their customers? Services, that’s how. And a strategic, mission-critical managed service works best.
A recent survey of about 600 channel partners by CRN bears this out. The study shows that strategic service providers demonstrate many key advantages over their Neanderthal channel brethren. Strategic service providers:
Grow revenue faster, with one-third more revenue overall
Have deeper, more strategic client relationships
Have healthy financial statements (and thus higher company valuations)
These strategic service providers are right down the middle of the ScienceLogic fairway. Our partner program, called ChannelLogic, helps VARs grow into solution providers, and solution providers into strategic service providers.
How does a VAR or solution provider become a strategic service provider? ChannelLogic can show you.
First, offer your customers the option to purchase perpetual or subscription licenses. This gets you in the door. If a customer purchases the license, they can still hire you to manage the ScienceLogic platform. You can create customized reports and dashboards, and if you are really good, critical integrations as well. Or offer ScienceLogic as a managed service: we can show you how to create a managed service, or, if you already offer one, how to increase the monthly recurring revenue.
A second way your company can become a strategic service provider is to provide or protect the customer’s crown jewels: its core IT infrastructure. Without the core plumbing, no one gets applications or data, and productivity grinds to a halt (in the old days at Oracle, the email software was updated on Friday afternoons, and you could set your watch by the resulting standstill). If we could show you how to improve your performance assurance, reduce your mean time to repair or replace, and even predict when your current infrastructure will need to be replaced or upgraded, wouldn’t that make you indispensable?
Lastly, ScienceLogic’s platform can be integrated with many other products, such as ServiceNow, Nutanix, and SAP. These integrations create a stickiness that only custom code can provide, without requiring you to reopen the code for every new version.
I had an inside rep who worked for me who would tell his prospects, “I am going to make you a hero.” What he meant was that the IT manager’s employees would be so happy with the new system they were buying (a telephony system, in this case) that they would declare the IT person a hero for improving their work environment and boosting their productivity. For the most part, heroes they became.
I look at these strategic service providers as superheroes. More revenue, more strategic services, more customer loyalty, higher customer satisfaction and larger company valuations. Sounds super-human to me.
Steve Kazan is the Senior Director of North American Channel Sales at ScienceLogic. He has developed and grown B2B Technology channel programs for over a decade. For more information about ChannelLogic, click here.
As Jeremy mentioned in this blog post last week, one of the most challenging parts of a Product Manager’s job is prioritizing the work. Dave Link, ScienceLogic CEO, has often quipped, “There is no shortage of great ideas!” I have found this is indeed the case. Rarely a week goes by when someone doesn’t share a cool idea for an enhancement or new feature. Unfortunately, we are like everyone else in that we have a finite amount of resources to get work done.
In the Thunderdome, the product managers work together to best allocate the resources we have, all the while mentioning how we could always use more. We pit projects against one another based on metrics like total addressable market, revenue and customer impact, level of effort, and resource availability. We also must consider feedback from stakeholders like executives, engineering, support, and our customers. Although we know it is impossible to give everyone what they want, we strive to find the right balance between building new capabilities, enhancing existing functionality, fixing bugs, refactoring, and paying off technical debt.
When all is said and done, resource allocation is a zero-sum game. When you assign resources to one project, those resources are no longer available to work on another project. During the PM Thunderdome we battle it out to find the right balance between all the competing priorities, all the while knowing that each time a project wins and resources are aligned, we instantly have fewer resources remaining to work on everything else.
I often refer to our resources as “one pizza.” You can decide how big a slice to cut, how many slices to cut, and how many slices each person gets, but you can’t change the fact that you have only one pizza. If you cut more slices, each slice gets a little smaller, but you can give a slice to more people. If you cut fewer slices, each slice gets a little bigger, but there are fewer slices to go around. In reality, it is a little more complicated than slicing pizza because all the slices are not the same and all the people wanting pizza need different portions.
Some projects are larger than others and the resources all have different skill sets. But regardless of how you slice it, it is a fixed amount of pizza: one. And if you are thinking there might be some leftovers that you can save for another day, then you clearly have not had the pleasure of attending a ScienceLogic pizza party, where there are never any leftovers!
One of the most challenging parts of product management is determining the order or priority of features and projects. Like anyone else, we have a fixed amount of resources that we must use wisely to get things done. In a recent internal discussion, the Product Management team was asked, “How do projects get prioritized and resourced?”
I am a big fan of MMA, so I joked that these decisions were made in “The Octagon.” And if this were true, it would make my life much simpler as I have a few pounds and inches over most of the other PMs. My colleague John, on the other hand, had an even better response.
PM Thunderdome: Two projects enter, one project leaves!
John and I are both big movie buffs, so I immediately knew the movie reference and fell in love with this idea. It is the perfect analogy for our process. For those readers who aren’t familiar with this reference, let me explain.
In the 1985 movie classic Mad Max Beyond Thunderdome, there is a legendary fight scene that takes place inside a giant metal cage called the Thunderdome. The Thunderdome is used to keep order in Bartertown by bringing disagreements to a fast and definite end. The rules? Well, there are no rules, except that it is a fight to the death: two combatants enter the arena, but only one leaves.
The scene pits the hero, Mad Max, against a larger, stronger opponent, Blaster. The scene also features Tina Turner in her role as Aunty Entity, as she orchestrates the battle to regain control of “her” town.
Now, this is not to say that prioritizing the backlog is comparable to a barbaric death match between PMs in a post-apocalyptic wasteland. It is more an illustration that we cannot work on everything at once so we must make choices, which are sometimes difficult, as to which projects get resources and which projects don’t.
The combatants in this scene also nicely represent the different types of projects which must ultimately be pitted against one another. And at times, these differences can make it seem like an unfair fight. For example, we must decide how many resources we allocate to bugs or technical debt, and how many to net new features and functionality. And don’t forget about Aunty, or what we all know to be business drivers and executive buy-in.
Needless to say, it is not an easy task and the outcomes are not always what is expected going in. Our goal, of course, is to crush the other guy and win. No, not really! Well, maybe.
Our goal is to use our available resources wisely and find the right balance between delivering new capabilities and enhancing existing features, ultimately doing what we believe is the best outcome for the business at that moment in time. The beautiful part of this process is that no two battles are the same, and at any moment Aunty, or the market we are going after, might change the combatants, introduce new weapons or tools, or completely change the battle and its outcome.
There is a rapid transition happening with datacenter infrastructure. The days where compute, storage, and network are acquired, managed and operated independently are coming to an end. The trend towards converged infrastructure and now hyperconverged infrastructure is quickly blurring the lines between the old organization silos. This transition is being driven by IT’s need for higher agility to serve the business initiatives.
Companies like Cisco, EMC, NetApp, Nutanix and others are leading the charge with new technology and comprehensive architectures. These products are building the foundation of the future Software Defined Datacenter. However, this shift in technology must come with a corresponding shift in people and processes.
In the old world, a network team would have tools for network monitoring, the server team would have tools for server monitoring, and the storage team would have tools for storage monitoring. Each team could focus on its own silo, largely ignoring the others. This silo approach wasn’t efficient, but it was adequate in a silo-oriented world. Moving forward, this approach simply won’t work as the technology silos start to blend together.
Are your current tools ready to make this switch? Or will your agility be limited by outdated, last-generation monitoring solutions?
ScienceLogic is a next-generation, multi-technology, multi-cloud monitoring platform. It has out-of-the-box support for converged infrastructure like FlexPod and Vblock, as well as deep integration with Nutanix and other hyperconverged vendors in the space. Not only can you monitor network, storage, and compute; you can also quickly view the relationships between the different operational elements. How is the storage for this VM performing? How is the network between VMs performing? How is the underlying infrastructure impacting the performance of the business service?
At ScienceLogic, we are passionate about breaking down the silos in IT. Converged and hyperconverged infrastructure is a great example of how we do that.
In a hyperconverged world, can you live without hyper-converged monitoring?
Living in the modern digital world with a constantly growing number of tools, we are creating and managing more user accounts than ever. And with every new account comes another username and password combination that a user must now remember, which their security department has to manage. How does one minimize user frustration and effectively manage these accounts?
Many companies handle this by centralizing their identity information to a single identity provider (IdP), such as Active Directory (AD). This type of identity management may work well for applications with a single managed user base, but what if you’re an MSP and the users are all pulled from different enterprises? Should applications expect MSPs to manually create and maintain shadow accounts in their own corporate IdP or within the applications themselves every time they onboard customers? How do they maintain accurate account access when dealing with multiple customer IdPs, each with their own access levels?
With so many questions, maybe we should take a look at what features an MSP would need from an application from an authentication perspective.
Rapid Deployment: MSPs need to provide rapid access to the appropriate applications in order to accelerate customer success and increase value. An application that can quickly incorporate new user sets without heavy manual effort removes part of the burden of onboarding a new customer.
Flexibility: The ability to integrate with a variety of IdPs aids onboarding by reducing the number of roadblocks from external user stores. By supporting integration with a range of user stores, MSPs can maintain control over the user base’s access to the applications while reducing deployment time.
Authorization Control: MSPs provide application services across a range of customers that may each require unique access within an application. MSPs, more than most, require applications that provide multi-tenancy and granular control to individualize access across their customer base.
Scalability: MSPs must maintain clear customer segmentation. By integrating their customers’ user stores, they can provide a complex solution without a substantial impact on their own IT infrastructure for maintenance of the additional user base.
At ScienceLogic, we began 2016 with our v7.8 release, which introduces a new level of control for our customers, giving them more choice in how they, and their users, authenticate.
With this release, administrators can now segment the ScienceLogic environment with policies to determine authentication type alignment across multiple LDAP/AD servers, CAC, or the new SAML SSO authentication option. Check back soon for more information!
As the US college basketball “March Madness” playoff season continues, it reminds me how the best teams outlast their opponents and grind out results. They go to extreme lengths to achieve their ultimate goal – the National Championship. However, this drive to succeed isn’t unique to college basketball.
A similar sentiment can be seen in the software monitoring business. Customers who have invested in monitoring platforms often lament over the lack of innovation and investment in feature enhancements, especially among the larger platform providers.
If I’ve learned anything from watching college basketball stars rise to the occasion, it’s that agility is the key to success – both on and off the court. At ScienceLogic, our commitment to agility allows us to stay ahead of future technologies and anticipate the needs of our customers before they become needs.
Our latest software release, 7.8, is no exception. Here are some new features that were added with this release:
Authentication Update – Now supports multiple instances of Active Directory to help manage large or diverse user populations, and adds SAML support to enable single sign-on.
Global Manager – MSPs or large enterprises can manage multiple ScienceLogic stacks and segment operations regionally, enabling very large-scale deployments of millions of devices.
Tagging – Leverage VMware’s newest tagging features to group virtual machines together with other assets to monitor and track them as a group of assets or as part of a business service.
CBQoS Support – ScienceLogic now supports configurable Class-Based Quality of Service metrics for network devices and interfaces.
New PowerPacks for monitoring EMC VNX storage systems and Citrix XenServer/XenCenter.
New Dashboard Forecasting Capability aims to help capacity planners. This feature provides predictive analysis based on historical regressions with a best-fit algorithm against a combination of models.
Updated Azure Services coverage and initial Office 365 beta support.
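The forecasting feature’s “best-fit” idea can be illustrated with a small sketch. This is a hypothetical example, not the shipping algorithm: fit two candidate models (linear and exponential) to historical data and keep whichever leaves the smaller residual error.

```python
import math

def fit_line(xs, ys):
    """Ordinary least-squares slope and intercept for y = a*x + b."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return a, my - a * mx

def best_fit(xs, ys):
    """Fit linear and exponential models; return the name and predictor
    of whichever leaves the smaller sum of squared residuals."""
    a, b = fit_line(xs, ys)
    linear = lambda x: a * x + b
    # Exponential model y = exp(c*x + d), fit as a line through ln(y).
    c, d = fit_line(xs, [math.log(y) for y in ys])
    exponential = lambda x: math.exp(c * x + d)
    models = {"linear": linear, "exponential": exponential}
    sse = lambda f: sum((y - f(x)) ** 2 for x, y in zip(xs, ys))
    name = min(models, key=lambda m: sse(models[m]))
    return name, models[name]

# Utilization doubling each period is served poorly by a straight line:
name, model = best_fit([0, 1, 2, 3], [10, 20, 40, 80])
print(name)             # exponential
print(round(model(4)))  # next-period forecast: 160
```

Extending the candidate set with logarithmic or seasonal models follows the same pattern: add a model, score its residuals, and keep the winner.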
Customers can be fickle. Sometimes they want a service provided to them, sometimes they want to do it themselves. Sometimes you go to the buffet, and sometimes you want a waiter. How do organizations deal with this fluctuation of needs? At ScienceLogic, our partners are able to add value no matter what the need, and make their customers very happy.
Over the years, ScienceLogic has established a robust channel with our managed services customers. Our MSP partners use the ScienceLogic platform to provide network and hybrid cloud discovery, monitoring, automation, and guaranteed uptime. Providing these mission-critical services to our partners allows them to create new revenue streams by charging a premium for these services. Our partners have significantly grown their services businesses over the past few years, and many credit part of their growth to their partnership with ScienceLogic.
Hybrid cloud management has become a particularly hot topic in recent years. In fact, Gartner’s list of 2016 Key CIO Initiatives ranks cloud projects at number two. Our partners want to participate in this market which is growing 30-40% per year.
Our MSP customers have exclusive access to ScienceLogic’s JumpStart program, which is designed to teach MSPs how to monetize their network monitoring and management services using the ScienceLogic platform. JumpStart takes a unique approach to enabling the success of MSPs by addressing the product management requirements needed to successfully provide managed services, drawing on written tools, guidance, and best practices that profitable MSPs have learned over the years.
Last month, ScienceLogic introduced a brand new channel reseller program. This means that our partners can earn margins and professional services revenue on the resale of our products. Many, if not most, of our MSP partners have added reseller arms to their core businesses. The revenue from resale often dwarfs their managed service income stream.
Here is where customer choice comes into play.
Customers and partners want to be able to choose how they receive the benefits of EM7: buy it as a service or buy it as a product on a subscription agreement. Either way, the ScienceLogic platform serves customers while providing partners a predictable revenue stream (3 to 10 years or more) that grows by as much as 40% a year.
Our big “ah-ha” moment was when we realized that most of our MSPs offer both managed services and resale of products and services. With ScienceLogic, our partners, and their customers, can choose either path. Our managed services partners already have the technical expertise in their NOC/SOC to provide implementation, customization, and periodic services to their customers.
Benefits for Value-added Resellers (VARs) are also available in this overlap. Many partners we talk with have been successfully reselling for years. However, they see the writing on the wall and the profits on the worksheet. They want to start or expand their managed services. ScienceLogic’s platform is the ideal foundation to build a practice around, and JumpStart is the plan to drive profitability.
The VAR trend towards managed services is being pushed by companies like Microsoft and AWS, but is also driven by customer demand. Even many enterprise customers want to outsource their network management to refocus skilled people to more high-value activities.
In the macro sense, MSPs and VARs are merging together in their offerings. The most successful may be those who offer customers network and cloud services in the way customers want to consume them. Either as an MSP or a VAR, offering ScienceLogic network and cloud discovery, monitoring, and management services is a truly valuable opportunity that shouldn’t be missed.
There is a massive transition towards the cloud underway. More and more enterprise organizations want to get out of the data center business. Data centers are expensive to build, operate, maintain, and modernize.
For some organizations this just means that they will not build any new data centers while others are actively looking to close their existing data centers. But, the wholesale transfer of existing apps and services to a public cloud is not a trivial task. It cannot happen overnight. In fact, most organizations will end up in a hybrid state where some resources are in the public cloud and some infrastructure permanently stays under the control of IT.
An interesting architecture has emerged that can help ease this transition. Hosted data centers are becoming an increasingly important component of a migration plan towards a hybrid cloud.
Hosted data centers have some obvious advantages, including:
They offload many of the physical responsibilities of datacenter management such as power, HVAC, physical security, and resiliency
They allow for easy growth as additional capacity is required
They have high-speed network connections with most of the major network carriers
In addition, some of the most progressive hosting environments offer an additional innovative service: direct, private, high-speed connections to the public cloud providers. You can quickly provision a direct connection to Amazon, Azure, or other major cloud and SaaS providers. Imagine a 10G pipe from the heart of your corporate network to your public cloud resources that doesn’t travel over the internet.
In this architecture, the hosting environment becomes the logical core of the enterprise network. It provides high-speed reach back to on-premises infrastructure and is 1 hop away from the public cloud resources. You can incrementally move infrastructure to the hosted environment allowing you to shrink the footprint of the legacy data centers. Some of these assets may one day move into the public cloud or may permanently stay in the hosted environment. This architecture provides the flexibility and agility of a hybrid cloud architecture allowing enterprises to get out of the datacenter business.
Because ScienceLogic has a multi-technology, multi-cloud monitoring platform, we are ideally suited to be part of this hybrid architecture. We can monitor traditional network, compute, and storage resources, and we have deep integrations with all the major cloud providers, including Amazon Web Services, Microsoft Azure, IBM SoftLayer, and VMware vCloud Air. From a single interface, you can monitor resources in a traditional data center, hosted environments, and the public cloud. Operational visibility is maintained throughout every stage of the migration. You can move workloads to the best environment and not be limited by your tools.
Our blog’s authors aren’t just experts in their field, they’re also key contributors to our world-class monitoring platform. If you’d like to see how these topics play out in a real-world setting, please register for a free, no pressure demo: