As Jeremy mentioned in this blog post last week, one of the most challenging parts of a Product Manager’s job is prioritizing the work. Dave Link, ScienceLogic CEO, has often quipped, “There is no shortage of great ideas!” I have found this is indeed the case. Rarely a week goes by when someone doesn’t share a cool idea for an enhancement or new feature. Unfortunately, we are like everyone else in that we have a finite amount of resources to get work done.
In the Thunderdome, the product managers work together to best allocate the resources we have, all the while mentioning how we could always use more. We pit projects against one another based on metrics like total addressable market, revenue and customer impact, level of effort, and resource availability. We also must consider feedback from stakeholders like executives, engineering, support and our customers. Although we know it is impossible to give everyone what they want, we strive to find the right balance between building new capabilities, enhancing existing functionality, fixing bugs, refactoring and paying off technical debt.
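Trade-offs like these are often formalized as a weighted scoring model. The sketch below is purely illustrative: the project names, metric scales, and weights are hypothetical examples, not our actual scoring process.

```python
# Hypothetical weighted-scoring sketch for ranking competing projects.
# All names, metrics, and weights below are made up for illustration.

PROJECTS = [
    # name, total addressable market ($M), revenue impact (1-10),
    # customer impact (1-10), level of effort (person-weeks)
    ("New SAML SSO support", 120, 8, 9, 12),
    ("Dashboard forecasting", 80, 6, 7, 8),
    ("Refactor event engine", 0, 3, 5, 20),
]

WEIGHTS = {"tam": 0.30, "revenue": 0.30, "customer": 0.25, "effort": 0.15}

def score(tam, revenue, customer, effort):
    """Higher TAM / revenue / customer impact raises the score; effort lowers it."""
    return (WEIGHTS["tam"] * tam / 10      # scale TAM into a comparable range
            + WEIGHTS["revenue"] * revenue
            + WEIGHTS["customer"] * customer
            - WEIGHTS["effort"] * effort)

ranked = sorted(PROJECTS, key=lambda p: score(*p[1:]), reverse=True)
for name, *metrics in ranked:
    print(f"{name}: {score(*metrics):.2f}")
```

A real process weighs far more than four numbers, of course, but even a toy model like this makes the zero-sum nature of the decision explicit: raising one project's rank necessarily pushes another's down.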
When it is all said and done, resource allocation is a zero-sum game. When you assign resources to one project, those resources are no longer available to work on another project. During the PM Thunderdome we battle it out to find the right balance between all the competing priorities, all the while knowing that each time a project wins and resources are assigned, we instantly have fewer resources remaining to work on everything else.
I often refer to our resources as “one pizza.” You can decide how big a slice to cut, how many slices to cut, and how many slices each person gets, but you can’t change the fact that you have only one pizza. If you cut more slices, each slice gets a little smaller, but you can give a slice to more people. If you cut fewer slices, each slice gets a little bigger, but there are fewer slices to go around. In reality, it is a little more complicated than slicing pizza because all the slices are not the same and all the people wanting pizza need different portions.
Some projects are larger than others and the resources all have different skill sets. But regardless of how you slice it, it is a fixed amount of pizza: one. And if you are thinking there might be some leftovers that you can save for another day, then you clearly have not had the pleasure of attending a ScienceLogic pizza party where there are never any leftovers!
One of the most challenging parts of product management is determining the order or priority of features and projects. Like anyone else, we have a fixed amount of resources that we must use wisely to get things done. In a recent internal discussion, the Product Management team was asked, “How do projects get prioritized and resourced?”
I am a big fan of MMA, so I joked that these decisions were made in “The Octagon.” And if this were true, it would make my life much simpler as I have a few pounds and inches over most of the other PMs. My colleague John, on the other hand, had an even better response.
PM Thunderdome: Two projects enter, one project leaves!
John and I are both big movie buffs, so I immediately knew the movie reference and fell in love with this idea. It is the perfect analogy for our process. For those readers who aren’t familiar with this reference, let me explain.
In the 1985 movie classic Mad Max Beyond Thunderdome, there is a legendary fight scene which occurs inside a giant metal cage called the Thunderdome. The Thunderdome is used to keep order in Bartertown by bringing disagreements to a fast and definite end. The rules? Well, there are no rules, except that it is a fight to the death in which two combatants enter the arena, but only one leaves.
The scene pits the hero, Mad Max, against a larger, stronger opponent, Blaster. The scene also features Tina Turner in her role as Aunty Entity, as she orchestrates the battle to regain control of “her” town.
Now, this is not to say that prioritizing the backlog is comparable to a barbaric death match between PMs in a post-apocalyptic wasteland. It is more an illustration that we cannot work on everything at once so we must make choices, which are sometimes difficult, as to which projects get resources and which projects don’t.
The combatants in this scene also nicely represent the different types of projects which must ultimately be pitted against one another. And at times, these differences can make it seem like less than a fair fight. For example, we must decide how many resources we allocate to bugs or technical debt, and how many must be allocated to net-new features and functionality. And don’t forget about Aunty, or what we all know to be business drivers and executive buy-in.
Needless to say, it is not an easy task and the outcomes are not always what is expected going in. Our goal, of course, is to crush the other guy and win. No, not really! Well, maybe.
Our goal is to use our available resources wisely and find the right balance between delivering new capabilities and enhancing existing features. Ultimately, we do what we believe will produce the best outcome for the business at that moment in time. The beautiful part of this process is that no two battles are the same, and at any moment Aunty, or the market we are going after, might change the combatants, introduce new weapons or tools, or completely change the battle and its outcome.
There is a rapid transition happening in datacenter infrastructure. The days when compute, storage, and network are acquired, managed, and operated independently are coming to an end. The trend toward converged infrastructure, and now hyperconverged infrastructure, is quickly blurring the lines between the old organizational silos. This transition is being driven by IT’s need for greater agility to serve business initiatives.
Companies like Cisco, EMC, NetApp, Nutanix and others are leading the charge with new technology and comprehensive architectures. These products are building the foundation of the future Software Defined Datacenter. However, this shift in technology must come with a corresponding shift in people and processes.
In the old world, a network team would have tools for network monitoring, the server team would have tools for server monitoring, and the storage team would have tools for storage monitoring. Each team could focus on its own silo, largely ignoring the others. This silo approach wasn’t efficient, but it was adequate in a silo-oriented world. Moving forward, this approach simply won’t work as the technology silos start to blend together.
Are your current tools ready to make this switch? Or will your agility be limited by outdated, last-generation monitoring solutions?
ScienceLogic is a next-generation multi-technology, multi-cloud monitoring platform. It has out-of-the-box support for converged infrastructure like FlexPod and Vblock, as well as deep integration with Nutanix and other hyperconverged vendors in the space. Not only can you monitor network, storage, and compute; you can also quickly view the relationships between the different operational elements. How is the storage for this VM performing? How is the network between VMs performing? How is the underlying infrastructure impacting the performance of the business service?
At ScienceLogic, we are passionate about breaking down the silos in IT. Converged and hyperconverged infrastructure is a great example of how we do that.
In a hyperconverged world, can you live without hyper-converged monitoring?
Living in the modern digital world with a constantly growing number of tools, we are creating and managing more user accounts than ever. And with every new account comes another username and password combination that a user must now remember, which their security department has to manage. How does one minimize user frustration and effectively manage these accounts?
Many companies handle this by centralizing their identity information to a single identity provider (IdP), such as Active Directory (AD). This type of identity management may work well for applications with a single managed user base, but what if you’re an MSP and the users are all pulled from different enterprises? Should applications expect MSPs to manually create and maintain shadow accounts in their own corporate IdP or within the applications themselves every time they onboard customers? How do they maintain accurate account access when dealing with multiple customer IdPs, each with their own access levels?
With so many questions, maybe we should take a look at what features an MSP would need from an application from an authentication perspective.
Rapid Deployment: MSPs need to provide rapid access to the appropriate applications in order to accelerate customer success and increase value. An application that can quickly incorporate new user sets without heavy manual effort removes part of the burden of on-boarding a new customer.
Flexibility: Flexibility to integrate with a variety of IdPs aids on-boarding by reducing the number of roadblocks from external user stores. By supporting integration with a range of user stores, MSPs can maintain control over the user base’s access to the applications while reducing deployment time.
Authorization Control: MSPs provide application services across a range of customers that may each require unique access within an application. MSPs, more than most, require applications that provide multi-tenancy and granular control to individualize access across their customer base.
Scalability: MSPs must maintain clear customer segmentation. By integrating their customers’ user stores, they can provide a complex solution without a substantial impact on their own IT infrastructure from maintaining the additional user base.
At ScienceLogic, we began 2016 with our v7.8 release, in which we have introduced a new level of control for our customers, giving them more choice in how they, and their users, authenticate.
With this release, administrators can now segment the ScienceLogic environment with policies to determine authentication type alignment across multiple LDAP/AD servers, CAC, or the new SAML SSO authentication option. Check back soon for more information!
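The policy idea described above can be pictured as a simple lookup that routes each login to the right authentication type and identity provider. The sketch below is a hypothetical illustration of that pattern only; the policy structure, hostnames, and endpoints are invented, not the platform's implementation.

```python
# Hypothetical sketch of per-segment authentication policy selection.
# All hostnames, endpoints, and the policy structure are illustrative.

AUTH_POLICIES = [
    # (hostname suffix the user arrives on, authentication type, IdP endpoint)
    ("acme.example.com",   "SAML",    "https://idp.acme.example.com/sso"),
    ("globex.example.com", "LDAP/AD", "ldaps://ad1.globex.example.com"),
]

DEFAULT_POLICY = ("*", "local", None)  # fall back to local accounts

def select_policy(hostname):
    """Return the (match, auth_type, endpoint) policy for a login hostname."""
    for suffix, auth_type, endpoint in AUTH_POLICIES:
        if hostname.endswith(suffix):
            return (suffix, auth_type, endpoint)
    return DEFAULT_POLICY

print(select_policy("portal.acme.example.com")[1])  # prints "SAML"
print(select_policy("unknown.example.org")[1])      # prints "local"
```

The appeal of this pattern for an MSP is that each customer segment keeps its own identity provider and access rules, while the application applies one consistent routing mechanism across all of them.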
As the US college basketball “March Madness” playoff season continues, it reminds me how the best teams outlast their opponents and grind out results. They go to extreme lengths to achieve their ultimate goal – the National Championship. However, this drive to succeed isn’t unique to college basketball.
A similar sentiment can be seen in the software monitoring business. Customers who have invested in monitoring platforms often lament the lack of innovation and investment in feature enhancements, especially among the larger platform providers.
If I’ve learned anything from watching college basketball stars rise to the occasion, it’s that agility is the key to success – both on and off the court. At ScienceLogic, our commitment to agility allows us to stay ahead of future technologies and anticipate the needs of our customers before they become needs.
Our latest software release, 7.8, is no exception. Here are some new features that were added with this release:
Authentication Update – Now supports multiple instances of Active Directory to help manage large or diverse user populations, and adds SAML support to enable single sign-on.
Global Manager – MSPs and large enterprises can manage multiple ScienceLogic stacks and segment operations regionally, enabling very large-scale deployments reaching into the millions of devices.
Tagging – Leverage VMware’s newest tagging features to group virtual machines together with other assets to monitor and track them as a group of assets or as part of a business service.
CBQoS Support – ScienceLogic now supports configurable Class-Based Quality of Service metrics for network devices and interfaces.
New PowerPacks – For monitoring EMC VNX storage systems and Citrix XenServer/XenCenter.
New Dashboard Forecasting Capability – Helps capacity planners with predictive analysis based on historical regressions, using a best-fit algorithm against a combination of models.
Updated Azure Services coverage and initial Office 365 beta support.
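To give a feel for the forecasting idea above, here is a minimal sketch of regression-based capacity forecasting: fit a couple of simple models to historical utilization, keep the best fit, and extrapolate. It is a pure-Python illustration of the general technique, not the platform's actual algorithm.

```python
# Minimal best-fit forecasting sketch: try several simple models,
# keep the one with the lowest error, and extrapolate forward.

def fit_linear(ys):
    """Least-squares line y = a + b*t over t = 0..n-1."""
    n = len(ys)
    t_mean = (n - 1) / 2
    y_mean = sum(ys) / n
    num = sum((t - t_mean) * (y - y_mean) for t, y in enumerate(ys))
    den = sum((t - t_mean) ** 2 for t in range(n))
    b = num / den
    a = y_mean - b * t_mean
    return lambda t: a + b * t

def fit_flat(ys):
    """Constant model: the historical mean."""
    mean = sum(ys) / len(ys)
    return lambda t: mean

def best_fit_forecast(ys, steps):
    """Pick the model with the lowest sum of squared errors, then extrapolate."""
    models = [fit_linear(ys), fit_flat(ys)]
    sse = lambda m: sum((m(t) - y) ** 2 for t, y in enumerate(ys))
    best = min(models, key=sse)
    return [best(len(ys) + s) for s in range(steps)]

# Disk utilization (%) trending upward: the linear model wins and projects onward.
history = [40, 42, 45, 47, 50, 52, 55]
print([round(v, 1) for v in best_fit_forecast(history, 3)])  # [57.3, 59.8, 62.3]
```

A production forecaster would consider more model families (exponential, seasonal, and so on) and confidence intervals, but the shape of the computation, which is competing models scored by goodness of fit, is the same.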
Customers can be fickle. Sometimes they want a service provided to them, sometimes they want to do it themselves. Sometimes you go to the buffet, and sometimes you want a waiter. How do organizations deal with this fluctuation of needs? At ScienceLogic, our partners are able to add value no matter what the need, and make their customers very happy.
Over the years, ScienceLogic has established a robust channel with our managed services customers. Our MSP partners use the ScienceLogic platform to provide network and hybrid cloud discovery, monitoring, automation, and guaranteed uptime. Providing these mission-critical services to our partners allows them to create new revenue streams by charging a premium for these services. Our partners have significantly grown their services businesses over the past few years, and many credit part of their growth to their partnership with ScienceLogic.
Hybrid cloud management has become a particularly hot topic in recent years. In fact, Gartner’s list of 2016 Key CIO Initiatives ranks cloud projects at number two. Our partners want to participate in this market, which is growing 30-40% per year.
Our MSP customers have exclusive access to ScienceLogic’s JumpStart program, which is designed to teach MSPs how to monetize their network monitoring and management services using the ScienceLogic platform. JumpStart takes a unique approach to enabling the success of MSPs by addressing the product management requirements needed to successfully provide managed services, along with written tools, guidance, and best practices that profitable MSPs have learned over the years.
Last month, ScienceLogic introduced a brand-new channel reseller program. This means that our partners can earn margins and professional services revenue on the resale of our products. Many, if not most, of our MSP partners have added reseller arms to their core businesses. The revenue from resale often dwarfs their managed service income stream.
Here is where customer choice comes into play.
Customers and partners want to be able to choose how they receive the benefits of EM7: buy it as a service or buy it as a product on a subscription agreement. Either way, the ScienceLogic platform serves customers while providing a predictable revenue stream (3 to 10 years or more) that grows by as much as 40% a year.
Our big “ah-ha” moment was when we realized that most of our MSPs offer both managed services and resale of products and services. With ScienceLogic, our partners and their customers can pick either way. Our managed services partners already have the technical expertise in their NOC/SOC to provide implementation, customization, and periodic services to their customers.
Benefits for Value-added Resellers (VARs) are also available in this overlap. Many partners we talk with have been successfully reselling for years. However, they see the writing on the wall and profits on the worksheet. They want to start or expand their managed services. ScienceLogic’s platform is the ideal foundation to build a practice around. And JumpStart is the plan to drive profitability.
The VAR trend towards managed services is being pushed by companies like Microsoft and AWS, but is also driven by customer demand. Even many enterprise customers want to outsource their network management to refocus skilled people to more high-value activities.
In the macro sense, MSPs and VARs are merging together in their offerings. The most successful may be those who offer customers network and cloud services in the way customers want to consume them. Whether as an MSP or a VAR, offering ScienceLogic network and cloud discovery, monitoring, and management services is a truly valuable opportunity that shouldn’t be missed.
There is a massive transition toward the cloud underway. More and more enterprise organizations want to get out of the data center business. Data centers are expensive to build, operate, maintain, and modernize.
For some organizations this just means that they will not build any new data centers while others are actively looking to close their existing data centers. But, the wholesale transfer of existing apps and services to a public cloud is not a trivial task. It cannot happen overnight. In fact, most organizations will end up in a hybrid state where some resources are in the public cloud and some infrastructure permanently stays under the control of IT.
An interesting architecture has emerged that can help ease this transition. Hosted data centers are becoming an increasingly important component of a migration plan towards a hybrid cloud.
Hosted data centers have some obvious advantages, including:
They offload many of the physical responsibilities of datacenter management such as power, HVAC, physical security, and resiliency
They allow for easy growth as additional capacity is required
They have high-speed network connections with most of the major network carriers
In addition, some of the most progressive hosting environments offer an additional innovative service: direct, private, high-speed connections to the public cloud providers. You can quickly provision a direct connection to Amazon, Azure, or other major cloud and SaaS providers. Imagine a 10G pipe from the heart of your corporate network to your public cloud resources that doesn’t travel over the internet.
In this architecture, the hosting environment becomes the logical core of the enterprise network. It provides high-speed reach back to on-premises infrastructure and is 1 hop away from the public cloud resources. You can incrementally move infrastructure to the hosted environment allowing you to shrink the footprint of the legacy data centers. Some of these assets may one day move into the public cloud or may permanently stay in the hosted environment. This architecture provides the flexibility and agility of a hybrid cloud architecture allowing enterprises to get out of the datacenter business.
Because ScienceLogic has a multi-technology, multi-cloud monitoring platform, we are ideally suited to be part of this hybrid architecture. We can monitor traditional network, compute, and storage resources, and we have deep integrations with all the major cloud providers, including Amazon Web Services, Microsoft Azure, IBM SoftLayer, and VMware vCloud Air. From a single interface, you can monitor resources in a traditional datacenter, hosted environments, and the public cloud. Operational visibility is maintained throughout all stages of the migration. You can move workloads to the best environment and not be limited by your tools.
When it comes to IT management tools, industry analysts highlight two different buying options: monolithic suites or disparate best-in-class tools. Over the last 20 years, the pendulum has swung back and forth between these two opposing approaches. However, both have major disadvantages.
If you want to drive a nail, get a hammer (don’t use your shoe). This philosophy is at the heart of the best-in-class approach. Each team is different and has different requirements so let them identify and buy the tool that best fits their need. Forcing teams to use mediocre tools hurts their productivity and effectiveness.
However, allowing every team to have its own tool can result in a very fractured IT management environment. No one can see the whole picture. The result? Finger-pointing, war rooms, and slow problem resolution. Vertically organized silos are misaligned to the business services that span across the technology domains. It can also be expensive to maintain all these independent tools and keep everyone fully trained.
Using this approach, it’s not unusual for enterprises to have anywhere from 10 to 40 different management tools.
Single vendor suites claim the advantage of consistency. A single vendor designs and builds the suite, creates integrated workflows and a common user interface. A single suite also results in easier maintenance and a lower training burden. You have “one throat to choke” when there is a problem and you get everything you need with a single, easy to understand quote.
Have you stopped laughing yet?
In reality, this is not what happens. Most mega-vendors (the Big-4) have cobbled their suites together through multiple acquisitions. Often, the integration is only surface deep. These vendors have spent years trying to integrate user interfaces, database structures, and workflows. Their commitment to last-year’s hot acquisition quickly cools as they move on to the next big thing. The promised integration frequently requires large consulting engagements and even though you bought into the whole suite, you still need to buy an additional module to get the latest feature. And then you need to upgrade – how long will that take?
It’s expensive, confusing, and you wind up with mediocre tools that are only slightly integrated. The Big-4 is where good products go to die.
A Better Approach
At ScienceLogic, we believe there is a third (and better) approach. Your teams should be able to access modern, best-in-class specialty tools of their choosing without the resulting operational chaos of independent domain silos. How do you get the advantages of a best-in-class approach with the operational efficiency of a suite approach?
What you need is a flexible, hybrid IT management platform that sits at the heart of your operational environment. An easily configurable and extensible platform that:
Supports all of the major technologies “out-of-the-box” (network, system, storage, cloud, virtualization, SDx, UC, etc.)
Provides role-based dashboards of your IT services and underlying infrastructure, so different teams get specialized views
Provides a wide set of delivered integrations with common best-in-class management tools, so teams get to use the best tools available
Provides a powerful and flexible open interface to add support for any additional specialty tools that your teams require
Helps you automate common administrative actions and operational workflows with adjacent management tools to streamline your operational processes
Allows you to adapt quickly to new technology, tools, and requirements
Can you afford to continue struggling with your chosen “best-in-class” or “suite” approach? Or do you want to learn more about how ScienceLogic’s hybrid IT platform approach is helping hundreds of organizations overcome the challenges of these diametrically opposed approaches?
It’s no surprise to anyone working in IT that 2015 was marked by significant strides and changes in the public and hybrid cloud landscape. What does 2016 have in store? Check out our predictions below.
1. The Electorate Have Spoken – Azure and Office 365 for the People
Microsoft Azure will reach critical mass, gaining enterprise traction and market share from IaaS cloud leader Amazon Web Services (AWS), with the introduction of a slew of new services to compete with the innovation that AWS has displayed. We’re already seeing the shift in mindshare from a number of our managed service provider partners, as well as enterprises. Closely associated with the launch of packaged services around the Azure platform is a rapid uptick in Office 365 adoption. For CSPs fearing a repeat of the old ASP era and being disintermediated in the SaaS boom, Office 365 offers a new opportunity.
In turn, Microsoft is adopting a customer-lifetime-value approach with recurring revenue over time versus relying on one-time license deals, and with good reason. According to industry analysts, SaaS is already the fastest-growing spend element for enterprise IT departments. And that, in turn, is driving a need for greater agility in IT tools, since the infrastructure is not owned internally, and a change in mentality as IT teams become service orchestrators.
2. Bimodal Gets a Common Interface
Dual-track approaches like Bimodal IT will become more and more prevalent as organizations try to become more agile. This will unleash a new wave of innovation within many companies. Not all deployments will be successful, though. Organizations that try to simply bolt on a high-velocity, experimental application track will struggle if it isn’t part of a larger organizational transformation. Bimodal approaches require cultural, structural, and technical evolution. In particular, a tools transformation is required to ensure that the new high-velocity applications don’t simply become new independent silos for operations to manage. Bimodal organizations will need to build a common interface to manage both types of applications.
3. The Year of Software Defined Everything (SDx)
This is the year that Cisco ACI and VMware NSX will take off. These technologies will come out of the test lab and will be deployed across many data centers. Cisco will take the early lead here based on the critical mass of Nexus switches deployed as part of the natural refresh cycle. Cisco will also leverage its massive enterprise and channel presence to push ACI. SDx creates a massive opportunity to make IT more agile, but it will require a simultaneous reevaluation of operational tools. Most legacy tools are blind to the new paradigms of ACI and NSX. Modern tools will be required to take maximum advantage of SDx.
4. The Cloud is Dead – Long Live the Federated Cloud
Relying on a single cloud is no longer the norm. More and more enterprises are acknowledging a mish-mash of SaaS, IaaS, PaaS, legacy infrastructure, private cloud, hosted and colocated execution environments. Amazon Web Services, Azure and SoftLayer are quickly becoming the default enterprise multicloud trifecta.
The greatest need we see is the management of these multicloud environments in a single federated pane of glass. Although much smaller than the two leaders in the IaaS space, we envision SoftLayer achieving critical mass as IBM invests heavily in the platform this year. We are predicting continued consolidation in the cloud space in 2016, as another significant top-six cloud player exits the business (as HP and AT&T did in 2015). We do not envision Google or VMware making much headway in this area, given the immaturity of their APIs and lack of traction in the space.
5. Cloud Wars – The Empire Fizzles
ScienceLogic is naming 2016 the “Year of the API.” It has become extremely evident that the way forward in the world of interoperability and integrations is through APIs. For those vendors that have not been API-centric in their approach to tools creation, i.e. the Big 4 monitoring framework providers, it will be a race to the bottom, as most traditional players battle to keep up with rapid changes in cloud APIs. For those with inadequate APIs, it will certainly be a case of death by 1,000 APIs, as the integrations become a nightmare of services work – something highly unappealing in the era of SaaS and agility. Winners and losers will be defined by the pace of API adoption and the ability to deploy easily in an otherwise complex ecosystem of technologies.
2016 is already shaping up to be a very busy year for the hybrid cloud industry. Do you have any industry predictions for the rest of the year? Tell us in the comments below!
This is a wild time to be in Enterprise IT. Business revenue is becoming more and more dependent on IT. Technology options are proliferating. The Business Units (BUs) want to move faster but often feel limited by internal technology. IT needs to adapt to this new reality. Can we borrow any lessons to accelerate this change?
Many people think of Service Providers and Enterprise IT as very different models. Historically, this was true. However, in a world that is moving towards “Everything as a Service,” Enterprise IT can benefit from some of the lessons that MSPs have learned.
Maybe Enterprise IT should start to think of itself as a true Service Provider.
Why? Consider the following:
Your users are actually customers – MSPs need to fight for each and every client. They need to continually reevaluate their offerings and services because they know that their clients have other options.
Realize that you now face competition – Historically, Enterprise IT held a monopoly. The business was limited to the technology that IT provided and the pace at which IT was willing to innovate. With cloud services (IaaS, SaaS and PaaS) available, the BUs now have many alternatives. Shadow IT is an outgrowth of this phenomenon.
How do Enterprises start thinking like an MSP?
Build a compelling portfolio of services – Rather than fight Shadow IT, embrace the concept. Identify the forces that cause users to look outside. Is it technology? Is it speed? Is it simplicity? Is it innovation? Wrap new technologies into a service that you can offer to your clients but do it better. Shadow IT with governance/compliance/management/security …
Be multi-tenant – There isn’t a one-size-fits-all model here. With the emergence of Bimodal IT, treat each application and BU as a different customer. Each client has different needs so you should provide custom dashboards and custom reports to each. Continuously demonstrate your value. Leverage automation and orchestration to minimize the overhead of maintaining many clients.
Drive innovation – Don’t wait for your users to ask for new technology and services (by that time it will be too late). As the technologist, you need to anticipate the demand. Become an entrepreneur and evangelist. Build your business.
Market your services – Don’t rely on a “build it and they will come” attitude. You are competing against huge marketing budgets. You need to advertise and sell the new internal services. Yes – you have a home team advantage but you still need to demonstrate your unique advantage.
Become a trusted advisor – As business revenue becomes more and more dependent on technology, there should be a technologist at the table. This evolution is an ideal opportunity for IT to become the primary source of technical innovation. IT can use this transformation to be viewed as part of the revenue stream rather than just a cost center.
With a long history of working with both service providers and enterprises, ScienceLogic has developed a platform that is well suited to the new reality of “Enterprise as a Service.”
To learn more about our industry-leading monitoring solution, check out this short video.
Our blog’s authors aren’t just experts in their field, they’re also key contributors to our world-class monitoring platform. If you’d like to see how these topics play out in a real-world setting, please register for a free, no pressure demo: