Over the next few months, ScienceLogic will be visiting major cities across the United States to discuss hybrid IT and the changing landscape of monitoring. We were excited to kick off this series this past Tuesday, March 24th, in Arlington, VA!
Our Director of Product Marketing, Peter Luff, presented during our first event, and he recapped his experience for us below. Take it away, Peter!
Our joint presentation with Amazon Web Services on hybrid IT was a great success – and the view was pretty good too! I have given restaurant and rooftop presentations before, all related to different generations of technology in previous lives: client-server computing, frame relay networking, even Ethernet and Token Ring in the old days (maybe I am getting old!). But with each successive technology cycle it becomes clear that the established vendor world order usually shifts when there is disruption. And there is nothing more disruptive today than public cloud!
With public cloud now going mainstream, new monitoring technologies are needed to cope with it and make it manageable for mainstream adoption, giving rise to a whole new crew of vendors like ScienceLogic.
Our unique level of hybrid IT visibility is opening the eyes of users to new ways to view public cloud, together with on-premises assets, in a single view. So much so that we are now being invited to meet large enterprises who not only need to get to the cloud faster but also must manage the resulting hybrid infrastructure. The traditional monitoring players are clearly a day late and a dollar short on hybrid visibility – and the new order is developing, with ScienceLogic at the top of the pile – and the view from the top is pretty good!
Interested in attending one of our regional events? Our list of upcoming Lunch Seminars is below for you to view! Click on the link to register and secure your spot:
We hope to see you there! Learn more about our upcoming events and register to attend on our website: www.sciencelogic.com/company/events
Tagged with: hybrid cloud
Gee Rittenhouse kicked off the first Technology keynote at Cisco Live Melbourne 2015. The title of his presentation was “Transform Your Business in the Digital Age.” My take on this, as with everything these days in the tech world, is that it’s all about the cloud! I remember my early days of working at Cisco when the picture of a cloud was used to represent the network. At that time it was very clear what “the cloud” meant. Today, the cloud is a little more all-encompassing, and as Gee stated during his presentation: “Underlying everything is the cloud.”
In summary, Gee argued that the most important technology trends are cloud and mobility, and that these transformations will impact your business once the technology becomes simple. Cisco is in the process of making this happen through its Cloud Platform and InterCloud.
Gee began his keynote by talking about the technology and trends with respect to the cloud, internet of things, and device mobility. He indicated that the hard part of these technology transformations is being able to determine when these will affect your business. Cisco believes that happens when technology becomes simple to use, and is working toward this transformation by making the technologies simple to use and consume.
Gee’s keynote wouldn’t be complete without providing some interesting facts about the internet. My favorites were the following:
- Two thirds of mobile traffic is now video
- 50 billion internet connected objects
- 3 million YouTube videos viewed every 60 seconds
- 77 billion apps have been downloaded
Gee mentioned that, a few years ago, the key concerns from a CIO centered on technology, cloud, security, and mobility. However, today with IT as a service, the top three things are revenue growth, innovation, and cost.
Cisco is addressing fast innovation with the concept of “Fast IT.” Major components of this include mobility and internet of things. How these things are connected is where SDN and NFV come into play.
How does the cloud transform your business? Cisco considers the cloud a holistic system through three principles:
- Make it simple – Converge the infrastructure. (Enterprises like to buy it in prepackaged chunks: servers, software, catalogs, etc.) Keep it simple so enterprises can focus on their business and not on clouds.
- Make it easy to consume – Cisco will sell it as HW, as SW, as a service, as a managed service, etc.
- Create as a platform – Almost a year ago Cisco announced InterCloud. Now Cisco is building a cloud platform that will be part of InterCloud. Cisco will put their apps on the cloud in a catalog format, making them easy to consume.
Cisco Cloud Platform
The Cisco Cloud Platform comprises three key components:
- Apps exposed via a catalog
The platform architecture differs between the Enterprise and the Service Provider. The enterprise, for example, utilizes a converged infrastructure, and Cisco offers bundles around this converged infrastructure, all based on Cisco UCS. Cisco ONE Software Suites defines three categories of bundles: Data Center, WAN, and Access. All bundles include the Cisco Application Policy Infrastructure Controller (APIC), which is a key component of Cisco’s Application Centric Infrastructure (ACI) SDN offering. Service providers generally don’t buy converged infrastructure; they buy the components and integrate them themselves.
Cisco provides solutions on top of that infrastructure like Mobile IQ, Cloud DVR, Virtual Managed Services, etc. The bundle is still based on UCS. Additionally, the Service Provider stack is more complex than the Enterprise stack, with the addition of an orchestration layer that helps with the chaining of the applications, the creation of new services, etc.
Cisco has 40,000 enterprise companies running UCS, in addition to service provider customers. InterCloud brings these two markets together. The Cisco cloud, partner clouds, public clouds, and enterprise clouds are all tied together in InterCloud, which forms a marketplace.
It’s not just an app sitting on a datacenter, but an app that can be placed anywhere based on geography or any other criteria. This provides not just Cisco apps, but apps from partners, ISV apps, etc. The platform used in InterCloud is not exposed to the customer in any way. APIC is what enables the movement of these apps across the network. APIC provides a consistent policy, regardless of where that app is deployed.
To make this real, Cisco needed an open source platform, and OpenStack was chosen; Cisco is the number one contributor to OpenStack’s Neutron (networking) component. Cisco thought it was critical to build on an open source platform to encourage and enable adoption of this platform.
InterCloud consists of four types of clouds:
- Partner Clouds – Alliance partners like Telstra run the exact same stack as Cisco.
- Enterprise Private Clouds – Cisco has 40K UCS customers today.
- Public Clouds – These are not going away and Cisco already knows how to move workloads to the public cloud.
- Cisco Cloud Services and Applications – Cisco is putting its own applications on top (virtualized routers, load balancers, video, UC, etc.).
Tying these together provides a very rich experience that enables you to use many applications with your choice of where you want these applications to run.
Gee demoed Cisco Marketplace by showing how easy it was for an enterprise to go to the Cisco Marketplace and move an application (in this case Project Squared) from the Cisco Marketplace to the enterprise catalog, making that new application consumable by any member of the enterprise company. Cisco’s key goal here was to make it simple, since as mentioned at the opening of this session, simplicity is what drives adoption of new transformations.
In conclusion, Gee stated that Cisco’s strategy is about (1) the converged infrastructure, (2) the ability to easily consume these apps and resources in any way, and (3) a platform that is open to everyone. Cisco can build on it; Cisco partners and Cisco customers can build on it. This will enable the business transformation into the digital world.
Tagged with: cloud computing
, video conferencing
Bigger, better and clamoring for more cloud; that is the best way I can describe this year’s Cloud Expo Europe, held in London. This time the organizers thought they’d give the data center facilities folks a taste of the cloud by having the two shows exist side by side in a large room. To some degree this tactic worked well. Large enterprises, CIOs, and datacenter operators are looking to be more involved in this cloud thing, and Cloud Expo Europe provided ample opportunities for learning and getting started.
A few common threads emerged throughout dozens of presentations over the two days. Some noteworthy trends were:
- Using success stories to illustrate the need for cloud to become the accepted norm in strategic IT decision making.
- How to influence decision makers who are hesitant to move to a cloud environment by becoming a trusted advisor throughout the process.
On the receiving end of the hoopla were the enterprise, CIOs, and datacenter operators making their queries openly known:
- What do I need to make my cloud migration happen? A new orchestration tool of some kind?
- How do I calculate my ROI?
- How can I figure out what expected operational performance parameters should be?
Moreover, let’s talk about my applications and business-related workloads, and how they map onto interdependent infrastructure. I prefer AWS! I prefer Azure! And for MSPs and SIs making the shift to managed services, there’s no doubt that the chasm has been crossed. Making AWS or Azure a core part of the solution is no longer taboo or a “maybe” – it’s a highly acceptable model.
The ScienceLogic booth was alive with discussion throughout Cloud Expo Europe! Not just around CloudMapper and interdependency mapping for hybrid environments, but also around our role in the cloud migration lifecycle. This lifecycle includes cloud migration reporting, fed by IT workload discovery and mapping. These are becoming critical components of the front-end for the migration process.
By this time next year, I anticipate much more discussion around the automation of such a migration process. It’s clear that now is the time to stake claim in the new enterprise transformation process and closely associated cloud migration ecosystem.
Tagged with: cloud
, cloud events
, Cloud Expo Europe
Welcome back to our third and final installment of Migrating to the Public Cloud. If you’re just joining us, feel free to catch up on what you’ve missed. So far I’ve covered why and what to migrate to the cloud, as well as how and when to migrate. To wrap up this series I will be focusing on which cloud provider to choose, and where you can go for more information.
Let’s get started!
Who to Choose as your Cloud Provider
IaaS, PaaS, SaaS, MaaS, TaaS, and FaaS. Sure, I made a few of those up, but what’s the difference between them? And what do these all have to do with AWS? Further, how do you decide which cloud provider fits your needs?
The cloud landscape is changing rapidly, as are the types of clouds being provided. For most enterprises you’ll be focusing on IaaS (Infrastructure as a Service) like AWS, SaaS (Software as a Service) like Salesforce.com, and PaaS (Platform as a Service) like EngineYard. Depending on your application architecture and your users you’ll end up using one or more of these services.
- Location, Location, Location.
Consider where your users reside and also where you plan to do business. Ideally your cloud provider will offer a semi-local option so neither latency nor data access/storage (data sovereignty laws) are impacted.
- Consider the Services Offered
One of the more amazing things about public cloud providers is the amount of services they offer. Amazon Web Services alone offers more than 30 different services across their multiple data centers and regions. Microsoft Azure, similarly, offers more than 30 different services to ensure you get the most out of your cloud investment.
- Amazon Web Services
The reality is Amazon Web Services is the 800-pound gorilla in the market, which is why I am giving them a bullet of their own. They’ve been offering public cloud services for quite some time now, and have the process down to a science. They even offer a fair amount of free access so you can get started without paying a penny. They’ve covered just about every location you need and, as mentioned, offer an amazing number of services. While I would caution it is important to look at a few different providers in your selection process, I would also suggest you strongly consider including AWS in that list.
Where Can You Get More Information?
Where do you go to get more information? Below is a curated list that I found particularly helpful in my understanding of the migration journey:
There you have it, folks! You are now fully-prepared to begin your cloud migration journey. My hope is that this series can serve as a reference guide as you get started on this process.
For those who are mourning the end of this series, I have great news! Over the next six weeks ScienceLogic will be producing in-depth documents on cloud migration, and we’ll be hosting a webinar too.
Click here to register for our “Taking the Mystery out of Public Cloud Migration” webinar on March 17!
I encourage you to follow ScienceLogic’s developments in this area as we will be delivering a number of new free tools for you and your team to help you in your journey to the public cloud.
Questions? Comments? Leave them below. Look forward to seeing you up in the clouds!
Tagged with: AWS
, cloud computing
, hybrid cloud
Imagine you’re an enterprise CIO trying to figure this cloud thing out. What are the things that you’d like to better understand and need assurances on? Likely, it would start with the following:
- What is the potential cost of your move?
- What should be moved?
- Why is connecting into a third party data center better than ad hoc connections from your IT shop via your local ISP?
Finally, how do you gain some measure of security and control over all those things operating in the cloud? Is there a way to validate that they actually do belong there? How will you monitor and visualize the resulting hybrid IT infrastructure for troubleshooting and planning?
Now, imagine you’re the world’s biggest datacenter operator: Equinix. You’re housing thousands of the world’s largest enterprise customers, many of which are struggling to find a coherent way to answer all of these, and many other, questions about the cloud.
That’s the challenge that our partner, Equinix, accepted when it chose to augment its premier datacenter operations, and take on the role of cloud facilitator for those large enterprises. So, what did Equinix do differently? Quite a few things.
Let’s start with the Equinix Performance Hub. The Performance Hub is a network extension node for enterprises, with connectivity to the world’s largest telecom carriers. That is in addition to a Direct Access Program for service providers to offer cloud services; a series of Solution Validation Centers where solutions architects can propose ideal reference architectures; a public-facing API for programmatic access to the multi-cloud; and, most recently, the acquisition of Nimbo (professional services for hybrid IT architectures).
Most important was the creation of the Equinix Cloud Exchange, which provides a seamless connection point and cloud ecosystem for enterprises to access the multi-cloud, multi-network giants.
If you were going to pick the perfect partners to invite to that cloud ecosystem, who would you invite? You’d probably start with the world’s most prominent cloud providers: Amazon? Check. Microsoft Azure? Check. Salesforce.com? Check. Softlayer? Check. Google? Check. Cisco Intercloud? Check. But wait! Is just giving the enterprise the option of all of these clouds through a single physical connection (the Cloud Exchange) enough to help the migration cycle?
You’d probably want to include a series of partners that were highly regarded and trusted by those enterprises. Perhaps some of the ones that are being leveraged by those cloud giants to go to market? Someone similar to a Datapipe, a T-Systems, or numerous other MSP partners? Check, check, check.
That’s great! But the enterprise still needs a way to create and execute a plan to get to the cloud. They need a series of tools to discover their IT assets and perhaps the state, health and performance of those IT assets, right? These tools would need to do a variety of things, such as:
- Help uncover what possible workloads belong in the cloud via migration reports.
- Easily ingest live APM data, and possibly business policies from their existing enterprise Service Management tool
- Integrate with a migration tool to make the process easy.
- Support the ongoing operational task of monitoring and managing the resulting distributed hybrid infrastructure – in the Cloud and on-premise.
What if that tool could follow the workload into the cloud? Or perhaps more than one of those clouds, as well as the interdependent assets that remained on-premises, and could validate the architecture, real utilization and performance of that Hybrid IT environment? Well, Equinix found that tool and it is ScienceLogic.
Where would you place such a tool, to be readily available for your go to market partners? For MSPs, SIs and Solutions Architects to leverage as and when needed? Most likely in a location that is simple to access, like the Equinix Cloud Exchange. That’s exactly what Equinix did by selecting and deploying ScienceLogic as the first and favored monitoring and management tool on their Cloud Exchange.
We’re thrilled to have our first Cloud Exchange deployment be in Ashburn, Virginia, literally an arm’s length from the giants of the cloud world, with the second one recently deployed at the AWS facility in Frankfurt, Germany at the end of February. But it doesn’t end there!
Last week we announced our new collaboration with Equinix, aiming to simplify and ease enterprise migration to the cloud. Gaining access to multiple cloud providers via Equinix Cloud Exchange and leveraging ScienceLogic’s integrated monitoring solution, enterprises can achieve improved performance, security, management and cost-control of their entire IT infrastructure.
Read our full press release on our partnership with Equinix here.
Questions? Comments? Leave them below and we’ll be sure to get back to you!
Tagged with: Equinix
, hybrid cloud
Welcome back to our Public Cloud Migration series! I’ve been on the road quite a bit and busy with a number of customers, but I’ve put a few CPU cycles together and worked on this blog. For those unfamiliar with this series, I’m highlighting what you should consider when you look to migrate to the public cloud, focusing on the 5W’s and How.
In case you missed it, you can see part one of this series covering the why and what of cloud migration here. Today’s post will focus on how and when to migrate to the public cloud.
My goal is to introduce key concepts for a successful migration to AWS (or any other public cloud environment, for that matter). I will dive into much greater detail in our upcoming “Taking the Mystery Out of Public Cloud Adoption” webinar and provide even more detailed information in a white paper we are producing subsequent to the webinar. So, treat these as bite-sized morsels to get you ready for the main course!
How To Migrate?
OK, now you know what to migrate, but how do you actually go about doing it? When does a workload move into production? How do you validate that it’s going OK?
If you’re building your applications from the ground up and operating in a DevOps fashion, you need to think about building the application for the cloud. Your app should be smart enough to scale compute resources up and down based on demand (which is where public cloud powered autoscaling fits in).
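To make the "scale up and down based on demand" idea concrete, here is a minimal sketch of the kind of decision a cloud autoscaler (such as AWS Auto Scaling) applies on each evaluation cycle. The thresholds and function name are hypothetical, chosen for illustration; they are not a real AWS API.

```python
def desired_instance_count(current: int, avg_cpu_pct: float,
                           scale_up_at: float = 70.0,
                           scale_down_at: float = 30.0,
                           minimum: int = 1, maximum: int = 10) -> int:
    """Return the instance count the scaling policy would target next."""
    if avg_cpu_pct > scale_up_at:
        return min(current + 1, maximum)   # demand is high: add capacity
    if avg_cpu_pct < scale_down_at:
        return max(current - 1, minimum)   # demand is low: shed capacity
    return current                         # within band: hold steady
```

In a real deployment you would express this as a scaling policy attached to an autoscaling group rather than writing the loop yourself, but the cost benefit is the same: capacity follows demand instead of being sized for peak.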
- Migration Process
A step-by-step process to build, test, and move into production needs to be followed rigorously for a successful rollout. You should prioritize applications and then divide them into chunks. For example, you might first move a front-end server to the cloud, test it, and then move it to production in a hybrid cloud while the backend resides on premises.
As you move, test, and move, you need to ensure you have proper visibility into the application from where it began in your datacenter, to where it ultimately resides in the cloud, and during the transition stage as it moves piecemeal to the cloud. Ideally you should be using the same methods and tools for this visibility to provide an accurate comparison.
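An "accurate comparison" usually means collecting the same metrics, with the same tool, before and after the move, then comparing them directly. A toy sketch of that comparison follows; the metric names and sample numbers are made up for illustration.

```python
def percent_change(baseline: float, migrated: float) -> float:
    """Relative change from the on-prem baseline, as a percentage."""
    return round((migrated - baseline) / baseline * 100.0, 1)

# Same metrics, same collection method, two environments.
on_prem = {"avg_response_ms": 120.0, "error_rate_pct": 0.8}
in_cloud = {"avg_response_ms": 90.0, "error_rate_pct": 0.9}

delta = {metric: percent_change(on_prem[metric], in_cloud[metric])
         for metric in on_prem}
```

With these sample numbers, response time improved by 25% while the error rate crept up slightly, exactly the kind of trade-off a consistent baseline makes visible.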
When To Migrate?
You have an app, a plan, and now it’s time to migrate! Or is it? When’s the right time to move your application into the cloud?
- Return on Investment
Over the past decade a number of companies have made significant capital expenditures in data centers, servers, networking, storage, and virtualization technologies. These investments may still have a better ROI over the length of a project than moving everything to the cloud. It often makes sense to move your newest and oldest applications first.
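A back-of-the-envelope version of that ROI comparison can be written in a few lines. All the figures below are hypothetical: staying on already-purchased hardware carries only operating costs, while the cloud option trades capex for a one-time migration cost plus a monthly bill.

```python
def total_cost(upfront: float, monthly: float, months: int) -> float:
    """Total cost of an option over the life of the project."""
    return upfront + monthly * months

project_months = 36

# Hardware already owned, so no new upfront spend; only power, space, staff.
on_prem = total_cost(upfront=0.0, monthly=4_000.0, months=project_months)

# One-time migration effort plus ongoing usage charges.
cloud = total_cost(upfront=10_000.0, monthly=2_500.0, months=project_months)

cheaper = "cloud" if cloud < on_prem else "on-prem"
```

Run the same arithmetic with your real numbers and project length; a short project on recently purchased hardware can easily flip the answer the other way.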
- Learning Cloud
Before making the jump to cloud it’s vital to make sure your team understands the limitations, strengths, and weaknesses of various providers. Building knowledge in the cloud, however, is easier than it has historically been in IT because of the well-established communities and free resources available on the web. Additionally, the largest providers have started offering free training material as well as certifications to make sure your team is ready for the cloud.
- Third Party Services
Most of the large service providers as well as a number of third party consulting companies can help you migrate individual or groups of applications to the cloud. Other companies, such as ScienceLogic, provide monitoring and management services around cloud products and applications.
Ok, how’s that for a quick Monday morning touch on migrating to the public cloud? If you take only a few things from this post, I’m hoping you gathered:
- The upcoming webinar yours truly is doing is a must-attend (wink, wink, nudge, nudge).
- When you are looking at “how” to migrate, it’s all about process, process, process, visibility, visibility, and visibility.
- For “when” focus on that ROI and make sure your team is ready to manage apps in the public cloud (be sure they are fully trained, etc.).
My next post will focus on the final two questions: who helps you migrate, and where do you migrate your applications to?
Look forward to seeing you next Monday!
Tagged with: cloud computing
, public cloud
In the quest for cloud dominance in today’s crowded market, there is one key attribute (beyond top line revenue) that defines the current leaders: a healthy and vibrant partner ecosystem. The days of monolithic technology stacks that all sit under a single brand are over.
Enterprise customers demand diversity, choice, flexibility, options, and variety to help meet their expanding thirst for hybrid IT solutions. IDC recently put Hybrid Cloud Architectures as their top FutureScape CIO decision imperative for cloud (Source: IDC FutureScape 2014). Hybrid cloud is made up of a mix of on-prem compute, network, storage resources and off-prem cloud services combined with a myriad of management and monitoring technologies that bring it all together.
So, how do you get to the value of cloud faster? Many times that depends on who your trusted advisor is, such as your systems integrator, pro-services consultant, internal IT advisor, or perhaps the LOB owner with a vendor preference. In any case, it will likely take multiple technology providers to achieve project success.
For the past several years, our team at ScienceLogic has been building strategic partnerships with world class technology providers focused on hybrid IT delivery. Our goal has been to provide our customers with choice but also a recommendation of how to go faster. Fortunately we’re not alone in that approach.
One of our strategic focuses has been partnering with Amazon Web Services, the undisputed leader in off-prem cloud services. During our recent participation in the AWS Sales Kick Off in Seattle, it was evident how important a role ecosystem partners play in converting more customers to off-prem cloud. Different categories of partners align with different aspects of moving to the cloud during each phase of that journey. In the ISV category of technology partners, you need to demonstrate how different aspects of your solution play into the overall lifecycle of migrating a workload to the cloud. For the areas that are either not part of our core product or not a focus area (such as Application Performance Management, Security, or Provisioning), we look to partner with other ISVs to create a more unified solution.
ScienceLogic has built robust integrations with strategic partners that help make up this mature ecosystem, so that ultimately we shortcut possible questions or concerns and deliver more value as an ISV team. We believe the technology world is becoming an API economy. APIs provide the necessary information and data to drive visibility, which drives the dynamic changes required in the software-defined data center or cloud. Fortunately, our hybrid IT monitoring platform was architected with these new APIs in mind. That gives us the freedom to orient our focus toward best-of-breed partners to complete our ecosystem.
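The "API economy" point is easiest to see with a small example: two tools expose their data over REST APIs, and an integration joins those views into one record. The payloads below stand in for hypothetical JSON responses from an APM tool and an infrastructure monitoring platform; none of the endpoints or field names reflect a real product API.

```python
# Stand-ins for decoded JSON responses from two different tools' APIs.
apm_response = [{"app": "billing", "latency_ms": 210}]
monitoring_response = [{"app": "billing", "host": "vm-17", "cpu_pct": 92}]

def merge_views(apm, monitoring):
    """Join per-app APM records with infrastructure metrics on app name."""
    infra = {row["app"]: row for row in monitoring}
    return [{**row, **infra.get(row["app"], {})} for row in apm]

unified = merge_views(apm_response, monitoring_response)
# Each unified record now carries both the app's latency and its host's CPU load.
```

The value is in the join: neither tool alone can say "the billing app is slow because its host is pegged," but the combined record can.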
2015 is going to be the year of hybrid IT, but it’s also going to be the year of the ecosystem. Those that partner with the best will likely rise to the top.
Tagged with: AWS
, enterprise cloud
, hybrid cloud
, Hybrid IT
After years of cloud prediction we’re finally at a point of cloud adoption. The efficiencies are real, the technology has matured, and it’s becoming a business requirement. Now comes the hard part: execution.
During this series of blog posts I’m going to help you try to plan your execution strategy for cloud adoption following the simple 5W’s and How. This post focuses on the “why” and the “what” of public cloud migration.
My high-level goal is to introduce key concepts for a successful migration to Amazon Web Services, or AWS. I’m going to dive into much greater detail in our upcoming “Taking the Mystery Out of Public Cloud Migration” webinar and provide even more detailed information in a white paper we are producing subsequent to the webinar. So, treat these posts as bite-sized morsels to get you ready for the main course!
Let’s start off with the “why” of cloud migration. You could use Google and come up with dozens of lists, but they all center on a few key topics:
We could have titled this something different, like agility or a variety of other interchangeable buzzwords, but the premise is the same: a lack of barriers to starting new applications and services. Public cloud providers, and AWS in particular, have been used by startups like Box, Dropbox, and Netflix to get up and running quickly when building a new platform from scratch. However, these same benefits also apply to migrating existing technologies and deploying new technologies at legacy companies.
The startup world quickly found the costs of operating data centers and hosting servers to be a major hurdle for starting a new business. Public cloud providers allow people to start something with minimal investment, and as businesses evolve and update their processes and technologies, the cloud provides a cheaper alternative to traditional on-premises and off-premises data center solutions.
The public cloud is everywhere now and that solves many challenges in building mobile friendly applications. For example, AWS’s Dynamic Auto Scaling allows for an application to grow as the user base grows. Additionally you could use services like CloudFront or distribute your application across the various Regions and Availability Zones that AWS provides, reducing latency and protecting client data for compliance purposes.
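The region-and-latency point can be illustrated simply: measure (or estimate) round-trip latency from your user base to each candidate region and deploy to the nearest one. The region names below are real AWS regions, but the latency figures are invented for the example.

```python
# Hypothetical round-trip times from a European user base to candidate regions.
measured_latency_ms = {
    "us-east-1": 95.0,
    "eu-west-1": 18.0,
    "ap-southeast-2": 240.0,
}

def nearest_region(latencies: dict) -> str:
    """Pick the region with the lowest measured latency."""
    return min(latencies, key=latencies.get)
```

Data sovereignty can then act as a filter before this selection: drop any region that a regulation rules out, and choose the fastest of what remains.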
What to Migrate?
Here’s the big one. For the purposes of this exercise we’ll set aside SaaS providers (Office 365, Gmail, Salesforce, etc.) and focus on your apps and the feasibility of running them in a public, private, or, most probably, a hybrid cloud. Things to consider when making this move:
Sensitivity of Data / Data Warehousing Requirements
Does your data need to reside in one safe place? What about government regulations? As you migrate to public cloud environments these are some of the questions you will need to consider.
How much compute do you need to run your application? You need to consider the amount of compute, the platform on which it’s built, and, most importantly, how the application is used. For example, if the application is heavy in data and storage, it may make more sense to leave it onsite rather than offload it to the public cloud.
Visibility into performance
To answer these questions you need detailed statistics and visibility into how your app is performing.
The cloud, while powerful, isn’t always super simple. With it comes a host of new challenges that your operations staff may not have seen before; for example, the “noisy neighbor” problem (when a single application consumes too many shared resources).
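As a toy illustration of spotting a noisy neighbor, the check below flags any tenant on a shared host whose share of a resource exceeds a fair-use threshold. The tenant names and the 50% threshold are hypothetical; real monitoring tools apply the same idea to live per-tenant metrics.

```python
def noisy_neighbors(resource_share_pct: dict, threshold: float = 50.0) -> list:
    """Return tenants consuming more than `threshold` percent of a shared resource."""
    return [tenant for tenant, pct in resource_share_pct.items() if pct > threshold]

# Hypothetical share of a host's disk I/O consumed by each co-located tenant.
shared_disk_io = {"tenant-a": 12.0, "tenant-b": 71.0, "tenant-c": 17.0}
```

Here tenant-b would be flagged: it alone is consuming most of the shared I/O, which is exactly the condition that degrades its neighbors.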
Now you know key information on why and what you should migrate to the public cloud. Check back here next Monday to learn how and when to migrate.
Questions? Comments? Leave us a note below and we’ll get back to you!
Tagged with: cloud monitoring
, hybrid cloud
, Network Monitoring
, public cloud
Last week saw the inaugural Customer Symposium for ScienceLogic in Australia, or more technically for the Asia-Pac region. Home of the Australian Open, a good cup of coffee, and the launching site for the ever-popular Cricket World Cup that kicked off on Saturday against perennial cricketing enemy—England. The ScienceLogic team descended on Melbourne, ready to absorb the local culture, which was voluntarily thrust upon us in short order.
With the world’s largest cricket pitch, the famed Melbourne Cricket Ground (MCG), as our backdrop, we kicked off the Symposium under the banner of “connecting the cloud.” In a similar vein to the theme percolating in North America and Europe, there was a great deal of interest in the migration of customers to the cloud, as well as in managing AWS, Azure, and other public cloud environments as the trusted advisor to the enterprise.
In a perfect culmination to the day, two customers spoke to the crowd about their experiences with ScienceLogic. Here is an overview of their stories:
First up, VMtech talked about the initial trouble of finding an appropriate vendor to solve the expanding problem of virtual cloud sprawl and hybrid IT, compounded by the challenge of finding a vendor with genuine multi-tenancy and multi-functionality embedded in a single platform. Only one clear leader emerged in that quadrant.
Secondly, an MSP focused on onboarding enterprises to the AWS cloud spoke about making money on both the up-front consultation and the ongoing management of those assets on behalf of customers. As with their North American peers, enterprises in Australia are in search of a trusted advisor to help accelerate their migration to the public clouds via discovery, migration reporting, and validation of the performance and health of assets in the cloud. These are the things that have made ScienceLogic emerge as the preferred partner to AWS, Equinix, and others at the front end (i.e., discovery) and back end (i.e., run/operate) of the migration cycle.
We also saw another major theme emerge: the need to correlate more data, and to understand best practices in monitoring and managing the apps and services layered over hybrid IT environments.
The idea that a platform can take data, events, and alerts from multiple layers in the stack (including apps) and turn that information into a relevant, contextual IT services view is groundbreaking. Giving SysAdmins, end users, network admins, and the like access to that view means that what we are achieving with our partner integrations is going to bear fruit in managing customers’ expectations going forward.
In addition, knowing what to expose and look for when managing at the services layer will be one of the core concepts that the new ScienceLogic SPARC program will address, via suggested monitoring templates provisioned on a service-by-service basis.
So, outside of the hybrid IT spectrum, what did we learn from our Aussie mates while we were down under?
- Aussie is pronounced Ozzie, and not Ossie.
- Its capital is not Vienna.
- Footie is what they call football, and it’s neither football nor soccer.
- Vegemite is always an appropriate spread no matter what the occasion.
- A “walkabout,” “fair dinkum” and “Yeah mate” can mean several things, depending on the time of day, and whether you’re making any sense.
And finally, if there was a mantra that best summed up what we’ve achieved for ops teams in Australia to date: “no worries mate.”
Tagged with: hybrid cloud
If seeing is believing, then gaining a view into hybrid cloud environments is pretty important. That’s why I chose that as the last topic in our hybrid IT series with our CTO, Antonio Piraino.
Our focus today: What kinds of questions should IT leaders ask themselves about visibility into their cloud resources?
If you’re new to this series, I secretly ran our CTO’s car battery down so he couldn’t start his car. Then, I graciously offered a jumpstart if he’d commit to sitting with me and discussing the top questions he hears from prospects, customers, and industry pundits about hybrid IT. What can I say, I am just a generous guy!
Let’s jump right in!
Q1: Can lack of visibility and control hold you back? If so, how?
As cloud platforms begin to be consumed on an IT services basis, IT service management must become about more than just SLAs. Having real confidence in your decision to migrate workloads to the cloud requires that you enjoy transparency and visibility in that cloud environment.
That confidence will, in turn, lead to trust in the decision maker – you – and the cloud itself. Although cultural change and internal acceptance are ongoing topics, the expanded message is really about helping C-levels feel assured of their decisions through adequate control and security measures. Achieving such assurance is no easy task and requires a modern approach to keeping the costs down for doing so. Transparency and visibility into the cloud, in this instance, are essential and, ultimately, cost effective.
Q2: Do you know whether the service health & performance of your workloads are uncompromised?
Most legacy monitoring and management systems can take a latency measurement from the end-user perspective to the applicable web service. Others simply show the uptime and availability of a physical piece of infrastructure. Since not all hiccups in infrastructure cause issues for end consumers, what is truly needed is visibility and control of the physical IT infrastructure alongside a separate view of how the services that rely on that infrastructure are performing.
Even more important is the ability to correlate data metrics in intelligent ways that illuminate the health and risk a critical service will begin to face in the coming hours, days, or weeks. That’s exactly what a modern monitoring system should be able to do.
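As a minimal sketch of what such correlation can look like, the snippet below fits a simple linear trend to a metric's recent samples and projects when it will cross a threshold. The function and data names are illustrative assumptions, not ScienceLogic APIs; a production system would use far richer models.

```python
# Hypothetical sketch: projecting when a metric will breach a threshold.
# forecast_breach and the sample data are illustrative, not a real API.

def forecast_breach(samples, threshold):
    """Fit a linear trend to (hour, value) samples and estimate the hour
    at which the metric crosses the threshold.
    Returns None if the trend is flat or improving."""
    n = len(samples)
    mean_x = sum(x for x, _ in samples) / n
    mean_y = sum(y for _, y in samples) / n
    slope = sum((x - mean_x) * (y - mean_y) for x, y in samples) / \
            sum((x - mean_x) ** 2 for x, _ in samples)
    if slope <= 0:
        return None  # metric not trending upward; no breach predicted
    intercept = mean_y - slope * mean_x
    return (threshold - intercept) / slope  # projected hour of breach

# Disk usage climbing roughly 1% per hour; a 90% threshold is projected
# to be breached around hour 20.
usage = [(0, 70.0), (1, 71.2), (2, 71.9), (3, 73.1), (4, 74.0)]
```

Even this toy forecast turns raw samples into an actionable "hours until trouble" figure, which is the kind of health-and-risk signal the text describes.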
Only through the collection of data, the normalization of that data, and the presentation of results in an intuitive format can analytics be truly useful and actionable, including those driven by monitoring tools. In a hybrid cloud environment, such insight becomes even more critical for CIOs who need to maintain control of all elements across the IT spectrum.
Q3: Can you safely manage delivery of the new breed of hybrid IT services?
ScienceLogic’s 2013 survey of enterprises attending Cloud Expo showed an extremely low level of support for point tools (6%). This is understandable given what the market has experienced through the sprawl of point tools: large inefficiencies and unnecessary per-tool costs are likely results of their overuse.
The trend away from point tools has several causes: the changing nature of hybrid cloud environments, the lack of true integration among corporate acquisitions, and the advent of converged infrastructure. Vendors have done a poor job of converging the management tools for those technologies, leaving the door open for vendor-agnostic monitoring and management specialists to fill the gaps. Delivering the new breed of hybrid IT services safely will depend on choosing the right vendor-agnostic monitoring and management solutions for your organization.
Q4: What should you be asking of your service providers?
Recent analysis undertaken by the Enterprise Strategy Group found a number of variations within cloud storage SLAs alone. MSPs offering bulk storage services online typically have cloud SLAs spelling out what recourse users are entitled to. Typical service availability reads at a traditional 99.9% level of uptime. The shortcoming in this form of SLA is that it still allows approximately nine hours of annual downtime. Nine hours is a lot of time when critical business applications are involved.
That’s why the more progressive SLAs include the response time of the web service, how often a retry is allowed, retention policies, the number of copies, and a tiered credit guarantee with higher credits for lower service levels delivered. CSPs can offer geographically dispersed options to improve backup and recovery and, by extension, service levels. Hence the need, in the cloud era, for more rigorous management tools that provide increased visibility, control, and assurance for cloud applications.
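The arithmetic behind those "nines" is worth spelling out, since small-looking availability differences translate into large differences in allowed downtime. A quick sketch:

```python
# Annual downtime permitted by an availability SLA, assuming a 365-day year.
def annual_downtime_hours(availability_pct):
    return (1 - availability_pct / 100) * 365 * 24

# 99.9% ("three nines") still allows roughly 8.8 hours of downtime per year,
# while 99.99% ("four nines") shrinks that to about 53 minutes.
three_nines = annual_downtime_hours(99.9)
four_nines = annual_downtime_hours(99.99)
```

Each extra nine cuts the permitted downtime by a factor of ten, which is why the headline uptime percentage alone tells you so little without the rest of the SLA's terms.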
Q5: Can all of these questions and fears be answered and mitigated by the correct people, process, and tools?
CIOs and their organizations need the right people, which means skilled and up-to-date IT staff. Processes are often particular to the individual organization, its goals, and its resources. The correct tools are the ones that everybody needs and can use easily; they keep pace with the ever-changing characteristics of the environments they manage, and they adapt to all configurations. In particular, we have found that the correct tools can reduce the dependency on additional human resources and, more often than not, actually help align internal processes.
For example, having a series of escalation and remediation procedures aligned to the most common performance and security issues in the cloud is a must. The correct tool accomplishes this by first looking at the business policies demanded of the cloud, and then associating all of the possible monitoring and alerts around those business policies with automated actions. The correct tool can restructure the way in which operations are done day to day for maximum efficiency, on-premises or in the cloud.
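The alert-to-action association described above can be sketched as a simple runbook table. All of the names below (RUNBOOK, remediate, the alert types) are hypothetical illustrations, not features of any particular product.

```python
# Illustrative runbook: mapping common alert types to automated actions,
# with a fallback escalation when no automation exists for an alert.

RUNBOOK = {
    "disk_near_full": lambda ctx: f"expand volume {ctx['volume']}",
    "cpu_sustained":  lambda ctx: f"scale out group {ctx['group']}",
    "cert_expiring":  lambda ctx: f"open renewal ticket for {ctx['host']}",
}

def remediate(alert_type, context):
    """Look up the automated action for an alert; escalate if none exists."""
    action = RUNBOOK.get(alert_type)
    if action is None:
        return "escalate to on-call engineer"  # no automated runbook entry
    return action(context)
```

The design point is that the policy-to-action mapping lives in data rather than in ad hoc scripts, so operations teams can review and extend it as business policies change.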
And there you have it! Are there any other questions IT leaders should ask themselves about visibility? If so, tell us in the comments below!
See our previous posts in this series covering cloud migration, hybrid IT security, and cutting costs with the cloud.
Tagged with: cloud computing, cloud management, cloud monitoring, IT Operations Management