Migrating to the Public Cloud: How & When

March 2nd, 2015

Welcome back to our Public Cloud Migration series! I’ve been on the road quite a bit and busy with a number of customers, but I’ve put a few CPU cycles together and worked on this blog. For those unfamiliar with this series, I’m highlighting what you should consider when you look to migrate to the public cloud, focusing on the 5W’s and How.

In case you missed it, you can see part one of this series covering the why and what of cloud migration here. Today’s post will focus on how and when to migrate to the public cloud.

My goal is to introduce key concepts for a successful migration to AWS (or any other public cloud environment, for that matter). I will dive into much greater detail in our upcoming “Taking the Mystery Out of Public Cloud Adoption” webinar and provide even more detailed information in a white paper we are producing subsequent to the webinar. So, treat these as bite-sized morsels to get you ready for the main course!

How To Migrate?

OK, now you know what to migrate, but how do you actually go about doing it? When does a workload move into production? How do you validate that it’s going OK?

  • DevOps
    If you’re building your applications from the ground up and operating in a DevOps fashion, design them for the cloud from day one. Your app should be smart enough to scale compute resources up and down based on demand, which is where public cloud-powered autoscaling fits in (a minimal sketch follows this list).
  • Migration Process
    A step-by-step process to build, test, and move into production needs to be followed rigorously for a successful rollout. Move applications in priority order, and break each one into manageable chunks. For example, you might start by moving a front-end server to the cloud, test it, and then move it to production in a hybrid cloud while the back end remains on premises.
  • Visibility
    As you move and test each piece, you need to ensure you have proper visibility into the application: where it began in your datacenter, where it ultimately resides in the cloud, and everywhere in between as it moves piecemeal during the transition. Ideally, you should use the same methods and tools throughout so the comparison stays accurate.
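
To make the autoscaling point in the DevOps bullet concrete, here is a minimal sketch using boto3. The group name, metric, and thresholds are placeholders rather than recommendations; it simply wires a CloudWatch CPU alarm to a scale-out policy on an existing Auto Scaling group.

    import boto3

    autoscaling = boto3.client("autoscaling")
    cloudwatch = boto3.client("cloudwatch")

    GROUP = "my-web-tier"  # hypothetical Auto Scaling group name

    # Add one instance whenever the alarm below fires.
    policy = autoscaling.put_scaling_policy(
        AutoScalingGroupName=GROUP,
        PolicyName="scale-out-on-cpu",
        AdjustmentType="ChangeInCapacity",
        ScalingAdjustment=1,
        Cooldown=300,
    )

    # Fire when average CPU across the group stays above 70% for 10 minutes.
    cloudwatch.put_metric_alarm(
        AlarmName=GROUP + "-high-cpu",
        Namespace="AWS/EC2",
        MetricName="CPUUtilization",
        Dimensions=[{"Name": "AutoScalingGroupName", "Value": GROUP}],
        Statistic="Average",
        Period=300,
        EvaluationPeriods=2,
        Threshold=70.0,
        ComparisonOperator="GreaterThanThreshold",
        AlarmActions=[policy["PolicyARN"]],
    )

A matching scale-in policy and low-CPU alarm would complete the loop; the point is that capacity follows demand instead of a fixed hardware footprint.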

When To Migrate?

You have an app, a plan, and now it’s time to migrate! Or is it? When’s the right time to move your application into the cloud?

  • Return on Investment
    Over the past decade a number of companies have made significant capital expenditures in data centers, servers, networking, storage, and virtualization technologies. These investments may still have a better ROI over the length of a project than moving everything to the cloud. It often makes sense to move your newest applications (with no sunk hardware costs behind them) and your oldest (running on gear that is due for a refresh) first.
  • Learning Cloud
    Before making the jump to cloud, it’s vital to make sure your team understands the limitations, strengths, and weaknesses of the various providers. Building knowledge in the cloud, however, is easier than it has historically been in IT because of the well-established communities and free resources available on the web. Additionally, the largest providers have started offering free training material as well as certifications to make sure your team is ready for the cloud.
  • Third Party Services
    Most of the large service providers, as well as a number of third-party consulting companies, can help you migrate individual applications or groups of applications to the cloud. Other companies, such as ScienceLogic, provide monitoring and management services around cloud products and applications.

OK, how’s that for a quick Monday morning take on migrating to the public cloud? If you take only a few things from this post, I’m hoping you gathered:

  • The upcoming webinar yours truly is doing is a must-attend (wink, wink, nudge, nudge).
  • When you are looking at “how” to migrate, it’s all about process, process, process, visibility, visibility, and visibility.
  • For “when” focus on that ROI and make sure your team is ready to manage apps in the public cloud (be sure they are fully trained, etc.).

My next post will focus on the final two questions: Who helps you migrate? And where should you migrate your applications?

Look forward to seeing you next Monday!

Building a Stronger Hybrid IT Ecosystem

February 25th, 2015

In the quest for cloud dominance in today’s crowded market, there is one key attribute (beyond top line revenue) that defines the current leaders: a healthy and vibrant partner ecosystem. The days of monolithic technology stacks that all sit under a single brand are over.

Enterprise customers demand diversity, choice, flexibility, options, and variety to help meet their expanding thirst for hybrid IT solutions. IDC recently put Hybrid Cloud Architectures as their top FutureScape CIO decision imperative for cloud (Source: IDC FutureScape 2014). Hybrid cloud is made up of a mix of on-prem compute, network, and storage resources and off-prem cloud services, combined with a myriad of management and monitoring technologies that bring it all together.

So, how do you get to the value of cloud faster? Many times that depends on who your trusted advisor is, whether that’s your systems integrator, pro-services consultant, internal IT advisor, or perhaps the LOB owner with a vendor preference. In any case, it will likely take multiple technology providers to achieve project success.

For the past several years, our team at ScienceLogic has been building strategic partnerships with world class technology providers focused on hybrid IT delivery. Our goal has been to provide our customers with choice but also a recommendation of how to go faster. Fortunately we’re not alone in that approach.

One of our strategic focuses has been partnering with Amazon Web Services, the undisputed leader in off-prem cloud services. During our recent participation in the AWS Sales Kick Off in Seattle, it was evident how important a role ecosystem partners play in the overall success of converting more customers to off-prem cloud. There are different categories of partners that align with different aspects of moving to the cloud during each phase of that journey. In the ISV category of technology partners, you need to demonstrate how different aspects of your solution play into the overall lifecycle of migrating a workload to the cloud. For the areas that are either not part of our core product or not a focus area (such as Application Performance Management, Security, or Provisioning), we look to partner with other ISVs to create a more unified solution.

ScienceLogic has built robust integrations with strategic partners that help make up this mature ecosystem, so that ultimately we shortcut possible questions or concerns and deliver more value as an ISV team. We believe the technology world is becoming an API economy. APIs provide the necessary information and data to drive visibility, which drives the dynamic changes required in the Software Defined Data Center or cloud. Fortunately, our hybrid IT monitoring platform was architected with these new APIs in mind. That gives us the freedom to orient our focus to best-of-breed partners to complete our ecosystem.

2015 is going to be the year of hybrid IT, but it’s also going to be the year of the ecosystem. Those that can partner with the best will likely rise to the top.

Migrating to the Public Cloud: Why & What

February 23rd, 2015

After years of cloud predictions, we’re finally at the point of cloud adoption. The efficiencies are real, the technology has matured, and it’s becoming a business requirement. Now comes the hard part: execution.

During this series of blog posts I’m going to help you try to plan your execution strategy for cloud adoption following the simple 5W’s and How. This post focuses on the “why” and the “what” of public cloud migration.

My high-level goal is to introduce key concepts for a successful migration to Amazon Web Services, or AWS. I’m going to dive into much greater detail in our upcoming “Taking the Mystery Out of Public Cloud Migration” webinar and provide even more detailed information in a white paper we are producing subsequent to the webinar. So, treat these posts as bite-sized morsels to get you ready for the main course!

Why Migrate?

Let’s start off with the “why” of cloud migration. You could use Google and come up with dozens of lists, but they all center on a few key topics:

  • Speed 

    We could have titled this something different, like agility or a variety of other interchangeable buzzwords, but the premise is the same: a lack of barriers to starting new applications and services. Public cloud providers, and AWS in particular, have been used by startups like Box, Dropbox, and Netflix to get up and running quickly when building a new platform from scratch. However, these same benefits also apply to legacy companies migrating existing technologies and deploying new ones.

  • Cost Savings 

    The startup world quickly found the costs of operating data centers and hosting servers to be a major hurdle for starting a new business. Public cloud providers allow people to start something with minimal investment, and as businesses evolve, updating processes and technologies, the cloud provides a cheaper alternative to traditional on-premises and off-premises data center solutions.

  • Mobile Friendly

    The public cloud is everywhere now, and that solves many challenges in building mobile-friendly applications. For example, AWS Auto Scaling allows an application to grow as its user base grows. Additionally, you can use services like CloudFront, or distribute your application across the various Regions and Availability Zones that AWS provides, to reduce latency and keep client data in the right locations for compliance purposes (see the sketch after this list).
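
As a small illustration of the Regions and Availability Zones mentioned above, this boto3 sketch simply enumerates the Regions your account can see and the Availability Zones inside each one. Deciding which of them satisfy your latency and data-residency requirements is the real design work.

    import boto3

    # List every EC2 Region visible to this account, then the AZs inside each.
    ec2 = boto3.client("ec2", region_name="us-east-1")
    regions = [r["RegionName"] for r in ec2.describe_regions()["Regions"]]

    for region in regions:
        regional = boto3.client("ec2", region_name=region)
        zones = regional.describe_availability_zones()["AvailabilityZones"]
        names = [z["ZoneName"] for z in zones if z["State"] == "available"]
        print(region, ":", ", ".join(names))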

What to Migrate?

Here’s the big one. For the purposes of this exercise we’ll set aside SaaS providers (Office 365, Gmail, Salesforce, etc.) and focus on your apps and the feasibility of running them in a public, private, or, most probably, hybrid cloud. Things to consider when making this move:

  • Sensitivity of Data / Data Warehousing Requirements

    Does your data need to reside in one safe place? What about government regulations? As you migrate to public cloud environments, these are some of the questions you will need to consider.

  • Application Requirements

    How much compute do you need to run your application? You need to consider the amount of compute, the platform on which the application is built, and, most importantly, how the application is used. As an example, if the application is heavy in data and storage, it may make more sense to leave it onsite rather than offload it to the public cloud.

  • Visibility into performance

    To answer the questions above, you need detailed statistics and visibility into how your app is performing.

  • New Challenges

    The cloud, while powerful, isn’t always super simple. With it comes a host of new challenges that your operations staff may not have seen before, for example, the “noisy neighbor” problem (when another tenant’s workload consumes so much of a shared resource that your own performance suffers).

Now you know key information on why and what you should migrate to the public cloud. Check back here next Monday to learn how and when to migrate.

Questions? Comments? Leave us a note below and we’ll get back to you!

ScienceLogic Symposium: We Can Learn a Lot From Our Aussie Counterparts

February 17th, 2015

Last week saw the inaugural Customer Symposium for ScienceLogic in Australia, or more technically for the Asia-Pac region. Melbourne is home to the Australian Open, a good cup of coffee, and the launch of the ever-popular Cricket World Cup, which kicked off on Saturday against perennial cricketing enemy England. The ScienceLogic team descended on Melbourne, ready to absorb the local culture, which was thrust upon us in short order, whether we volunteered or not.

With the world’s largest cricket ground, the famed Melbourne Cricket Ground (MCG), as our backdrop, we kicked off the Symposium under the banner of “connecting the cloud.” In a similar vein to the theme percolating in North America and Europe, there was a great deal of interest in customer migration to the cloud, as well as in managing AWS, Azure, and other public cloud environments as the trusted advisor to the enterprise.

In a perfect culmination to the day, two customers spoke to the crowd about their experiences with ScienceLogic.  Here is an overview of their stories:

First up, VMtech talked about the initial trouble of finding an appropriate vendor to solve the expanding problem of virtual cloud sprawl and hybrid IT, and of finding a vendor with genuine multi-tenancy and multi-functionality embedded in a single platform; only one clear leader emerged in that quadrant.

Secondly, an MSP focused on onboarding enterprises to the AWS cloud spoke about making money on both the up-front consultation and the ongoing management of those assets on behalf of customers. As with their North American peers, enterprises in Australia are in search of a trusted advisor to help accelerate their migration to the public clouds via discovery, migration reporting, and validation of the performance and health of assets in the cloud: all of the things that have made ScienceLogic emerge as the preferred partner to AWS, Equinix, and others at the front end (i.e., discovery) and back end (i.e., run/operate) of the migration cycle.

We also saw another major theme emerge: the need to correlate more data, and understand best practices in monitoring and managing apps and services overlaying in hybrid IT environments.

The idea that a platform can take data, events, and alerts from multiple layers in the stack (including apps) and turn that information into a relevant and contextual IT services view is groundbreaking. Putting that view in the hands of SysAdmins, end users, network admins, and the like means that what we are achieving with our partner integrations is going to bear fruit in managing customers’ expectations going forward.

In addition, knowing what to expose and look for when managing at the services layer will be one of the core concepts that the new ScienceLogic SPARC program will address, via suggested monitoring templates provisioned on a service-by-service basis.

So, outside of the hybrid IT spectrum, what did we learn from our Aussie mates while we were down under?

  • Aussie is pronounced Ozzie, and not Ossie.
  • Its capital is not Vienna.
  • Footie is what they call football, and it’s neither American football nor soccer.
  • Vegemite is always an appropriate spread no matter what the occasion.
  • A “walkabout,” “fair dinkum” and “Yeah mate” can mean several things, depending on the time of day, and whether you’re making any sense.

And finally, if there was a mantra that best summed up what we’ve achieved for ops teams in Australia to date: “no worries mate.”

Seeing is Believing: Gain Visibility into Hybrid Cloud Environments

February 16th, 2015

If seeing is believing, then gaining a view into hybrid cloud environments is pretty important. That’s why I chose that as the last topic in our hybrid IT series with our CTO, Antonio Piraino.

Our focus today: What kinds of questions should IT leaders ask themselves about visibility into their cloud resources?

If you’re new to this series, I secretly ran our CTO’s car battery down so he couldn’t start his car. Then, I graciously offered a jumpstart if he’d commit to sitting with me and discussing the top questions he hears from prospects, customers, and industry pundits about hybrid IT. What can I say, I am just a generous guy!

Let’s jump right in!

Q1: Can lack of visibility and control hold you back? If so, how?

As cloud platforms begin to be consumed on an IT services basis, IT service management has to become about more than just SLAs. Having real confidence in your decision to migrate workloads to the cloud requires transparency and visibility into that cloud environment.

That confidence will, in turn, lead to trust in the decision maker – you – and the cloud itself. Although cultural change and internal acceptance are ongoing topics, the expanded message is really about helping C-levels feel assured of their decisions through adequate control and security measures. Achieving such assurance is no easy task, and it requires a modern approach that keeps the cost of doing so down. Transparency and visibility into the cloud, in this instance, are essential and, ultimately, cost effective.

Q2: Do you know whether the service health & performance of your workloads are uncompromised?

Most legacy monitoring and management systems are able to take a latency measurement from the end-user perspective to the applicable web service. Others simply show the uptime and availability of a physical piece of infrastructure. Since not all hiccups in infrastructure cause issues for end consumers, what’s truly needed is visibility into and control of the physical IT infrastructure, plus a separate view of how the services that rely on that infrastructure are performing.

Even more important is the ability to correlate data metrics in intelligent ways that illuminate the health and risk a critical service will begin to face in the coming hours, days, or weeks. That’s exactly what a modern monitoring system should be able to do.

Only through the collection of data, the normalization of that data, and the presentation of results in an intuitive format can analytics, including those driven by monitoring tools, be truly useful and actionable. In a hybrid cloud environment, such insight becomes even more critical for CIOs who need to maintain control of all elements across the IT spectrum.
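
As a toy example of what “collect, normalize, present” can look like at the smallest possible scale, the sketch below pulls average CPU for a hypothetical EC2 instance from CloudWatch and times a single HTTP probe against the service it backs, so the infrastructure view and the service view can be compared side by side. A real monitoring platform does far more, but the shape of the work is the same.

    import time
    import urllib.request
    from datetime import datetime, timedelta

    import boto3

    INSTANCE_ID = "i-0123456789abcdef0"   # hypothetical instance
    SERVICE_URL = "https://example.com/"  # hypothetical front-end URL

    # Infrastructure view: average CPU over the last hour, in 5-minute buckets.
    cloudwatch = boto3.client("cloudwatch")
    stats = cloudwatch.get_metric_statistics(
        Namespace="AWS/EC2",
        MetricName="CPUUtilization",
        Dimensions=[{"Name": "InstanceId", "Value": INSTANCE_ID}],
        StartTime=datetime.utcnow() - timedelta(hours=1),
        EndTime=datetime.utcnow(),
        Period=300,
        Statistics=["Average"],
    )
    cpu = sorted(stats["Datapoints"], key=lambda d: d["Timestamp"])

    # Service view: how long one request to the front end actually takes.
    start = time.time()
    urllib.request.urlopen(SERVICE_URL, timeout=10).read()
    latency_ms = (time.time() - start) * 1000

    for point in cpu:
        print(point["Timestamp"], "CPU avg %.1f%%" % point["Average"])
    print("Current end-user latency: %.0f ms" % latency_ms)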

Q3: Can you safely manage delivery of the new breed of hybrid IT services?

ScienceLogic’s 2013 survey of enterprises attending Cloud Expo showed an extremely low level of support for point tools (6%). This is understandable, given what the market has experienced through the sprawl of point tools – large inefficiencies as well as unnecessary costs per tool are likely results of their overuse.

The trend away from point tools has several causes: the changing nature of hybrid cloud environments, the lack of true integration among tools brought together through corporate acquisitions, and the advent of converged infrastructure. Vendors have done a poor job of converging the management tools for those technologies, leaving the door open for vendor-agnostic monitoring and management specialists to fill the gaps. Delivering the new breed of hybrid IT services safely will depend on choosing the right vendor-agnostic monitoring and management solutions for your organization.

Q4: What should you be asking of your service providers?

Recent analysis undertaken by the Enterprise Strategy Group found that within cloud storage SLAs alone, there were a number of variations. MSPs offering bulk storage services online typically have cloud SLAs spelling out what users are entitled to for recourse. Typical service availability reads at a traditional 99.9% level of uptime. The shortcoming in this form of SLA is that it still represents approximately nine hours of annual downtime. Nine hours is a lot of time when critical business applications are involved.
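
For reference, the annual downtime a given availability level allows is simple arithmetic, as the quick sketch below shows (8,766 hours is the average length of a year).

    HOURS_PER_YEAR = 365.25 * 24  # 8,766 hours

    for sla in (0.999, 0.9995, 0.9999):
        allowed = (1 - sla) * HOURS_PER_YEAR
        print("%.2f%% uptime -> %.1f hours of downtime per year" % (sla * 100, allowed))

    # 99.90% uptime -> 8.8 hours of downtime per year
    # 99.95% uptime -> 4.4 hours of downtime per year
    # 99.99% uptime -> 0.9 hours of downtime per year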

That’s why the more progressive SLAs include the response time of the web service, how often a retry is allowed, retention policies, the number of copies, and a tiered credit guarantee with higher credits for lower service levels delivered. CSPs can offer geographically dispersed options to increase backup and recovery capabilities and, by extension, service levels. Hence the need, in the era of cloud, for more rigorous management tools that provide increased visibility, control, and assurance to private cloud applications.

Q5: Can all of these questions and fears be answered and mitigated by the correct people, process, and tools?

CIOs and their organizations need the right people, which means skilled and up-to-date IT staff. Processes are often particular to the individual organization, its goals, and its resources. The correct tools are the ones that everybody needs and can actually use; they keep up with the ever-changing characteristics of whatever they monitor; and they adapt to all configurations. In particular, we have found that the correct tools can reduce the dependency on additional human resources and, more often than not, will actually help the alignment of internal processes.

For example, having a series of escalation and remediation procedures aligned to a variety of the most common performance and security issues in the cloud is a must. The correct tool accomplishes this by first looking at the business policies demanded of the cloud, and then associating all of the possible monitoring and alerts around those business policies with automated actions. The correct tool can restructure the way in which operations are done day to day for maximum efficiency, on-premises or in the cloud.
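
As a deliberately tiny illustration of that idea, the sketch below maps a few hypothetical alert types to first-response actions. The alert names and actions are invented for the example; a real platform would drive this table from business policy rather than hard-coding it.

    # Hypothetical mapping of common cloud alerts to automated first responses.
    RUNBOOK = {
        "instance_unreachable": "restart instance and notify on-call engineer",
        "disk_nearly_full":     "expand volume and open a capacity ticket",
        "noisy_neighbor":       "migrate workload to a less contended host",
        "cost_threshold_hit":   "page budget owner and freeze new provisioning",
    }

    def remediate(alert_type):
        """Return the automated action for an alert, or escalate to a human."""
        return RUNBOOK.get(alert_type, "escalate to operations team for triage")

    print(remediate("disk_nearly_full"))
    print(remediate("unknown_condition"))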

And there you have it! Are there any other questions IT leaders should ask themselves about visibility? If so, tell us in the comments below!

See our previous posts in this series covering cloud migration, hybrid IT security, and cutting costs with the cloud.

Could Hybrid IT Help You Cut Costs?

February 11th, 2015

Welcome back to our interview series with ScienceLogic CTO, Antonio Piraino! If you’re new to the hybrid IT party, check out our previous posts on cloud migration and IT security.

Today I’m focused on the green stuff (well, at least if you live in the US) – money. We often hear that the public cloud saves money; at least I do. But the thing is, it’s a bit more complicated than that. So, how do you get the best bang for your buck? I sat down with Antonio to get the lowdown on the financial implications of hybrid IT.

Let’s get started!

Q1: Do you know your total cost of ownership (TCO)?

All too often, hybrid cloud migrations result in sticker shock, especially following a series of cloud IaaS deployments without any reservation or contract in place. Lack of control over the total volume of auto scaling allowed for instances only makes the sticker shock worse. Money spent in one place, however, can mitigate expenses elsewhere. 

For more strategic cloud deployments, you should carefully balance the seemingly high cost of an IaaS deployment against historical operations, MTTR, licensing, human resources, networking, storage, and hardware maintenance and operational costs. Even though many of these are sunk costs, and even if you fear the budget for a move to the cloud is insufficient, very defensible calculators are available to show the long-term TCO reduction possible with the cloud, if all variables are included.
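
A TCO calculator does not have to be elaborate to be useful. The sketch below compares a three-year on-premises estimate with a cloud estimate; every figure is a made-up placeholder, and the point is simply that all of the cost lines (people, maintenance, and licensing included) belong in the comparison.

    YEARS = 3

    # All figures are hypothetical placeholders; substitute your own numbers.
    on_prem = {
        "hardware_and_storage": 250000,   # up-front capital expense
        "maintenance_per_year": 40000,
        "licensing_per_year": 30000,
        "ops_staff_per_year": 120000,
    }
    cloud = {
        "iaas_spend_per_year": 140000,    # instances, storage, bandwidth
        "migration_one_time": 60000,
        "ops_staff_per_year": 80000,      # smaller footprint to manage
    }

    on_prem_tco = on_prem["hardware_and_storage"] + YEARS * (
        on_prem["maintenance_per_year"]
        + on_prem["licensing_per_year"]
        + on_prem["ops_staff_per_year"]
    )
    cloud_tco = cloud["migration_one_time"] + YEARS * (
        cloud["iaas_spend_per_year"] + cloud["ops_staff_per_year"]
    )

    print("3-year on-prem TCO: $%s" % format(on_prem_tco, ","))
    print("3-year cloud TCO:   $%s" % format(cloud_tco, ","))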

Q2: Are you getting the best bang for the buck (cost per unit of performance)?

Notwithstanding the above TCO discussion, the recent Cloud Industry Forum survey confirmed that ROI vs. on-premises delivery was not the main reason for choosing the cloud. Rather, the primary measuring sticks for making the move were flexibility of delivery (58%), scalability (65%), and general performance expectations alongside operational cost savings (15%). The challenge for a CIO who wants to examine cost as a justification, however, lies in the fact that not all clouds make historical performance metrics available, at least from a per region perspective.

In our experience, we’ve found material differences in performance, even across the multiple regions of a single cloud platform, and those differences can outweigh a lower price.

Q3: Are you aware of your over-provisioned, forgotten resources and runaway workloads?

Virtualization resolved the issue of physical server sprawl in the datacenter, only for it to be replaced by VM sprawl. Similarly, cloud services introduce both a benefit and a longer-term hidden threat: an abstraction of infrastructure control that increases over time.

As an example, many users within AWS are firing up EC2 instances alongside numerous EMR (Elastic MapReduce, which uses Hadoop to process large amounts of data) clusters for specific jobs. Once the job is complete, they will often shut down the EC2 instances but forget about the EMR clusters. Those clusters are no longer doing useful work, but they are still sitting out in the ether, running idle and costing the company money for unnecessary resources. Once the EC2 instances are removed, detecting the idle existence of the EMR clusters is nearly impossible without an outside assessment; hence the need for independent tools to keep track of these scenarios.
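
A lightweight way to catch that specific scenario is to ask the EMR API for clusters sitting in the WAITING state, meaning they are provisioned and billing but running no steps. The boto3 sketch below is one rough version of that check; the states and fields come from the EMR API, while the judgment about what counts as “forgotten” is still yours.

    import boto3

    emr = boto3.client("emr")

    # Clusters in the WAITING state are up and billing but not running any steps.
    idle = emr.list_clusters(ClusterStates=["WAITING"])["Clusters"]

    for cluster in idle:
        created = cluster["Status"]["Timeline"]["CreationDateTime"]
        print("Idle EMR cluster: %s (%s), created %s"
              % (cluster["Name"], cluster["Id"], created))

    if not idle:
        print("No idle EMR clusters found.")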

Q4: Do you know the right payment model for your cloud deployment?

It is a well-known but fascinating fact that AWS has dropped its prices 42 times since 2006. Furthermore, the cost of an Amazon EC2 instance has decreased 56% in just the past two years. In turn, this has motivated many other cloud providers both to reduce the cost of their cloud offerings and to disperse the overall cost of their cloud solutions by offering a series of discrete cloud modules or components that are often difficult to quantify.

The difficulty with cloud platforms is the added challenge of choosing among spot pricing (instances you bid on and keep until a higher bid comes through), on-demand pricing (by the hour), and reserved instances (for dedicated or committed resources). Add to that the roughly 40 services offered by AWS, and aggregating, planning, and limiting cost becomes a real challenge.
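
To get a feel for the spot side of that equation, the sketch below pulls a day of spot price history for one instance type and compares the average with an on-demand rate. The on-demand figure is hard-coded here as a placeholder, since published prices change frequently.

    from datetime import datetime, timedelta

    import boto3

    INSTANCE_TYPE = "m3.large"
    ON_DEMAND_HOURLY = 0.140  # placeholder on-demand rate; check current pricing

    ec2 = boto3.client("ec2", region_name="us-east-1")
    history = ec2.describe_spot_price_history(
        InstanceTypes=[INSTANCE_TYPE],
        ProductDescriptions=["Linux/UNIX"],
        StartTime=datetime.utcnow() - timedelta(days=1),
    )["SpotPriceHistory"]

    prices = [float(p["SpotPrice"]) for p in history]
    if prices:
        avg_spot = sum(prices) / len(prices)
        print("%s average spot price (24h): $%.4f/hr" % (INSTANCE_TYPE, avg_spot))
        print("Placeholder on-demand rate:  $%.4f/hr" % ON_DEMAND_HOURLY)
        print("Spot is %.0f%% of on-demand" % (100 * avg_spot / ON_DEMAND_HOURLY))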

Q5: How do you optimize for cost and scale?

According to the Cloud Industry Forum, users of cloud services are, on average, achieving a 9% cost savings over on-premises deployments. The ways to get those savings, however, are highly nuanced. Planning is essential, as the best public cloud economic models require commitment from executive sponsors and the rest of the organization. Understanding and striving to achieve your current and desired future cost thresholds, especially as they pertain to KPIs or desired outcomes, is where most companies fall short. Employing tools that show cost thresholds (and trajectories) alongside performance metrics (IOPS, for example), as well as offer some understanding of risk (and health), especially when deploying on shared infrastructure, is achievable, but it should be planned for in advance of a move to the cloud.

Antonio dropped some great knowledge bombs on us. Here are my key takeaways:

  • Cheaper isn’t always better. Paying for improved performance may outweigh the novelty of a low sticker price.
  • Spot pricing can make it challenging to select payment models.
  • In order to optimize for cost and scale and achieve cost savings, CIOs must plan, plan, plan.

We’ll see you back here for the final installment of our hybrid IT series. We’ll be covering visibility into hybrid cloud environments!

Questions? Comments? Leave them below and we’ll be sure to get back to you!

Hybrid IT Security: Five Questions IT Leaders Need to Ask

February 10th, 2015

It’s all about hybrid IT security in today’s post. I sat down with our CTO, Antonio Piraino, and chatted about the top questions he receives concerning security and hybrid IT, and he gave me some great nuggets. If you’re new to this blog series, check out our introduction post here. Antonio is a former industry analyst, so I figured he’d be happy waxing poetic about hybrid IT, and I was right! So, let’s jump right in:

Q1: What security and assurances should you look for?

According to a recent CIF study, 98% of companies have never experienced a breach of security when using a cloud service. The security risks inherent in clouds do not necessarily make them any more vulnerable than many of today’s top-tier private data centers. Still, to provide customers with greater peace of mind, individual cloud providers offer different degrees of advanced security that minimize, if not mitigate, varying levels of risk. Most Managed Service Providers (MSPs), for example, include a base level of intrusion detection (IDS) and prevention (IPS). But, increasingly, Cloud Service Providers (CSPs) are offering layered security models, starting with single sign-on from authenticated devices and extending to multi-factor authentication (MFA), encrypted data storage, secure VPN connections, private subnets, and other options that all come at increased expense.

Q2: Where are the less obvious vulnerabilities in hybrid cloud environments?

Aside from the typical security considerations mentioned above, a number of softer, less apparent vulnerability points exist when operating in third-party clouds. For example, in AWS each Virtual Private Cloud (VPC) requires its own set of security policies. But with so many organizations deploying hundreds of VPCs, human error becomes increasingly likely, allowing the wrong instance to be deployed to the wrong VPC.
This scenario could engender a whole host of security challenges or compliance issues, and once all of your instances are deployed, determining whether they are all deployed in the correct VPC isn’t easy. Here is where — and why — having the right visibility becomes critical.
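
One way to make that check routine is to compare where instances actually landed against where you intended them to go. The boto3 sketch below leans on a hypothetical “Role” tag and an intended-VPC table, both invented for the example, to flag instances sitting in the wrong VPC.

    import boto3

    # Hypothetical policy: which VPC each application role is supposed to live in.
    INTENDED_VPC = {
        "web": "vpc-11111111",
        "database": "vpc-22222222",
    }

    ec2 = boto3.client("ec2")
    reservations = ec2.describe_instances()["Reservations"]

    for reservation in reservations:
        for instance in reservation["Instances"]:
            tags = {t["Key"]: t["Value"] for t in instance.get("Tags", [])}
            role = tags.get("Role")
            expected_vpc = INTENDED_VPC.get(role)
            actual_vpc = instance.get("VpcId")
            if expected_vpc and actual_vpc != expected_vpc:
                print("MISPLACED: %s (role=%s) is in %s, expected %s"
                      % (instance["InstanceId"], role, actual_vpc, expected_vpc))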

Q3: What happens when my cloud fails?

Over the past eight years, we’ve observed a number of disasters that occurred for a variety of reasons, resulting in, at times, significant downtime from top cloud providers. In each case, organizations have thrown up their collective IT arms in disgust at the cloud provider’s failing.
In reality, the onus was actually on the cloud customer, who should have gauged beforehand the relative importance of downtime attributable to mechanical, electrical, human, or even software failure. Having a DR/backup plan should be the norm, as should an SLA attached to your IT crown jewels.

Likewise, placing duplicate instances in the same availability zone is a recipe for disaster; historical and geographic redundancy data is increasingly available from cloud platforms, although it is not always collected by the CSP itself.
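
A quick way to catch that particular mistake is to look at how the instances behind each Auto Scaling group are actually spread across Availability Zones. The boto3 sketch below flags any group whose running instances have all landed in a single zone.

    from collections import Counter

    import boto3

    autoscaling = boto3.client("autoscaling")
    groups = autoscaling.describe_auto_scaling_groups()["AutoScalingGroups"]

    for group in groups:
        zones = Counter(i["AvailabilityZone"] for i in group["Instances"])
        if len(zones) == 1 and sum(zones.values()) > 1:
            print("WARNING: %s has all %d instances in %s"
                  % (group["AutoScalingGroupName"],
                     sum(zones.values()), list(zones)[0]))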

Q4: Will I be locked into any foolish/un-secure/underperforming decisions?

Exit strategies, contract lock-in, and data ownership are among the top concerns identified by a Cloud Security Alliance (CSA) and Information Systems Audit and Control Association (ISACA) survey. Unlike in the past, current vendor lock-in is not about interoperability between infrastructure components. Rather, it’s about being locked into a single service or a datacenter serviced by a single telecommunications carrier.

What’s more, the administration tools the cloud providers may give you to configure and maintain the application will be, for the most part, controlled by the cloud provider. You should ensure that your CSP understands these concerns and provides you with adequate liberty relative to migration tools, network density, and contract flexibility.

Q5: What are the risks for information security and data sovereignty?

These security and compliance concerns are becoming more pressing than ever. Many Cloud Service Providers mistakenly under-advertise their regulatory compliance. Asking for their accreditation is usually a good place to start, but be aware of the nuances in the accreditations.

For example, PCI DSS is a proprietary information security standard that specifies 12 requirements for compliance, each with a number of sub-requirements. Most CSPs will be focused on meeting the first control objectives around building and maintaining a secure network, which entails deploying a firewall and not using vendor-supplied defaults for system passwords. Many CSPs may become Level 1 PCI DSS service providers, but getting to additional levels requires the ability to handle a significant upscale in transactions and other requirements.

It’s important to understand your own industry since healthcare, finance, retail, and government have standards that may necessitate a multi-cloud solution. In fact, the Cloud Industry Forum’s newly released survey of a broad spectrum of organizations in the UK showed that, for companies with more than 200 employees, as many as 48% had 2-5 different cloud-based services. The percentage was even higher for small businesses.

What questions do you think IT leaders should be asking? Let us know in the comments below!

Also, keep an eye out for the next installment in our hybrid IT series. I’ll be talking to Antonio on the green stuff – money.

5 Questions IT Leaders Should Ask before Migrating to the Cloud

February 4th, 2015

It seems that everywhere I turn, I hear the term “hybrid IT.” It’s certainly made a mark and is quickly becoming the new normal for IT. But a lot of the time, when I hear the term, it comes in the form of a question or two. I’ve heard questions come from customers, prospects, and industry pundits. So it seems to be on just about everyone’s mind.

I’m not the only one who has heard these questions. Our CTO, Antonio Piraino, has not only heard them but been asked them directly. As a former industry analyst, he has a unique take on this move toward hybrid IT.

So I was thinking: with so many people asking the same questions, we might have the makings of a decent series of blog articles. And that brings us to this blog.

Over the next few weeks, I’ll be sharing bite-sized blogs chock-full of information from my interview with Antonio. They will cover a variety of topics under the hybrid IT umbrella, and will hopefully help you gain a deeper understanding of hybrid IT.

Ready? Let’s get to it. Today we’re tackling questions about cloud migration.

Q1: Can you estimate the cost of migration to the cloud?

The benefits of moving workloads off-premises are much more than just a shift from CapEx to OpEx. While inherent cost efficiencies exist, they may not be obvious at the outset. In an October 2014 survey by 451 Research’s ChangeWave service, 49% of enterprises surveyed said that migrating to the cloud had no impact on their budget for other IT products and services, while 20% said that it even decreased their budget, and 14% had no idea what the fiscal impact was. The same data viewed another way shows that the vast majority of those migrating to the cloud (91%) will increase (36%) or maintain their current spend (55%). So CIOs should look beyond absolute dollar costs: most enterprises speak of agility and flexibility as greater drivers of cloud migration, particularly with respect to the launching of greenfield apps. In essence, the opportunity cost of slow deployment, together with TCO, is the big-picture consideration when weighing a migration to the cloud.

Q2: I’m already set up in-house. Why should I move to the cloud?

While SaaS is the explosive grower in the world of cloud computing, IaaS is gaining ground, and for good reason. Arguably, the CIO’s hardest task is facilitating mission-critical applications, and these are often customized with specific, sometimes extensive, infrastructure needs — hence the enterprise growth in adopting IaaS. Gartner’s CIO Report from February 2014 cites similar trends: business intelligence/analytics was generally seen as the top application being outsourced to the cloud, followed by mobile applications, digital marketing content, CRM, and collaborative apps, all of them infrastructure-intensive. (Email and hosting services have long been outsourced to hosting providers for the same reason.) The mission-critical applications CIOs keep in-house tend to be legacy ERP, accounting and financial apps, and highly secure, customized legacy applications. Core applications tend to be renovated with modern software and are often consigned to private clouds by many enterprises.

Q3: How do you burst and move workloads out?

A new breed of managed hosting providers and VARs are available to assist enterprises with migration to external cloud datacenters. From this need, trusted advisors have evolved to help in a variety of specific areas: configuration migration (Racemi and Rivermeadow), data migration (Broad Peak Partners), orchestration (Scalr and Citrix), configuration automation (Chef and Puppet), performance management (ScienceLogic and New Relic), direct connections (Equinix and Telx), and even reference architectures from a variety of cloud, datacenter, software, MSP, and SI providers.

Q4: What problems about Day 2 operations should you anticipate?

Now that resources are split between on-premises and off-premises, viewing health and availability of delivered business services end-to-end can become challenging. CIOs must remember to focus on the degree of transparency and control required for a hybrid cloud environment.

Should an issue arise, you could look at the deployed instances in a third-party cloud using the local cloud tools, but you won’t see how the application running on top of those instances correlates with the parts running on-premises. Detecting the relationship is made harder by the fact that, for example, seeing how storage relates to your compute cycles in a different environment is difficult, if not impossible. The real issue, then, is that you’re unable to know when a real outage or performance problem occurs in this hybrid world, let alone perform root cause analysis.

Q5: How can you accelerate migration and unlock benefit and value early?

Speeding up a migration is usually a question of internal preparedness. The CIO’s greatest asset is an educated, informed IT staff. In December 2013, a ScienceLogic survey showed that half of all respondents participating in cloud initiatives within their organizations needed more education on the technology. The respondents also noted that their current skill sets did not adequately prepare them to do their jobs well in the coming year. Specifically, respondents believed they needed more education on cloud technologies.

To address this need for greater cloud skills among IT professionals, more and more cloud providers and software vendors are offering online courses for both the business and technical staff. Also, more intuitive tools are making control and visibility in the cloud easier than ever for cloud operations.

OK, if you’ve made it this far through the post, my hat’s off to you! Here are my key takeaways from this session with Antonio:

  • Look beyond simply dollars and cents to include “soft” things such as agility.
  • Some workloads fit best on-prem and some fit best off-prem.
  • Some IT skill gaps are acting as a barrier to public cloud and hybrid IT adoption, and they must be closed through education and training.

I’d be interested to hear your thoughts! Questions, comments? Let’s hear it!
(Hint: that is a request to add to the conversation in the comment section below!)

See Your Cloud Clearly with MapMyCloud.net

February 2nd, 2015

No two clouds are the same. They come in different shapes and sizes, and they are always changing: in size, in level of service, and in the growing number of geographic locations that cloud services can run in. For these reasons, keeping track of cloud-based resources can be a nightmare. That’s where MapMyCloud comes in.

Visualizing and understanding the hierarchy, topology, and dependencies across a dynamic, elastic set of services is, at best, a labor intensive process using the native tools your cloud provider offers. But we’ve got some good news – MapMyCloud.net was built to solve this problem.

MapMyCloud.net is a free tool that lets anyone using public cloud technologies automatically map and visualize all of their Amazon Web Services (AWS) cloud resources. It is a web-based mapping tool that provides a simple, elegant way to see all your public cloud assets that are live and running.

It helps you understand your clouds, brag about the size of your clouds, and most importantly, see your clouds with your own eyes.  And it’s free. (Yes, really free.)

Why give this tool away with no strings attached?  We have one mission: Show the world that there is an easier way!  You can, quite literally, see your clouds and make important decisions faster. This helps to solve problems, save money and improve business uptime using public clouds.

Below are a few examples of different clouds already mapped for free. Like I mentioned, no two clouds are the same. You can see the major differences in size, shape and service dynamics of each.

Using MapMyCloud can do a number of awesome things for you, including (but not limited to) the following:

  1. Ensuring data sovereignty laws are upheld by viewing your cloud resources based on geographic placement (a rough do-it-yourself sketch of this kind of inventory follows this list).
  2. Ensuring you aren’t wasting money or jeopardizing compliance by spotting rogue or orphaned instances and services in regions where they weren’t intended.
  3. Reducing risk by viewing Availability Zone placement to ensure workloads are balanced across different zones.
  4. Ensuring security compliance by confirming the right resources are in the right subnet and availability zone.
  5. Reducing root cause detection time by gaining visibility of EC2 instances and the other services that each EC2 instance belongs to (e.g., association with ELBs, Auto Scaling groups, EBS volumes, RDS services, etc.).
  6. Gaining visibility into S3 buckets and seeing which CloudTrail trail depends on each S3 bucket.
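
To illustrate the kind of inventory work MapMyCloud automates (and the sketch promised in item 1 above), here is a rough boto3 script that walks every Region and groups running EC2 instances by Availability Zone and VPC. It is a crude, text-only version of the map, not a substitute for the tool.

    from collections import defaultdict

    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")
    regions = [r["RegionName"] for r in ec2.describe_regions()["Regions"]]

    inventory = defaultdict(list)  # (region, AZ, VPC) -> [instance ids]

    for region in regions:
        regional = boto3.client("ec2", region_name=region)
        pages = regional.get_paginator("describe_instances").paginate(
            Filters=[{"Name": "instance-state-name", "Values": ["running"]}]
        )
        for page in pages:
            for reservation in page["Reservations"]:
                for instance in reservation["Instances"]:
                    key = (region,
                           instance["Placement"]["AvailabilityZone"],
                           instance.get("VpcId", "EC2-Classic"))
                    inventory[key].append(instance["InstanceId"])

    for (region, zone, vpc), ids in sorted(inventory.items()):
        print("%s / %s / %s: %d instance(s)" % (region, zone, vpc, len(ids)))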

Does your organization know what’s in the cloud right now? It is getting increasingly difficult to monitor the cloud and its daily evolution. Stop wasting valuable time with manual mapping tools and take a real-time look at your clouds right now.

Ready to get your head in the clouds and see what you’ve been missing? Get started on MapMyCloud.net today.

 


Converged Infrastructure: Bringing Maturity to the Adolescent Cloud

January 29th, 2015

Expectations Drive Redefinition of IT Infrastructure

Back in the nineties, John C. Dvorak had a radio show about tech called “Software/Hardtalk”. His tag line was (I paraphrase) “Remember… whatever I told you this week will be null and void by this time next week.” How true that is! It seems all the assumptions underlying how we deliver and consume IT are in flux.

People expect to get the best information whenever and however they want. Given those expectations, how do we build a new, more liquid, data infrastructure to deliver that experience? Changes are required and they are driving a reexamination and redefinition of IT infrastructure.

There has been an almost 50% increase in worker productivity in the United States workforce since 1990. Much of that is a product of enhanced IT. However, while IT has risen to the fore, it is now a victim of its own success. Well-executed IT is now seen as a competitive advantage for businesses. So, IT must innovate to keep pace with rising expectations and growing volumes of data, and to lower costs, all while being told to enrich the services it delivers.

Standardize and Scale

Just as custom car manufacturing gave way to Ford’s assembly line, IT infrastructure must be tooled up for mass production. The same rules apply: standardize and scale. Like an assembly line, make IT more:

  • Manageable
  • Scalable
  • Efficient
  • Cost-effective
  • Interchangeable
  • Consumable

To achieve these goals, practitioners have been squeezing uniqueness and complexity out of IT for some time. The evolutionary chain began with dedicated application infrastructure. Hardware was costly and difficult to manage, but it was also integral to that application.

Then came virtualization, which detached the application from the hardware, and all resources could be placed in undifferentiated pools. These pools became “clouds”, and suddenly, why did the app ever need to run in your data center and on your hardware at all?! Move everything to the cloud! Not so fast!

The Cloud is Young and Awkward

While cloud seems like a great way to relieve some of that pressure to adapt, the transition to “service delivery” and  “cloud” will be as awkward as any puberty you’ve ever seen. Cloud technology is quite young and it doesn’t know what it needs to be just yet. But, it’s learning. Even though some applications are ready for the cloud, many are not. And, conversely, the cloud is just not ready for some applications. The situation is as awkward as an eighth-grade dance.

Barring the emergence of sudden, miraculous maturity in cloud technology, how can an organization relieve some of the pressure to increase IT efficiency? By embracing the concept of converged infrastructure!

What are the Benefits?

Typical converged infrastructure consists of compute, networking, and storage hardware, integrated and certified to operate as a single, reliable, predictable unit of data infrastructure. Extending or upgrading capacity becomes as simple as adding additional units, like adding “bricks” or “modules” to a structure. Because these modules are engineered with a specific operational envelope in mind, they scale at a predictable rate. So, you know when you are approaching the limits of your resources and need to add another brick.

Because each unit of converged infrastructure incorporates a known set of hardware and software components, the cost of each unit is well understood. This predictability of the cost of converged infrastructure allows organizations to normalize and manage their IT budgeting, and add units of capacity when they are needed, and no sooner.

Improve Maturity and Increase ROI

Once you deploy converged infrastructure, the benefits of the economies of scale will kick in. The management tools used to deploy and manage your first module of converged infrastructure will be able to manage each subsequent unit you add. So, you get to keep the tools you choose and the learning curve flattens out for your IT staff. You can standardize your processes, standardize your training and on-boarding, achieve faster time-to-resolution for issues, and increase your ROI.

There’s much more to converged infrastructure and a whole set of benefits that are not included here. However, if you are feeling the pressure to improve IT efficiency while you wait for the cloud to mature, take a closer look at converged infrastructure. It will save you money and buy you time.

Already using a converged infrastructure system? ScienceLogic provides total monitoring for FlexPod and Vblock, powered by CloudMapper.

If you are interested in learning more about the current state of converged infrastructure, download our free webinar “The Promise and Reality of Converged Infrastructure” here: http://bit.ly/1uQY4qG
