Top 20 Tools Needed for Hybrid IT – #6-10

April 27th, 2015

Welcome back to our series on the top 20 hybrid IT tools you need to successfully manage a complex hybrid infrastructure. If you’ve been following along, this is now the third of four blog posts. Did you miss the first two posts? Don’t worry! You can read our first post here, and our second post here.

Today’s post is a nice mixture of monitoring, tracking, and automation. As mentioned in other posts, we welcome any feedback, comments and questions. Please feel free to drop a line in the comment field below.

  • Application Monitoring – Understanding the health of the underlying infrastructure is vital to ensuring strong service performance, but application performance matters just as much and should be included in a holistic monitoring framework. Application performance, combined with operating-system-level and server-based monitoring, gives a truly holistic view of a service.
  • Cloud Management and Monitoring – With the world increasingly moving to multi-cloud environments, a solution that can manage and monitor across different public and private clouds is now the baseline for operating as an IT organization.
  • Service Level Management – In the end, whether you are an enterprise or a service provider, you are delivering a service to your customers. Most organizations have multiple services with different service levels assigned. Keeping track of those service levels and ensuring you meet them can be a challenge, which is where a product with service level management abilities will help.
  • Ticketing – Keeping track of actions performed on equipment, as well as incoming help desk requests and the actions taken against them, is one of the most basic aspects of IT service support. Any ticketing solution you examine should either log incidents automatically based on events in the infrastructure or integrate with a monitoring solution that provides this capability (a minimal sketch of this pattern follows this list).
  • Runbook Automation – IT operations professionals face continual pressure to do more with less. An automation platform can help by reducing the need for human involvement, ultimately freeing up staff to take on more strategically important issues.
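
To make these last two items concrete, here is a minimal Python sketch of the pattern: an infrastructure event automatically opens a ticket, and a runbook step attempts remediation before a human gets involved. The function names and event fields are hypothetical, not any specific product's API.

```python
# Hypothetical sketch: an event opens a ticket, then a runbook step attempts
# automated remediation. create_ticket and the event fields are illustrative.
import subprocess

def create_ticket(summary: str, severity: str) -> int:
    """Stand-in for a ticketing-system integration; returns a ticket ID."""
    print(f"[ticket] {severity}: {summary}")
    return 101  # placeholder ID from the ticketing system

def handle_event(event: dict) -> None:
    ticket_id = create_ticket(event["summary"], event["severity"])
    # Runbook automation: for a known failure mode, try a scripted fix first.
    if event.get("type") == "service_down":
        result = subprocess.run(
            ["systemctl", "restart", event["service"]],
            capture_output=True, text=True,
        )
        note = "auto-remediated" if result.returncode == 0 else "escalate to on-call"
        print(f"[ticket {ticket_id}] {note}")

handle_event({"summary": "nginx down on web-01", "severity": "major",
              "type": "service_down", "service": "nginx"})
```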

Ok, there’s a nice little knowledge bomb to start off your week. Be on the lookout for the last post in this series, covering the final five. Also, if you want to see all 20 in one cohesive place, feel free to download this white paper which details all 20 tools you need for hybrid IT environments.


Top 20 Tools Needed for Hybrid IT – #11-15

April 14th, 2015

Welcome back to our series on the top 20 tools you need for successful hybrid IT monitoring! If you missed our first post covering tools #16-20, be sure to check it out here.

If you’ve made it this far in today’s post, you’ve either:

  • Already read part 1 of this series
  • Skimmed through the bullets
  • Skipped everything and are ready to get into the meat of this post

In any case, I won’t keep you waiting. Our post today covers tools #11-15, focusing primarily on monitoring the different technology layers in your infrastructure. Let’s get started!

  • Network Monitoring – Understanding the health of the most basic elements within your infrastructure, such as switches and routers, is vital to ensuring your services can deliver as needed. Without a functioning network, your interdependent systems have no way of communicating and your services simply stop operating.
  • Server Monitoring – While virtual technologies get most of the attention in IT environments today, the underlying hardware that provides the platform for the virtualized technologies is equally important.
  • Storage Monitoring – With compliance and data retention guidelines becoming ever stricter, it is crucial to understand whether you have enough storage capacity and whether that storage is available.
  • Operating System Monitoring – Few organizations are strictly tied to one server operating system. Understanding CPU and memory performance from the OS’s perspective is especially important when using public-cloud-based resources: the provider may report one CPU performance number while the OS experiences quite a different level (see the OS-level sketch after this list).
  • Hypervisor Monitoring – Sitting on top of your physical infrastructure are a number of virtualized servers. Understanding the health, availability, and location of these hypervisors is a complex task, with virtual resources spinning up and down in seconds.
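
As a concrete illustration of the OS-perspective point above, here is a minimal sketch using the Python psutil library. On a virtualized or cloud guest, the "steal" figure captures CPU time the hypervisor gave to other tenants, which is one reason the OS can experience different performance than the provider's console reports.

```python
# A minimal sketch of OS-level monitoring with psutil (pip install psutil).
import psutil

cpu_pct = psutil.cpu_percent(interval=1)       # utilization over 1 second
mem = psutil.virtual_memory()                  # the OS's view of memory
times = psutil.cpu_times_percent(interval=1)   # includes 'steal' on Linux guests

print(f"CPU: {cpu_pct:.1f}%  Memory used: {mem.percent:.1f}%")
# Steal time is CPU the hypervisor allocated elsewhere -- a key reason
# OS-reported performance can diverge from provider-reported numbers.
print(f"Steal: {getattr(times, 'steal', 0.0):.1f}%")
```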

That wraps up our second post on the top 20 tools you need to successfully manage a hybrid IT environment! We’ll be covering tools #6-10 in our next post, so stay tuned!

To see all of the tools you need for hybrid IT monitoring in one place, download our free white paper: The Top 20 Tools Needed for Hybrid IT

As always, your comments and thoughts are welcome and encouraged! Is there something we’re missing? Or, just as important, if you think I’ve missed the mark with my first 10 (the 5 in the previous post and the 5 in this one) – please let us know!

See you in a few days back here on the ScienceLogic blog.


Top Tools Needed for Hybrid IT

April 6th, 2015

Hybrid IT is the new standard in many enterprises across the globe, but for many it is also uncharted territory. A common question that we field is, “what tools do we need to ensure the performance of a hybrid IT environment?” However, this seemingly simple question does not have an equally simple answer.

For many years there has been a somewhat antagonistic relationship between IT and the rest of the enterprise: businesses want more services and better performance at a reduced cost. This pressure has only been accelerated by new consumer-focused, cloud-based applications, which promise nearly 100% uptime with peak performance.

Historically, when the IT industry has been challenged to do more with fewer resources, it has responded with innovation. First, there was virtualization, which promised more efficient use of servers and better control over cooling and power costs. This innovation helped for a short window, but cheap compute and storage created an influx of applications built to use more compute and more storage, simply because it was available.

Very quickly, IT was again asked to do more with less, and again, it responded through innovation. This time it was with public cloud-based services such as Amazon Web Services (AWS) and Microsoft Azure.

At ScienceLogic, we’ve worked hard to understand these complex hybrid IT environments, including what makes them work well, and where they fail. During this series of posts, we will cover the top 20 tools needed for hybrid IT.

With the arrival of public cloud services, IT organizations quickly took advantage of reduced costs for compute and storage. At the same time, they took advantage of the broad range of service levels cloud providers offered, ensuring the right service level for the right workload.

This development brought with it a potent cocktail of greater user expectations, reduced budgets, and a fully hybrid infrastructure. This new infrastructure brings us back to our initial question: “What tools do we need to ensure the performance of a hybrid IT environment?”

Take a look below at the first chunk of tools to help you ensure performance of your hybrid IT environment:

  1. Data Center Infrastructure Management – DCIM solutions monitor the environmentals within a data center, as well as some servers and network devices. However, their scope tends to focus on environmentals.
  2. Power Distribution Unit (PDU) Monitoring – At the most basic level, if you don’t have power coming to your systems, nothing else can operate. Understanding the status of back-up batteries, and even environmentals such as the temperature of the PDUs in your internal data center, can help eliminate or mitigate possible power issues.
  3. Asset Management – With servers and storage being automatically created and brought down in seconds across both virtual and cloud-based infrastructures, tracking the use of assets has never been more complicated. An IT Asset Management system is designed to help an organization track all of its IT assets, the warranties, the vendors, configurations, etc. and is a must in this hybrid IT world.
  4. Discovery – An asset management system is only as good as the data within it. A discovery solution is designed to automatically discover any onsite and offsite resources that appear, and automatically load them into your asset management system. This becomes even more necessary in a world where any employee with a credit card can purchase compute and storage capacity in a matter of minutes (a minimal discovery sketch follows this list).
  5. Device and Dependency Mapping – With complexity only increasing, understanding how all of the different elements in an IT environment relate is becoming nearly impossible to do by hand. A device and dependency mapping solution takes care of that concern by automatically mapping the dependencies across different technologies and elements.
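
As a small taste of what cloud-side discovery looks like in practice, here is a hedged Python sketch using AWS's boto3 SDK to enumerate EC2 instances and hand them to an asset system. load_into_asset_db is a hypothetical hook; a real discovery solution would cover far more resource types and providers.

```python
# Sketch: discover EC2 instances and feed them to asset management.
# Assumes AWS credentials are already configured for boto3.
import boto3

def load_into_asset_db(asset: dict) -> None:
    print(f"[asset-db] upsert: {asset}")  # placeholder for a real integration

ec2 = boto3.client("ec2", region_name="us-east-1")
for reservation in ec2.describe_instances()["Reservations"]:
    for inst in reservation["Instances"]:
        load_into_asset_db({
            "id": inst["InstanceId"],
            "type": inst["InstanceType"],
            "state": inst["State"]["Name"],
            "launched": inst["LaunchTime"].isoformat(),
        })
```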

There you have it — the first five tools needed to ensure peak performance of your hybrid IT environment. What did you think? Let us know in the comments below!

We’ll be back soon with the next five tools to help you ensure 100% uptime of your hybrid IT environment. To see the complete list of tools, see our white paper on the topic here: http://m.sciencelogic.com/top-hybrid-it-tools


Tagging 2.0 – Follow Up to Symposium 2014

March 31st, 2015

During the 2014 ScienceLogic Customer Symposium, we hosted a session to introduce our plans for “tagging” features in our software releases. As you may recall from the session, tagging commonly refers to two types of tags used in the industry – comma-separated values (CSV) and key-value pairs.

Tags are comma-separated values that can be assigned to any interface in order to filter on interfaces of interest. The second tagging method, Custom Attributes, can be thought of as key-value pairs. Our custom attributes come in two varieties: Base and Extended.

Tags were introduced for interfaces prior to our 2014 Customer Symposium, with plans of introducing custom attributes in early 2015. I’m happy to report to our customers that our new software release, 7.5.4, introduces initial support for custom attributes.

As we examined use cases for custom attributes, we decided there were two distinct use cases that each warranted a unique way of handling key value pairs:

Base attributes apply to every entity of a given type: assign a base attribute to one device, and every device has it. Currently, we support the following entity types: device, asset, interface, vendor, and theme.

A base attribute is very useful when integrating with third-party systems. If you wanted to tie ScienceLogic into an existing CRM tool, you’d want the resource IDs from that CRM tool stored in EM7 so that the two systems stay closely correlated. One might create a base attribute of “CRM_device_id” that could be used to reference the third-party CRM from within EM7 without having to inject any additional data on the CRM side.

Extended attributes only belong to specific entities. Let me provide an example of a situation where you will find extended attributes handy:

Imagine you want every physical router to have an attribute identifying its plug type, leveraging a custom attribute of “Connector Type” with most devices having a value of “C14.” Because only a subset of devices has a connector type, you would use an extended attribute.

Another example of an extended attribute would be adding a “WAN Type” attribute only to WAN interfaces, holding a verbose common speed (T1, E1, T3, 10Mb, 100Mb, etc.). You would not want to see the “WAN Type” attribute listed on every interface, since it is only relevant to WAN links.

With the 7.5.4 release, we have introduced the initial API commands to create and edit custom attributes. In addition, the first GUI element of custom attributes has been introduced as an option in the active device selector to dynamically manage group membership leveraging custom attributes.

As with building a house, one must build in layers: Plans, foundation, framing, plumbing, electrical, roof, drywall, etc.  As we embark on 2015, I’d say the 7.5.4 release has many foundation elements and some framing. We’re on our way to constructing the nicest house on the block.

For those who are comfortable with the API, you can see and start testing the functionality under /api/custom_attribute/.  For those not yet familiar with the API – additional features, functionality, GUI and more are being worked on as I type this post.
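
For the curious, here is a rough sketch of what creating a custom attribute through that endpoint might look like. The /api/custom_attribute/ resource comes straight from this post, but the payload fields and authentication shown are assumptions for illustration only; consult the EM7 API documentation for the actual request format.

```python
# Hypothetical sketch of creating the "WAN Type" extended attribute from the
# example above via the EM7 REST API. Payload fields and auth are assumptions.
import requests

EM7_URL = "https://em7.example.com"      # your appliance
AUTH = ("api_user", "api_password")      # assumed basic-auth credentials

payload = {
    "name": "WAN Type",
    "entity_type": "interface",
    "attribute_type": "extended",        # vs. "base"
}
resp = requests.post(f"{EM7_URL}/api/custom_attribute/",
                     json=payload, auth=AUTH, timeout=10)
resp.raise_for_status()
print("Created custom attribute:", resp.json())
```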

We plan on adding incremental functionality in each release cycle throughout 2015 – and we’re off to a great start!


Regional Recap: A View from the Top

March 26th, 2015

Over the next few months, ScienceLogic will be visiting major cities across the United States to discuss hybrid IT and the changing landscape of monitoring. We were excited to kick off this series this past Tuesday, March 24th, in Arlington, VA!

Our Director of Product Marketing, Peter Luff, presented during our first event, and he recapped his experience for us below.  Take it away, Peter!


Our joint presentation with Amazon Web Services on hybrid IT was a great success – and the view was pretty good too!  I have given restaurant and rooftop presentations before, all related to different generations of technology in previous lives: client-server computing, frame relay networking, even Ethernet and Token Ring in the old days (maybe I am getting old!). But with each successive technology cycle it becomes clear that the established vendor world order usually shifts when there is disruption. And there is nothing more disruptive today than public cloud!

With public cloud now going mainstream, new monitoring technologies are needed to cope with it and make it manageable, giving rise to a whole new crew of vendors like ScienceLogic.

Our unique level of hybrid IT visibility is opening the eyes of users to new ways to view public cloud, together with on-premise assets, in a single view. So much so that we are now being invited to meet large enterprises who not only need to get to the cloud faster but also must manage the resulting hybrid infrastructure. The traditional monitoring players are clearly a day late and a dollar short on hybrid visibility – and the new order is developing, with ScienceLogic on the top of the pile – and the view from the top is pretty good!


Interested in attending one of our regional events? Learn more about our upcoming Lunch Seminars and register to secure your spot on our website: www.sciencelogic.com/company/events

We hope to see you there!


Cisco Live Melbourne Recap: Business & the Digital Age

March 23rd, 2015

Gee Rittenhouse delivered the first technology keynote at Cisco Live Melbourne 2015, titled “Transform Your Business in the Digital Age.” My take on this, as with everything these days in the tech world, is that it’s all about the cloud!  I remember my early days working at Cisco, when a picture of a cloud was used to represent the network. At that time it was very clear what “the cloud” meant.  Today, the cloud is a little more all-encompassing, and as Gee stated during his presentation: “Underlying everything is the cloud.”

In summary, Gee’s view is that the most important technology trends are cloud and mobility, and that these transformations will impact your business once they become simple. Cisco is working to make this happen through its Cloud Platform and InterCloud.

Gee began his keynote by talking about technology and trends with respect to the cloud, the internet of things, and device mobility. He indicated that the hard part of these technology transformations is determining when they will affect your business. Cisco believes that happens when technology becomes simple, and it is working toward that transformation by making its technologies simple to use and consume.

Gee’s keynote wouldn’t be complete without providing some interesting facts about the internet. My favorites were the following:

  • Two-thirds of mobile traffic is now video
  • 50 billion internet-connected objects
  • 3 million YouTube videos viewed every 60 seconds
  • 77 billion apps have been downloaded

Gee mentioned that, a few years ago, a CIO’s key concerns centered on technology: cloud, security, and mobility. Today, however, with IT as a service, the top three concerns are revenue growth, innovation, and cost.

Cisco is addressing fast innovation with the concept of “Fast IT.” Major components of this include mobility and internet of things. How these things are connected is where SDN and NFV come into play.

How does the cloud transform your business? Cisco treats the cloud as a holistic system, guided by three principles:

  1. Make it simple – Converge the infrastructure. (Enterprises like to buy it in prepackaged chunks: servers, software, catalogs, etc.) Keep it simple so enterprises can focus on their business and not on clouds.
  2. Make it easy to consume – Cisco will sell it as HW, as SW, as a service, as a managed service, etc.
  3. Create it as a platform – Almost a year ago, Cisco announced InterCloud. Now Cisco is building a cloud platform that will be part of InterCloud. Cisco will put its apps on the cloud in a catalog format, making them easy to consume.

Cisco Cloud Platform

Cisco Cloud Platform comprises three key components:

  • Infrastructure
  • Platform
  • Apps exposed via a catalog

The platform architecture differs between the Enterprise and the Service Provider. The enterprise, for example, utilizes a converged infrastructure, and Cisco has bundles around this converged infrastructure, all based on Cisco UCS. Cisco ONE Software Suites define three categories of bundles: Data Center, WAN, and Access. All bundles include Cisco Application Policy Infrastructure Controller (APIC), a key component of Cisco’s Application Centric Infrastructure (ACI) SDN offering. Service providers generally don’t buy converged infrastructure, since they buy the components and integrate them themselves.

Cisco provides solutions on top of that infrastructure like Mobile IQ, Cloud DVR, Virtual Managed Services, etc. The bundle is still based on UCS. Additionally, the Service Provider stack is more complex than the Enterprise stack, with the addition of an orchestration layer that helps with the chaining of the applications, the creation of new services, etc.

InterCloud

Cisco has 40,000 enterprise companies running UCS, in addition to its service provider customers. InterCloud brings these two markets together.  The Cisco cloud, partner clouds, public clouds, and enterprise clouds are all tied together in InterCloud, which forms a marketplace.

It’s not just an app sitting on a datacenter, but an app that can be placed anywhere based on geography or any other criteria. This provides not just Cisco apps, but apps from partners, ISV apps, etc. The platform used in InterCloud is not exposed to the customer in any way. APIC is what enables the movement of these apps across the network. APIC provides a consistent policy, regardless of where that app is deployed.

To make this real, Cisco needed an open source platform. In this case, OpenStack was chosen, as Cisco is the number one contributor to the Neutron component (Network Component) of OpenStack. Cisco thought it was critical to base this on an open source platform to encourage and enable adoption of this platform.

InterCloud consists of four types of clouds:

  1. Partner Clouds – Alliance partners like Telstra have the exact same stack as Cisco.
  2. Enterprise Private Clouds – Cisco has 40K UCS customers today.
  3. Public Clouds – These are not going away and Cisco already knows how to move workloads to the public cloud.
  4. Cisco Cloud Services and Applications – Cisco is putting its own applications on top (virtualized routers, load balancers, video, UC, etc.).

Tying these together provides a very rich experience that enables you to use many applications with your choice of where you want these applications to run.

Gee demoed Cisco Marketplace by showing how easy it was for an enterprise to go to the Cisco Marketplace and move an application (in this case Project Squared) from the Cisco Marketplace to the enterprise catalog, making that new application consumable by any member of the enterprise company. Cisco’s key goal here was to make it simple, since as mentioned at the opening of this session, simplicity is what drives adoption of new transformations.

In conclusion, Gee stated that Cisco’s strategy is about (1) converged infrastructure, (2) the ability to easily consume these apps and resources in any way, and (3) providing a platform that is open to everyone – Cisco can build on it, and Cisco partners and customers can build on it. This will enable the business transformation into the digital world.


Exposing the Cloud at Cloud Expo Europe 2015

March 16th, 2015

Bigger, better, and clamoring for more cloud: that is the best way I can describe this year’s Cloud Expo Europe, held in London. This time the organizers thought they’d give the data center facilities folks a taste of the cloud by having the two shows exist side by side in a large room. To some degree this tactic worked well. Large enterprises, CIOs, and datacenter operators are looking to be more involved in this cloud thing, and Cloud Expo Europe provided ample opportunities for learning and getting started.

A few common threads emerged throughout dozens of presentations over the two days. The most noteworthy trends were:

  • Using success stories to illustrate the need for cloud to become the accepted norm in IT strategic decision-making.
  • How to influence decision makers who are hung up on making the move to a cloud environment by becoming a trusted advisor throughout the process.

On the receiving end of the hoopla were the enterprises, CIOs, and datacenter operators, making their queries openly known:

  • What do I need to make my cloud migration happen? A new orchestration tool of some kind?
  • How do I calculate my ROI?
  • How can I figure out what expected operational performance parameters should be?

Moreover, let’s talk about my applications and business-related workloads, and how they map onto interdependent infrastructure. I prefer AWS! I prefer Azure! And for MSPs and SIs making the shift to managed services, there’s no doubt that the chasm has been crossed. Making AWS or Azure a core part of the solution is no longer taboo or a “maybe” – it’s a highly acceptable model.

The ScienceLogic booth was alive with discussion throughout Cloud Expo Europe! Not just around CloudMapper and interdependency mapping for hybrid environments, but also around our role in the cloud migration lifecycle. This lifecycle includes cloud migration reporting, fed by IT workload discovery and mapping. These are becoming critical components of the front-end for the migration process.

By this time next year, I anticipate much more discussion around the automation of such a migration process. It’s clear that now is the time to stake claim in the new enterprise transformation process and closely associated cloud migration ecosystem.



Migrating to the Public Cloud: Who & Where

March 9th, 2015

Welcome back to our third and final installment of Migrating to the Public Cloud.  If you’re just joining us, feel free to catch up on what you’ve missed. So far I’ve covered why and what to migrate to the cloud, as well as how and when to migrate. To wrap up this series I will be focusing on which cloud provider to choose, and where you can go for more information.

Let’s get started!

Who to Choose as your Cloud Provider

IaaS, PaaS, SaaS, MaaS, TaaS, and FaaS. Sure, I made a few of those up, but what’s the difference between them? And what do these all have to do with AWS? Further, how do you decide which cloud provider fits your needs?

The cloud landscape is changing rapidly, as are the types of clouds being provided. For most enterprises you’ll be focusing on IaaS (Infrastructure as a Service) like AWS, SaaS (Software as a Service) like Salesforce.com, and PaaS (Platform as a Service) like EngineYard. Depending on your application architecture and your users you’ll end up using one or more of these services.

  • Location, Location, Location.
    Consider where your users reside and also where you plan to do business. Ideally your cloud provider will offer a semi-local option so neither latency nor data access/storage (data sovereignty laws) are impacted.
  • Consider the Services Offered
    One of the more amazing things about public cloud providers is the amount of services they offer. Amazon Web Services alone offers more than 30 different services across their multiple data centers and regions. Microsoft Azure, similarly, offers more than 30 different services to ensure you get the most out of your cloud investment.
  • Amazon Web Services
    The reality is Amazon Web Services is the 800-pound gorilla in the market, which is why I am giving them a bullet of their own. They’ve been offering public cloud services for quite some time now, and have the process down to a science. They even offer a fair amount of free access so you can get started without paying a penny. They’ve covered just about every location you need and, as mentioned, offer an amazing number of services. While I would caution it is important to look at a few different providers in your selection process, I would also suggest you strongly consider including AWS in that list.

Where Can You Get More Information?

Where do you go to get more information? Below is a curated list that I found particularly helpful in my understanding of the migration journey:

There you have it, folks! You are now fully prepared to begin your cloud migration journey. My hope is that this series can serve as a reference guide as you get started on this process.

For those who are mourning the end of this series, I have great news!  Over the next six weeks ScienceLogic will be producing in-depth documents on cloud migration, and we’ll be hosting a webinar too.

Click here to register for our “Taking the Mystery out of Public Cloud Migration” webinar on March 17!

I encourage you to follow ScienceLogic’s developments in this area as we will be delivering a number of new free tools for you and your team to help you in your journey to the public cloud.

Questions? Comments? Leave them below. Look forward to seeing you up in the clouds!


Rubbing Shoulders with Giants of the Cloud Ecosystem

March 4th, 2015

Imagine you’re an enterprise CIO trying to figure this cloud thing out.  What are the things that you’d like to better understand and need assurances on?  Likely, it would start with the following:

  • What is the potential cost of your move?
  • What should be moved?
  • Why is connecting into a third party data center better than ad hoc connections from your IT shop via your local ISP?

Finally, how do you gain some measure of security and control over all those things operating in the cloud? Is there a way to validate that they actually do belong there? How will you monitor and visualize the resulting hybrid IT infrastructure for troubleshooting and planning?

Now, imagine you’re the world’s biggest datacenter operator: Equinix. You’re housing thousands of the world’s largest enterprise customers, many of which are struggling to find a coherent way to answer all of these, and many other, questions about the cloud.

That’s the challenge that our partner, Equinix, accepted when it chose to augment its premier datacenter operations, and take on the role of cloud facilitator for those large enterprises.  So, what did Equinix do differently? Quite a few things.

Let’s start with the Equinix Performance Hub: a network extension node for enterprises, with connectivity to the world’s largest telecom carriers. That is in addition to a Direct Access Program for service providers to offer cloud services; a series of Solution Validation Centers for SAs to propose ideal reference architectures; the exposure of a public-facing API for programmatic access to the multi-cloud; and, most recently, the acquisition of Nimbo (professional services for hybrid IT architectures).

Most important was the creation of the Equinix Cloud Exchange, which provides a seamless connection point and cloud ecosystem for enterprises to access the multi-cloud, multi-network giants.

If you were going to pick the perfect partners to invite to that cloud ecosystem, who would you invite?  You’d probably start with the world’s most prominent cloud providers: Amazon? Check. Microsoft Azure? Check. Salesforce.com? Check. Softlayer? Check. Google? Check. Cisco Intercloud? Check.  But wait! Is just giving the enterprise the option of all of these clouds through a single physical connection (the Cloud Exchange) enough to help the migration cycle?

You’d probably want to include a series of partners that are highly regarded and trusted by those enterprises.  Perhaps some of the ones being leveraged by those cloud giants to go to market? Someone similar to a Datapipe, a T-Systems, or numerous other MSP partners? Check, check, check.

That’s great! But the enterprise still needs a way to create and execute a plan to get to the cloud.  They need a series of tools to discover their IT assets and perhaps the state, health and performance of those IT assets, right? These tools would need to do a variety of things, such as:

  • Help uncover what possible workloads belong in the cloud via migration reports.
  • Easily ingest live APM data, and possibly business policies, from their existing enterprise service management tool.
  • Integrate with a migration tool to make the process easy.
  • Support the ongoing operational task of monitoring and managing the resulting distributed hybrid infrastructure – in the Cloud and on-premise.

What if that tool could follow the workload into the cloud? Or perhaps more than one of those clouds, as well as the interdependent assets that remained on-premises, and could validate the architecture, real utilization and performance of that Hybrid IT environment?  Well, Equinix found that tool and it is ScienceLogic.

Where would you place such a tool, to be readily available for your go to market partners? For MSPs, SIs and Solutions Architects to leverage as and when needed?  Most likely in a location that is simple to access, like the Equinix Cloud Exchange. That’s exactly what Equinix did by selecting and deploying ScienceLogic as the first and favored monitoring and management tool on their Cloud Exchange.

We’re thrilled to have our first Cloud Exchange deployment be in Ashburn, Virginia, literally an arm’s length from the giants of the cloud world, with the second one recently deployed at the AWS facility in Frankfurt, Germany at the end of February.  But it doesn’t end there! 

Last week we announced our new collaboration with Equinix, aiming to simplify and ease enterprise migration to the cloud. By gaining access to multiple cloud providers via Equinix Cloud Exchange and leveraging ScienceLogic’s integrated monitoring solution, enterprises can achieve improved performance, security, management, and cost control of their entire IT infrastructure.

Read our full press release on our partnership with Equinix here.

Questions? Comments? Leave them below and we’ll be sure to get back to you!


Migrating to the Public Cloud: How & When

March 2nd, 2015

Welcome back to our Public Cloud Migration series! I’ve been on the road quite a bit and busy with a number of customers, but I’ve put a few CPU cycles together and worked on this blog. For those unfamiliar with this series, I’m highlighting what you should consider when you look to migrate to the public cloud, focusing on the 5W’s and How.

In case you missed it, you can see part one of this series covering the why and what of cloud migration here. Today’s post will focus on how and when to migrate to the public cloud.

My goal is to introduce key concepts for a successful migration to AWS (or any other public cloud environment, for that matter). I will dive into much greater detail in our upcoming “Taking the Mystery Out of Public Cloud Adoption” webinar, and provide even more detailed information in a white paper we are producing after the webinar. So, treat these as bite-sized morsels to get you ready for the main course!

How To Migrate?

Ok, now you know what to migrate – but how do you actually go about doing it? When does a workload move into production? How do you validate that the move is going OK?

  • Dev-Ops
    If you’re building your applications from the ground up and operating in a Dev-Ops fashion, you need to design your application for the cloud from the start. Your app should be smart enough to scale compute resources up and down based on demand, which is where public-cloud-powered autoscaling fits in (a minimal AWS sketch follows this list).
  • Migration Process
    A step-by-step process to build, test, and move into production needs to be followed rigorously for a successful rollout. Move applications based on priority, then divide them into chunks. For example, you might first move a front-end server to the cloud, test it, and then move it to production in a hybrid cloud while the back end remains on premises.
  • Visibility
    As you move, test, and move again, you need to ensure you have proper visibility into the application: from where it began in your datacenter, to where it ultimately resides in the cloud, and during the transition stage as it moves piecemeal to the cloud. Ideally you should use the same methods and tools throughout to provide an accurate comparison.
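
To ground the autoscaling point from the Dev-Ops bullet, here is a minimal AWS sketch using boto3: a simple scaling policy plus a CloudWatch alarm that adds capacity when CPU runs hot. The group name "web-asg" is an assumption (it must already exist), and a production setup would also define a matching scale-in policy.

```python
# Sketch: scale out an existing Auto Scaling group ("web-asg", assumed) when
# average CPU exceeds 70% for ten minutes. Scale-in is omitted for brevity.
import boto3

autoscaling = boto3.client("autoscaling", region_name="us-east-1")
cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

policy = autoscaling.put_scaling_policy(
    AutoScalingGroupName="web-asg",
    PolicyName="scale-out-on-cpu",
    AdjustmentType="ChangeInCapacity",
    ScalingAdjustment=1,       # add one instance per trigger
    Cooldown=300,
)

cloudwatch.put_metric_alarm(
    AlarmName="web-asg-high-cpu",
    Namespace="AWS/EC2",
    MetricName="CPUUtilization",
    Dimensions=[{"Name": "AutoScalingGroupName", "Value": "web-asg"}],
    Statistic="Average",
    Period=300,
    EvaluationPeriods=2,
    Threshold=70.0,
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=[policy["PolicyARN"]],
)
```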

When To Migrate?

You have an app, a plan, and now it’s time to migrate! Or is it? When’s the right time to move your application into the cloud?

  • Return on Investment
    Over the past decade a number of companies have made significant capital expenditures in data centers, servers, networking, storage, and virtualization technologies. These investments may still have a better ROI over the length of a project than moving everything to the cloud. It often makes sense to move your newest and oldest applications first.
  • Learning Cloud
    Before making the jump to cloud, it’s vital to make sure your team understands the limitations, strengths, and weaknesses of various providers. Building cloud knowledge, however, is easier than it has historically been in IT, thanks to well-established communities and free resources on the web. Additionally, the largest providers have started offering free training material, as well as certifications, to make sure your team is ready for the cloud.
  • Third Party Services
    Most of the large service providers as well as a number of third party consulting companies can help you migrate individual or groups of applications to the cloud. Other companies, such as ScienceLogic, provide monitoring and management services around cloud products and applications.

Ok, how’s that for a quick Monday morning touch on migrating to the public cloud? If you take only a few things from this post, I’m hoping you gathered:

  • The upcoming webinar yours truly is hosting is a must-attend (wink, wink, nudge, nudge).
  • When you are looking at “how” to migrate, it’s all about process, process, process, visibility, visibility, and visibility.
  • For “when” focus on that ROI and make sure your team is ready to manage apps in the public cloud (be sure they are fully trained, etc.).

My next post will focus on two final questions: who helps you migrate, and where do you migrate your applications to?

Look forward to seeing you next Monday!

