I am never quick to accept an invitation to a webinar. We only have so many hours in a day, and it would take something very relevant for me to close out of my e-mail completely and ignore all of my incoming calls for an entire hour.
I had the pleasure of observing a test run of ScienceLogic’s upcoming “Ten Signs Your IT Ops are Really Ugly” webinar, hosted by our CTO Antonio Piraino, and 451 Research’s Michael Coté. It is definitely worth tuning into, and not just because the graphics on the slides are fun and Coté is a former coder turned comedian.
As indicated in the title, Antonio and Coté will address the common red flags in networks everywhere – signs that you may need a new approach to making your network more efficient and, well, prettier. Both Antonio and Coté have sat through their fair share of webinars that went on just a little bit too long. And Coté himself said he never wants a webinar to turn into a “technology is awesome” lecture.
Chances are, when you tune into the webinar, you will discover that your network is indeed ugly. Buying a management tool isn’t easy – these are big, comprehensive tools, and at times very pricey. And then, actually deploying the product is rarely seamless. Many networks have a whole slew of management tools, all made by different manufacturers, so of course they don’t all fit together perfectly when installed initially – and maybe not ever. The webinar will swiftly move through all the signs that you may need to reinvestigate your IT operations.
Coté and Antonio won’t tell you what to do to solve your problems – they know we don’t have all day. I like the way Coté put it – the bullet points that the hosts go through are like “bookmarks for the people to go and investigate more.”
So, bottom line, if you are not 100% certain that your network is clean, perfect, and pretty, then you should definitely be logging into this webinar on Tuesday at 11 EST. If any of your best and most talented engineers are wasting their precious time and brain power fixing everyday, routine network problems, you will definitely find this webinar relevant.
And, did I mention the cool graphics? Here’s a sneak peek of one:
Tagged with: 451 Research, IT operations
As the Holiday Season is wrapping up and we get ready to head into 2014, I got to thinking about what is in store for us ahead. By us, I don’t just mean ScienceLogic; I mean all of us – the entire cloud market. After all, that’s where the future is headed, isn’t it? To a place where everything is stored and run inside the cloud? You may not have totally jumped on that bandwagon just yet, but you probably still agree that the cloud will soon be a critical aspect of the technology world.
With this in mind, I turned my attention to ScienceLogic’s 2014 Cloud Computing Predictions Survey results that were released earlier this month. As a former analyst (programmed to examine, evaluate, and expose facts), I have decided to share my findings with you.
A good starting point for predicting the unpredictable (as the future of any market always is) involves looking at what the participants of the market plan to do. 50% of the industry said that they will be increasing their overall IT spending in 2014, while only 15% reported a planned decrease in IT spending; in other words, more than three times as many people will spend more on IT in 2014 than will spend less. I would suggest that this represents the biggest spending increase across a company’s entire budget: a definite positive signal for our market.
Next I’d like to probe a little deeper, into the less comfortable and less examined zone of the individuals – the actual people out there who are making these decisions. When asked if they will personally make more or less money in 2014, almost 50% of people said that they expect to earn more. But just think about that question for a second. Do you actually know how much you’ll have earned 365 days from now? Do I know? I certainly don’t. Yet half of respondents were confident that they’ll be hitting higher numbers next year. There’s a phenomenon that has played out over the years: if you ask the industry a financially motivated question and the majority answer favorably, that event tends to happen. What’s more, if people believe they are going to earn more, they typically have a sense of better job security. More job security means that they can afford to take those higher risks that tend to have a higher reward. And the prediction becomes a self-fulfilling prophecy.
This raises the question of compensation – are these people fairly rewarded for the work that they do? 42% of people believe that they are currently underpaid in their position – and nobody wants to stay in an underpaid job. Yet at the same time, 50% of people believe that they’ll earn more next year (i.e. will no longer be underpaid). Apparently people think that things are generally getting better, leading to the belief that next year will be better for them.
In contrast to that optimistic view, there’s still a quarter of the workforce that is more worried about their job security going into 2014 than they were a year ago. It appears that the market is split: people are either very confident or not confident at all. The question is, why? If half of the industry thinks that things are getting better money-wise, then why are these 25% worrying about their job security so much?
I’d hazard a guess that this fear stems from a questioning of abilities – less than half of people feel very well educated in the technologies required for them to do their job well, according to the survey results. This means that over 50% of people making cloud computing decisions do not feel adequately prepared to do their job.
Does that seem a little (or perhaps a lot) worrisome to you? Perhaps not, if you are in the minority of technologically-savvy cloud decision makers. But for the rest of you, I’d be concerned. It’s the same self-fulfilling prophecy as the one I mentioned above – if you don’t feel prepared to succeed in your role, you’re probably not going to succeed.
So in 2014, everyone is increasing their budget by 20% and increasing spending on IT – yet the person making those spending decisions does not even feel well educated on the subject. That sounds like a recipe for disaster. Playing it safe would mean shying away from these innovative technologies in favor of the tried-and-true (or is it outdated and obsolete?); but in doing so, you run the risk of becoming outdated yourself. Is there an opportunity to save yourself from the embarrassment of losing face (or worse) from the fallout of a poorly educated decision?
It’s a little unrealistic to expect a busy corporate executive to find the time for formal education in becoming a cloud expert, but there are shortcuts out there that remove the educational middleman. They come in the form of tools that provide the necessary confidence for proper decision-making. These tools collect and analyze data from all of the different aspects of infrastructure and spit out the results in a way that’s understandable even without an engineering or computer science degree. These easy-to-use management platforms (like ScienceLogic) remove the confusion and help you wrap your head around a system you may not understand much about. In other words, they take you from feeling inadequate to feeling empowered and ready for more innovative work.
Now when the next big technology arrives, you’re no longer jumping off of a cliff into the unknown – rather, you’re just jumping off a diving board into the pool. You may not be sure whether the water will be icy cold or a balmy 80 degrees, but you can rest easy knowing that you aren’t going to hit your head on a shallow, rocky bottom. All it takes is a tool that does the educated part and spits out the information you need for the comfortable and confident part, which in turn makes operating in the cloud – or any new technology area – a more confidence-inspiring experience.
So here is your final takeaway – you don’t have to be bold enough to jump off a cliff, but you should be staking out the area. The time will come when you need to be in that water, and it will be a much easier transition if you’ve already found a much lower diving board.
Tagged with: 2014 Predictions, Antonio Piraino, budget increases, cloud computing, cloud predictions, IT budget, IT spending, personal salary, ScienceLogic Survey
Happy Holidays Everyone!
As this year comes to a close, it is a fitting time to recap the year and what is now a monumental 10 years since starting ScienceLogic. Looking back, it has been an incredible journey thus far, but as incredible as it may seem, it still feels as if we are just getting started on the next and most important leg of our continued business acceleration. As I performed as Santa Claus for the 5th year in a row at our ScienceLogic Gift Exchange & Cookie Swap holiday event last week, it reminded me how engaged, inspired, and aligned our entire ScienceLogic team is today. As we attack an ever-evolving set of business, customer, and product goals, we continue to be energized by the belief that we are building the industry’s best fully integrated monitoring/management platform. From the very inception of ScienceLogic 10 years ago, the thousands of interesting use case scenarios embedded in the product have been bound by the common thread of building a technology that improves the lives of our customers.
As we head into 2014, the IT industry is undergoing a tectonic shift towards cloud computing. As storage, servers, horizontally scaling apps, SDN, security, and the Software-Defined Datacenter become more inextricably connected and integrated, the demands for IT to be proactive are more complex than at any other moment in time. So my recommendation is to stop worrying about whether you have a true cloud and focus on extending your monitoring/management tools across internal and external compute/application infrastructure so you achieve the cost, resiliency, and capacity benefits that seize upon the promise of cloud transformations.
The best is yet to come from ScienceLogic as we head into the New Year with the greatest momentum in our storied history. We will continue to align our core strategic and tactical initiatives with our goal to deliver spectacular business outcomes for our customers. From the entire ScienceLogic team we wish you a blessed holiday and a thriving New Year!
Tagged with: happy holidays, Network Monitoring
Last week, Gartner offered us a free ticket to their Data Center conference in Las Vegas. Who can pass up a free ticket to a major Gartner event, right?! The fact that it was in Las Vegas also added some enticement, as I needed to cash out some of my old gambling investments from the past. After the event, as I was flying back home and thinking about the trip, two things were fresh in my mind. First, losing only a crisp $10 bill at the slots, and not the usual amounts, made me feel pretty good. Second, some of the most interesting things I learned came from meeting with analysts and vendors and attending their sessions; here are the top 5 most interesting items from my many pages of notes.
1. People still get locked into the Big Four.
According to a poll taken at the conference on current and desired future monitoring platforms, 40% of the Big Four’s current customers attending the conference want OUT (in other words, they would like to use other options in the future). In contrast, all of the other vendor options increased in the percentage of attendees who would like to use them in the future compared to the percentage currently using them. This is just further proof that the Big Four are still locking their customers in while those customers want to get out.
2. The future is all about custom monitoring
According to one of the analyst sessions I attended, although the underlying basis of IT infrastructure may be the same for most organizations, each organization at the business level is different. Therefore, since we all work in differing environments, custom monitoring – which can include custom dashboards, custom analysis, and data collection – becomes an important necessity. I was pleased to hear this, because ScienceLogic is focused on making it super easy to create custom dashboards and reports, as well as building integrations with other data sources via our RESTful API (a rough sketch of what such an integration can look like follows below). And if one were looking into Business Service Management (BSM) as a way to manage their IT, then ScienceLogic’s customer analytics, grouping, and views are a perfect fit.
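To give a hedged sense of what such an integration can look like, here is a tiny sketch using PowerShell’s Invoke-RestMethod. The base URL, endpoint, and field names below are hypothetical placeholders for illustration only, not the documented ScienceLogic API:

# Hypothetical base URL and endpoint – substitute your own system's API details.
$base = 'https://em7.example.com/api'
$cred = Get-Credential   # prompts for API credentials

# Pull a (hypothetical) device list and keep a few fields for a custom report.
$devices = Invoke-RestMethod -Uri "$base/device" -Credential $cred
$devices | Select-Object name, ip, state | Export-Csv -Path devices.csv -NoTypeInformation

The point is less the specific call than the pattern: pull data out over REST, reshape it, and feed it into whatever custom dashboard or report your business needs.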
3. Agentless monitoring is still king
The new focus in monitoring for Gartner is ‘Availability Monitoring,’ and one of the key ingredients for this is agentless technology. The main benefit of agentless technology is that it removes the requirement to install, update, and maintain additional software on every computer from which data collection is needed – essentially, a huge boost in efficiency. I was glad to hear this too, because ScienceLogic has been agentless from day one and will always be a single platform for monitoring infrastructure, the cloud, and ‘the internet of things’ – end-to-end in a single dashboard.
4. Monitoring the cloud, from the cloud
A number of companies are now providing software services from the cloud – ServiceNow, New Relic, and even ScienceLogic. It’s amazing how just a few years ago there was no IT organization that would even consider hosting their internal IT data in the cloud; I guess they’ve realized that if Salesforce.com can store company revenue, customer, and client data in the cloud, then IT data can also live in the cloud. My guess is that there will come a day when pretty much every IT tool will be cloud-based, because it’s easier, less expensive, and the IT staff don’t have to deal with managing in-house software.
5. Application Performance Monitoring (APM) is still very much needed and required by Gartner clients
In the midst of all these shifts toward the directions of the future, it’s important not to forget about the ‘tried and true’ direction – APM. Today, there is still much more money spent on APM than on Service View (i.e. BSM) and Service Management. While clients may be asking Gartner for Service Views, at the end of the day they still want to monitor the applications that support a particular business service. So next time you see a Gartner report about BSM, it most likely will fit your APM needs just as well. As for ScienceLogic and APM – yes, we’ve got that too!
Tagged with: APM, Big Four vendors, cloud computing, custom monitoring, Gartner Data Conference, Gartner Data Conference 2013, Gartner DC, Gartner DC 2013, Las Vegas, monitoring vendors
Google has announced that its infrastructure as a service (IaaS) offering, Google Compute Engine (GCE), is finally ready for full launch.
The company first made news about the cloud service over 18 months ago. The question was always, “What was Google waiting for?” According to Google, what they were really waiting for was testing, to ensure they wouldn’t take some of the beatings that Amazon Web Services and Microsoft Azure have taken over SLAs and outages. Google claims:
“Google Compute Engine is Generally Available (GA), offering virtual machines that are performant, scalable, reliable, and offer industry-leading security features like encryption of data at rest. Compute Engine is available with 24/7 support and a 99.95% monthly SLA for your mission-critical workloads. We are also introducing several new features and lower prices for persistent disks and popular compute instances.”
Among other things, Google Compute Engine now supports most popular Linux distributions, transparent maintenance with live migration and automatic restarts. They have increased the core count up to 16 cores per instance and claim their persistent disk service provides consistent performance along with much higher durability than local disks.
Google has also brought on board a good number of customers to provide some validation of the service. According to Google’s website: “In the past few months, customers like Snapchat, Cooladata, Mendelics, Evite and Wix have built complex systems on Compute Engine and partners like SaltStack, Wowza, Rightscale, Qubole, Red Hat, SUSE, and Scalr have joined our Cloud Platform Partner Program, with new integrations with Compute Engine.”
The Google Platform has the following services:
With over 1,000 Google engineers working on Google Cloud, you can expect continued updates and new feature sets. A lot of the industry has wondered whether Google can catch up with AWS, Microsoft, RackSpace, and others. I would agree that, for new companies, trying to compete with the big cloud providers is somewhat of a lost cause. However, if anyone can really push the competition and come in late to the fight, it’s Google. I have spent the last couple of days playing with Google’s cloud, and like many of Google’s other products, they have done a very nice job of connecting all things Google. For example, the sign-up process was painless. Google has integrated Google+ and all of its other services for a transparent and seamless experience within Google. I was pleasantly surprised at how nice an experience it was.
According to Barak Regev, Head of EMEA Cloud Platform at Google:
“GCE is the first major milestone, but there’s more to come,” said Regev. “For example, we’re heavily innovating around big data and PaaS, too. Eventually, I think we’ll integrate PaaS, IaaS and big data into one beautiful solution.”
As for the accusation that its products are too “vanilla,” Regev told ComputerWeekly.com: “Many cloud providers offer a lot of variations of their solution, but we hear plenty of feedback about reliability and I believe our story is compelling in terms of providing that consistent performance. I predict that will result in an amazing uptake of our platform by many customers – be they startups, bricks-and-mortar enterprises or individual developers.”
It will face a tough fight along the way, but for customers that should be good news. It suggests prices will continue to fall and services will keep improving across the public cloud market.
Tagged with: Amazon Web Services, cloud providers, Google Compute Engine
What is PowerShell?
Microsoft describes PowerShell as a task-based command-line shell and scripting language designed especially for system administration. Ed Wilson, known as the Microsoft Scripting Guy, referred to PowerShell as Microsoft’s management direction for the future a few years ago during his TechNet webcast entitled PowerShell Essentials for the Busy Admin. PowerShell is Microsoft’s preferred management interface and has been rolled into almost every Microsoft operating system and application over the last few years. And Microsoft continues to invest in PowerShell: at TechEd this summer, they announced PowerShell 4.0 as part of the new Windows Management Framework 4.0, which was released in conjunction with Windows Server 2012 R2 and Windows 8.1 on October 18th.
PowerShell provides unprecedented flexibility to the Microsoft world. Unix and Linux administrators have been scripting management tasks for years. And, PowerShell brings a similar capability to the Microsoft Administrator. At ScienceLogic, we use it for tasks like collection and discovery. Using cmdlets like Get-Counter, we gather numerous performance counters for Windows Server and applications like Exchange Server and Lync Server. We also use cmdlets like Get-CsTopology to discover the server roles present in a Lync Server implementation.
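For illustration only, here is a minimal sketch of the kind of collection Get-Counter enables. This is not our actual collection code; the counter paths and sampling settings are simply common examples:

# A handful of common Windows performance counters.
$counters = @(
    '\Processor(_Total)\% Processor Time',
    '\Memory\Available MBytes',
    '\LogicalDisk(_Total)\% Free Space'
)

# Sample each counter three times, five seconds apart, then average the results.
$samples = Get-Counter -Counter $counters -SampleInterval 5 -MaxSamples 3
$samples.CounterSamples |
    Group-Object -Property Path |
    ForEach-Object {
        [pscustomobject]@{
            Counter = $_.Name
            Average = [math]::Round(($_.Group.CookedValue | Measure-Object -Average).Average, 2)
        }
    }

Run interactively, this prints one averaged value per counter – the same raw material a monitoring platform rolls up into performance graphs.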
PowerShell combined with Windows Remote Management (WinRM), both components of the Windows Management Framework, also provides some security benefits over WMI and DCOM. WinRM is Microsoft’s implementation of the WS-Management Protocol, which uses the Simple Object Access Protocol (SOAP) over HTTP and HTTPS and is considered to be more security friendly than WMI.
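If you want to sanity-check that transport yourself, Test-WSMan confirms that the WinRM listener on a remote machine is reachable, and the -UseSSL switch forces the HTTPS transport. A quick hedged example (the server name is a placeholder, and the HTTPS variant assumes a certificate-backed listener is already configured on the target):

# Check the default WinRM HTTP listener (TCP 5985) on a remote host.
Test-WSMan -ComputerName SERVER01

# Check the HTTPS listener (TCP 5986) instead.
Test-WSMan -ComputerName SERVER01 -UseSSL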
WinRM also enables PowerShell Remoting or the remote execution of PowerShell commands. This allows you to establish a connection to one computer and then execute commands on many remote computers. This is especially beneficial to our Managed Service Providers who provide remote managed services as they often do not have direct connections to every computer behind a customer’s firewall. Enterprise customers can also benefit from PowerShell Remoting to address the problems associated with duplicate private IP space; a problem common in the Enterprise after mergers and acquisitions.
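As a rough sketch of what that looks like in practice (the server names are placeholders, and the targets must already have remoting enabled, for example via Enable-PSRemoting):

$servers = 'SERVER01', 'SERVER02', 'SERVER03'

# One connection point, many targets: the script block runs on each remote
# machine, and results stream back tagged with a PSComputerName property.
Invoke-Command -ComputerName $servers -ScriptBlock {
    Get-Service -Name WinRM | Select-Object Name, Status
} | Format-Table PSComputerName, Name, Status

From a single management host you can fan a command out to hundreds of servers – exactly the pattern that matters when you cannot reach every machine directly.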
Like it or not, PowerShell is here to stay – at least for now. To help you learn PowerShell, Microsoft has some great free resources available:
TechNet Webcast: PowerShell Essentials for the Busy Admin
PowerShell Week: Learn It Now Before It’s an Emergency
Tagged with: microsoft, Windows Powershell Monitoring, Windows Remote Management
Cisco FabricPath (introduced in 2010) is a neat technology that enables support for highly scalable, low-latency networks. You might guess that FabricPath would be at the forefront of data centers, but you’d be wrong! Last week I had the pleasure of presenting at ScienceLogic’s Customer Symposium, where I discussed EM7’s latest support for the Cisco Nexus product line, and learned that none of our customers are actually using FabricPath at all! FabricPath is one of the technologies implemented on Cisco’s Nexus product line that addresses some of the challenges in the data center market. The ScienceLogic network monitoring software now supports the monitoring of this technology and the other technologies specific to the data center. The data center has some key challenges, such as:
- Move from siloed architecture to a fully virtualized unified data center
- Unification of network fabric (IP and storage network)
- VM Mobility – workloads move from a static resource on a fixed device to a mobile resource that can be migrated to VMs in the same POD, to machines within the same data center, and to machines across data centers
- On Demand Computing – resources are added and deleted from workloads as needed
- Cloud Computing – Enterprises can leverage data centers in the cloud
In order to address these challenges, Cisco introduced many new technologies on the Nexus product line including:
- Fibre Channel over Ethernet (and the DCB extensions required to support FCoE)
- Virtual Port Channel
- Virtual Fibre Channel
- Overlay Transport Virtualization
- FabricPath
- Fabric Extenders
ScienceLogic has always supported monitoring of basic switching platforms. Now, however, ScienceLogic provides the most comprehensive monitoring of the Nexus platforms, with the capability to monitor these additional technologies needed in the data center.
For example, Virtual Port Channels (vPCs) provide the capability to split a port channel from a server to a pair of Nexus switches, thereby utilizing all the uplinks in an active fashion while providing fault-tolerant operation. vPCs are the cornerstone when connecting anything to a Nexus, per Cisco best practices. The ScienceLogic platform is the only platform to provide the capability to visualize how your traffic is being distributed across all the members of the vPC, as shown below:
Nexus supports Fabric Extenders (FEXs), which are really just switches themselves that are managed via the parent Nexus. That is, FEXs look like remote line cards on the parent Nexus. The ScienceLogic platform leverages component mapping to show these components of the FEX as follows:
The ScienceLogic platform provides comprehensive configuration information for these various technologies and provides event policies to indicate any potential trouble with these technologies.
While our customers may not currently be using Cisco FabricPath, they are most definitely using most of the other Nexus capabilities, and the ScienceLogic platform provides monitoring support for all of these technologies.
Tagged with: Cisco, data center, network monitoring software
As product managers we are charged with understanding the market. We search for insight into problems. Problems we hope the products we help develop will solve. For many of us, our customers are the market we serve, or at least represent a cross section of it. Solving problems for them also means solving problems for the market. So, how do you go about collecting this valuable insight? How do you go about understanding your customer’s current reality?
Last week at ScienceLogic we held our annual Customer Symposium, an event where we invite our customers to participate in two intense days of knowledge sharing and interaction with all sorts of people throughout the ScienceLogic organization. Our CEO, Dave Link, kicked off the event by reminding everyone in the room – we are here because of you, the customer! I am fortunate to work for a company that is concerned with its customers. That is one of the things I liked about ScienceLogic many years ago when I too was a customer. And at ScienceLogic, we are all fortunate to have customers that are willing to share – share their time and their insight so that we can better understand their problems and our market.
I had many chances during the two-day event to discuss the projects I am currently working on with the people who will actually be using them. It was rewarding to receive confirmation that we are actually working to solve problems that are important to our customers and the market we serve. I also received numerous ideas about new problems they hope we can help them solve, or enhancements we can incorporate into future releases. Getting feedback throughout the product lifecycle means that we can deliver a better product and more value. For that, I thank all of you who shared your insight with me.
During this event, we launched the new section of our customer portal called Answers, which is a “Stack Exchange” style forum. Here customers can post questions, answers, and comments any time, day or night. This forum is visible to all our customers, but it will also be monitored by many of us here at ScienceLogic, including the product management team. Answers provides one more way for us to interact with our customers. Ask questions, answer questions, and comment on posts. Vote for questions and answers you feel have value. If you are a ScienceLogic customer, I encourage you to post your questions and feedback. And I hope you will share some of your knowledge and experience with other users in the ScienceLogic community, just like you shared your insight with me last week.
To our customers, thanks again for attending symposium and for helping me to better understand your current reality!
Tagged with: ScienceLogic Customer Symposium
Picking up where we left off yesterday, VMware offered up a new Hybrid Cloud Service offering in March of this year. This offering allows VMware customers to migrate their cloud services to a public cloud, saving money in the move through shared equipment. The idea is to have common management, orchestration, and security models without changing existing apps. However, this solution does not support OpenStack – making the ability to switch cloud providers difficult. (Remember, this is in the context of the company’s acquisition of Nicira, which made it a contributing OpenStack member.) Part of the reason for this contention is that OpenStack is effectively doing what VMware first did – creating an abstraction layer above the infrastructure – of which it believes ESX to be a core component.
Soon after that announcement, the company, along with mothership EMC, spun off a new platform called Pivotal. Pivotal brings several teams and technologies from both VMware and EMC — including Greenplum’s Hadoop (now Pivotal HD), Greenplum Database (fused with Hadoop as a new database known as HAWQ), CETAS, Pivotal Labs, Gemfire in-memory database, the Spring Application Framework and the Cloud Foundry PaaS platform. The goal of the platform is to enable the new wave of predictive big data applications, with VMware playing the role of infrastructure provider to this platform.
The Inside Scoop
Since making the announcement, VMware has taken the stance that its hybrid cloud services will target the middle to higher tier of services; no more, no less. And its SLAs are expected to remain at this tier. The question is whether starting so far behind the curve with respect to an IaaS offering can really appeal to an audience that has already been conditioned to AWS-like offerings in the market. Perhaps it’s no coincidence, then, that VMware has partnered up with Savvis, who itself is in a battle against managed service providers like Rackspace – and the combined ecosystems could benefit both organizations.
AWS vs VMware
While AWS is pulling its users toward a hybrid usage model from the public cloud side, VMware is effectively coming at this from the other side: pushing the enterprise to the cloud. In other words, it’s telling its customers that it will take any apps they have and move them to the cloud – keeping them firmly on its own virtual infrastructure. AWS will make your app cloud-ready and very much has a utility/volume route in mind. VMware is not inclined to compete in the utility game because of the license overhead with which it has done so well in legacy infrastructure in the enterprise. The play is to keep embracing and dominating the enterprise space, while giving itself every opportunity to dip into the AWS party.
The fine line dance:
In essence, this vHCS announcement is VMware’s attempt to prevent blood loss to AWS and Microsoft Azure. However, with 4,000 participants in its VMware Service Provider Program (VSPP) representing about 85% of its revenue (via distribution), VMware is going to have to make these providers very comfortable with the new service. And this is the same game that Microsoft has been playing for years, increasingly marginalizing some providers, and forcing the more assertive ones to move on to higher-level value-added services. It remains to be seen how this segment of the market will temper the strain of a vendor sitting on their customers’ doorsteps with increasingly cloudy (service) capabilities.
From a ScienceLogic perspective – we won’t ever undercut our partners, nor enter the managed services space. We’ve got you covered no matter what your technology of choice is.
Tagged with: hybrid cloud, Hybrid Cloud Services
I recently watched a VMware representative being interviewed on Silicon Insider, and it quickly became obvious that VMware is in somewhat of an uncomfortable position when it comes to proclaiming its leadership in the cloud. For unquestioning enterprises, the line is a truism in that VMware dominates (with 65% market share) in large enterprise IT environments, and virtualization is the major tipping point for what today underlies numerous cloud platforms (but not the majority, since Xen is still the popular default choice for those with less cash in their pockets). However, for those of us who have known VMware as a software technology provider, abstracting away the management of underlying infrastructure for the enterprise, and not as a cloud platform service provider, its claim of cloud leadership is purposefully ambiguous.
How much cloud does VMware actually do?
The company held its annual VMworld event not too long ago, with the key theme of the conference centered around software-defined data centers, with the two sub-categories of networking and storage top of mind, closely followed by compute and the associated management platforms. The objective for VMware is to have everyone virtualize the rest of the data center network and IT components, and build a structure to support hybrid environments. Those hybrid environments are not necessarily about on-premise vs. off-premise 3rd-party data center deployments. Rather, this refers to the mish-mash of, for example, storage devices and the management techniques used to manage that storage, which needs a single control plane. Offering the market more intricate technology is what VMware does well, but that’s a different task from actually operating a cloud platform.
With the big announcement of VMware’s Hybrid Cloud Service (vHCS), questions moved quickly to its commercial aspects. How are instances obtained on this platform? How does the pricing work? What SLAs will VMware commit to – they have stated best-of-breed SLAs, and that’s no simple task given the number of cloud providers out there. According to 451 Research, there are about 250+ vCloud IaaS providers currently active. No wonder, then, that VMware has hedged its bet and expanded partnership options for its vCloud Hybrid Cloud Service (vHCS) to include a “franchise” option.
Although Service Providers very much played second fiddle to everyone else in VMware’s world for numerous years, the company has slowly been pandering to the fastest growing consumers of IT infrastructure. vHCS is being operated out of 3rd party data centers (read Savvis) in San Diego and Sterling, Virginia, by hosting operators for VMware and tying the live services to VMware’s portal for users. Not for the first time Savvis is the guinea pig for such a franchised service in the hopes that deployments will move to more Savvis facilities.
Is it really new?
Aside from the fact that the new vHCS is missing things like object storage, is this really a new concept from VMware? It was just a couple of years ago, in 2011, that the company launched its Cloud Foundry service – an open source project that made use of numerous different clouds, developer frameworks and application services to provide an open platform as a service (PaaS). The idea was to allow easy deployment of applications written using Spring, Rails and other modern frameworks. And at VMworld, VMware ironically continued to push the concept of OpenStack support (from a Cloud Foundry perspective), although CEO Pat Gelsinger doesn’t expect it to catch on in the enterprise space so much as the MSP space – an area that VMware wants to penetrate further.
Tomorrow, in part two of this blog post, we’ll fast forward to March of this year, when VMware offered up a new Hybrid Cloud Service offering, allowing its customers to migrate their cloud services to a public cloud, saving money in the move through shared equipment.
Tagged with: hybrid cloud, Hybrid Cloud Services