Converged Infrastructure: Bringing Maturity to the Adolescent Cloud

January 29th, 2015

Expectations Drive Redefinition of IT Infrastructure

Back in the nineties, John C. Dvorak had a radio show about tech called “Software/Hardtalk.” His tagline was (I paraphrase), “Remember… whatever I told you this week will be null and void by this time next week.” How true that is! It seems all the assumptions underlying how we deliver and consume IT are in flux.

People expect to get the best information whenever and however they want. Given those expectations, how do we build a new, more liquid data infrastructure to deliver that experience? Changes are required, and they are driving a reexamination and redefinition of IT infrastructure.

Worker productivity in the United States has increased by almost 50% since 1990, and much of that gain is a product of enhanced IT. However, while IT has risen to the fore, it is now a victim of its own success. Well-executed IT is now seen as a competitive advantage for businesses. So IT must innovate to keep pace with rising expectations, growing volumes of data, and pressure to lower costs, all while being told to enrich the services it delivers.

Standardize and Scale

Just as custom car manufacturing gave way to Ford's assembly line, IT infrastructure must be tooled up for mass production. The same rules apply: standardize and scale. Like an assembly line, make IT more:

  • Manageable
  • Scalable
  • Efficient
  • Cost-effective
  • Interchangeable
  • Consumable

To achieve these goals, practitioners have been squeezing uniqueness and complexity out of IT for some time. The evolutionary chain began with dedicated application infrastructure. Hardware was costly and difficult to manage, but it was also integral to that application.

Then came virtualization, which detached the application from the hardware so that all resources could be placed in undifferentiated pools. These pools became “clouds,” and suddenly the question was: why did the app ever need to run in your data center, on your hardware, at all? Move everything to the cloud! Not so fast!

The Cloud is Young and Awkward

While the cloud seems like a great way to relieve some of that pressure to adapt, the transition to “service delivery” and “cloud” will be as awkward as any puberty you’ve ever seen. Cloud technology is quite young and doesn’t know what it needs to be just yet. But it’s learning. Even though some applications are ready for the cloud, many are not. And, conversely, the cloud is just not ready for some applications. The situation is as awkward as an eighth-grade dance.

Barring the emergence of sudden, miraculous maturity in cloud technology, how can an organization relieve some of the pressure to increase IT efficiency? By embracing the concept of converged infrastructure!

What are the Benefits?

Typical converged infrastructure consists of compute, networking, and storage hardware, integrated and certified to operate as a single, reliable, predictable unit of data infrastructure. Extending or upgrading capacity becomes as simple as adding more units, like adding “bricks” or “modules” to a structure. Because these modules are engineered with a specific operational envelope in mind, they scale at a predictable rate, so you know when you are approaching the limits of your resources and need to add another brick.

Because each unit of converged infrastructure incorporates a known set of hardware and software components, the cost of each unit is well understood. That predictability allows organizations to normalize and manage their IT budgeting and to add units of capacity when they are needed, and no sooner.

Improve Maturity and Increase ROI

Once you deploy converged infrastructure, the benefits of economies of scale kick in. The management tools used to deploy and manage your first module of converged infrastructure will be able to manage each subsequent unit you add. So you get to keep the tools you choose, and the learning curve flattens out for your IT staff. You can standardize your processes, standardize your training and onboarding, achieve faster time-to-resolution for issues, and increase your ROI.

There’s much more to converged infrastructure and a whole set of benefits that are not included here. However, if you are feeling the pressure to improve IT efficiency while you wait for the cloud to mature, take a closer look at converged infrastructure. It will save you money and buy you time.

Already using a converged infrastructure system? ScienceLogic provides total monitoring for FlexPod and Vblock, powered by CloudMapper.

Remembering Don Pyle

January 22nd, 2015

It is with a heavy heart that we mourn the loss of our dear colleague, stellar friend, mentor to many, and most of all simply an excellent man: Don Pyle. Don was a man molded according to an unusual pattern, with unique qualities of character that made him singularly wholesome and appealing.

Personally, Don was widely known for his vivacious spirit, thirst for life, and love of family.  A philanthropist, avid fisherman, and doting grandfather, Don found true happiness when doing for others and surrounded by people he loved.

(Photo: Don Pyle at a recent ScienceLogic team-building activity)

Professionally, Don was renowned for his incredible track record in the tech industry. Humble and brilliant, Don possessed a special leadership quality that lifted others to seek accomplishments beyond what they originally thought possible. Every business in which he invested his time and energy was hugely successful.

We were delighted to welcome Don to ScienceLogic in September 2014 as our Chief Operating Officer. And though his time working at the company was short, our lives are all richer on account of having known and worked with him.

While the sudden loss of Don and his family is a blow to the ScienceLogic team, we know that we are not alone in grieving the passing of a great, irreplaceable man. To those hundreds around the globe that have directly reached out to the ScienceLogic team to offer condolences, we thank you from the bottom of our hearts.

Our thoughts and prayers are above all with Don, Sandy, their grandchildren, and their remaining family members. We ask that you please respect the privacy of Don’s family during this difficult time, as they have lost more than we can ever imagine.

If you have any quotes or remembrances of Don, please send them to the following email address: rememberingDonPyle@gmail.com. We may publish them in a compilation post and share with the family. If you would like to remain anonymous, simply sign your email as anonymous.

Can I Autodiscover My New Home?

December 22nd, 2014

I made a big change during the summer of 2014. I packed up my family and finally moved to Austin, TX. Now, I’d like to talk about how awesome Austin (and the BBQ) is, but I’ll save that for later. Having been in the software technology field for a while, drawing parallels between ScienceLogic’s software and real-life experiences is practically an unconscious activity for me – which leads us back to my big move to Austin, TX.

Moving halfway across the country in two vehicles toting three kids, two dogs and a boat is about as much of an adventure as you might imagine. Thankfully, our marathon road trip from San Diego to Austin only lasted two very long days.

I knew this move would require a lot of work once we arrived. What has really proven to be a tremendous amount of work is setting up all the technology at the new homestead in my limited amount of spare time on evenings and weekends. As I was pulling Cat5e wiring, manually punching down jacks and crimping wire for all my tech goodies, it hit me… Autodiscovery!

You see, my previous home was wired up and dialed in to the finest detail. And now I had to start over from scratch. Every wire, every wall plate, every plug and connection had to be tediously configured. I found myself wishing I could press the discovery button and have everything mapped out the way it was at my old house, similar to how VMware, AWS, NetApp, and the rest get automatically mapped out within EM7. Just one click and an entire AWS account or vCenter is automatically connected and its relationships are mapped.

Unfortunately, wiring my new house does not fall under the solutions that ScienceLogic offers. I still need to wire up the stereo, and then make a secret decoder for the family to explain: Roku is set to Video1, the cable box to CBL, Soundbridge to Video2, the PS3 to DVD, etc. It goes on and on. I have weeks of work before my tech will be up and running like it used to be (and the family will likely be just as confused as before).

Oh, how I wish I had CloudMapper for the house. With a simple click, I could see how the family room TV, cable box, stereo, Roku and the remotes were magically connected and controlled one another. Then repeat for bedrooms, and the video gaming rig. You get the picture, right?

Doing this all manually is madness! There are just too many moving parts in a modern connected home. If only, with the click of a button, I could have all my technologies mapped out, with every relationship between system components shown visually for instant understanding! That would make moving a breeze.

The Spirit Of Giving

December 18th, 2014

Imagine yourself swallowed in a sea of people who are all volunteering their time to help honor and remember America’s fallen heroes. That is how I spent this past Saturday with my son, Eric, and my wife, Anne.

We woke up at 5 am and hustled down to Arlington National Cemetery to join over 20,000 others who had chosen to spend this beautifully crisp, twenty-six-degree morning volunteering with Wreaths Across America, which has become a fabulous US tradition.

The handmade wreaths arrived in a caravan of 75 trucks, driven from Maine to Virginia. Our focus was to take on one of those trucks and help decorate the 200,000+ tombstones with festive remembrance. We captured the holiday spirit as we helped hand out over 3,500 wreaths to a swarm of volunteers. Some of the volunteers distributed wreaths on behalf of fallen family members; others told us they were drawn to the event to honor our service men and women who gave the ultimate sacrifice. One truck driver we spoke with, who had driven the wreaths down complete with a police escort, was overcome with emotion. It was clear just how much this meant to her, with flags waving and spectators lining bridges to watch the caravan throughout the 800+ mile trip.


As so many people gathered together for such a great cause, it sparked my inner engine of volunteerism and reminded me of how this spirit is alive and well within ScienceLogic.

One of the things I am honored and humbled by is the effort the great people I work with put forth every year to make the lives of others better. The ScienceLogic team continues to outdo themselves in philanthropic efforts. The team’s Thanksgiving and Christmas donation to “Food for Others” was larger in 2014 than in any previous year. In addition, ScienceLogic’s Toys for Tots contributions overflowed unlike any year prior!

The ScienceLogic team also supports the American Red Cross throughout the year by hosting two blood drives and by making monetary donations in response to many of the disasters that have devastated areas of the world. Through ScienceLogic’s global team, we are able to find a way each and every week to give back, and that is what makes the world a better place.

The spirit of volunteerism and philanthropy is infectious throughout our headquarters. I can’t wait to see what the ScienceLogic team will accomplish next year!

Dave
Co-Founder & CEO 

AWS Re:Invent 2014 and Top 5 AWS Predictions

December 10th, 2014

This year’s AWS Re:Invent has come and gone. Compared to many conferences I have attended in the past, the one thing AWS Re:Invent does a great job of is announcements. This year was no exception. Instead of doing a recap of all the new announcements (which have been covered by AWS and others), I thought it better to focus on what is next: my 2015 predictions for AWS.

1. Obvious one. More AWS Services:

Although it may seem obvious that AWS will continue to add more and more services, I call it out because, compared to any other cloud provider, or frankly most any other software/technology company, AWS embarrasses the rest with its speed, diversity, and innovation in this space, by a large margin. This in itself is something AWS has to continue doing if it is going to remain the leader and keep market share. Expect more AWS.

Gartner highlights AWS’s market dominance in its recent Magic Quadrant covering the IaaS space. AWS provides free access to that report from its Analyst Reports page.

2. API everything:

With more services and more ways to connect to the cloud, APIs have become the keystone holding everything together. Whether it is for migration of new workloads, ease of deployment, or monitoring of those workloads, a new set of developer tools continues to emerge, all leveraging APIs to make the magic happen. AWS has also brought out its own suite of developer tools for building natively on AWS, putting the development life cycle on the AWS platform from start to finish. Many other cloud providers, like Microsoft Azure, are following suit, which only means an API for everything. I like the way Natalie Gagliordi put it when summarizing Forrester:

“Back-office applications will need RESTful interfaces. Developers tasked with linking together apps via APIs are going to be on the lookout for services that communicate via REST interfaces, Forrester says. But rather than waiting for REST APIs via an upgrade, companies will look to replace their enterprise service with an API management solution.”

It isn’t just AWS APIs: all apps need to have APIs if they want to survive the future.
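
To make that concrete, here is a minimal sketch of what programmatic access to AWS looks like using the boto3 SDK. The region, filter, and credential setup are illustrative assumptions rather than anything from the announcements above:

    import boto3

    # Assumes AWS credentials are already configured (environment variables,
    # ~/.aws/credentials, or an instance role) and boto3 is installed.
    ec2 = boto3.client("ec2", region_name="us-east-1")

    # Under the hood this is a signed HTTPS request to the EC2 API endpoint;
    # the SDK simply wraps the REST call and parses the response.
    response = ec2.describe_instances(
        Filters=[{"Name": "instance-state-name", "Values": ["running"]}]
    )

    for reservation in response["Reservations"]:
        for instance in reservation["Instances"]:
            print(instance["InstanceId"], instance["InstanceType"])

The same pattern (authenticate, call an endpoint, consume structured JSON) applies whether the tool in question is migrating workloads, deploying them, or monitoring them.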

3. More compute, storage, for less.

Normally I would say you have the product triangle to deal with: good, fast, or cheap, pick two of the three.

In this case, with AWS you get the “good” and the “cheap,” and you can afford to wait on the “fast.” Based on our own use, our customers’ use, and many of the forums I have read, I’d say the vast array of compute and storage services already offered isn’t fully utilized. So, Mr. & Mrs. Late Adopter, you get the best of all three with the announcement of the Intel “Haswell” chip built exclusively for AWS EC2 C4 instances. The C4 instance will be available in five configurations, ranging from two to 36 virtual CPU cores and from 3.75 GB to 60 GB of RAM, and will be made using Intel’s smallest-yet 22-nanometer process technology. The processor operates at a base speed of 2.9 GHz and, with Turbo Boost, can reach up to 3.5 GHz, according to Amazon. What that really means is more compute cycles available at a smaller price point. The same concept rings true with storage: SSDs and spindles only get cheaper and cheaper while offering more and more capacity. As other clouds try to close the gap in terms of services, AWS will keep providing more for less to maintain its position, as well as to make the financial barrier to moving workloads trivial.
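
To see how predictable that lineup makes sizing decisions, here is a small right-sizing sketch. The per-size vCPU and memory figures are the commonly published C4 specs, included here as assumptions to verify against AWS’s own documentation:

    # Published vCPU / memory (GB) figures for the five C4 sizes
    # (assumed here for illustration; confirm against AWS's documentation).
    C4_SIZES = {
        "c4.large":   (2, 3.75),
        "c4.xlarge":  (4, 7.5),
        "c4.2xlarge": (8, 15.0),
        "c4.4xlarge": (16, 30.0),
        "c4.8xlarge": (36, 60.0),
    }

    def smallest_c4(vcpus_needed, ram_gb_needed):
        """Return the smallest C4 size that covers both requirements."""
        for name, (vcpus, ram_gb) in C4_SIZES.items():
            if vcpus >= vcpus_needed and ram_gb >= ram_gb_needed:
                return name
        return None  # the workload needs more than a single C4 instance

    print(smallest_c4(6, 12))  # -> c4.2xlarge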

4. Cloud Security to continue to tighten up as more enterprises direct connect.

With SDN, data sovereignty concerns, and cloud interoperability all increasing, more organizations will establish policy and implement technology to ensure data doesn’t leave trusted boundaries. I sat through many different AWS Re:Invent sessions about the enhanced features of VPN, Direct Connect, and the blend of both to help make this happen. The more enterprises connect into the cloud, the more we’ll see security concerns heighten. This next year, more and more enterprises will leverage Direct Connect to make AWS IaaS part of their resources. Mr. & Mrs. Security Professional: I sense a rapid increase in demand for network security professionals.

5. Relationships are the key to everything.

With AWS releasing the new Config service, which adds to the CloudWatch and CloudTrail capabilities, there is a clear path toward exposing the relationships between resources. The more you can see how things are connected, the more success you will have, not only in moving to the cloud but in having confidence that you are managing what needs to be managed from an application and service-delivery view. AWS will continue to extend the functionality of the Config service, and I believe they will build reports or mapping views of these relationships. Today, Config already gives you the topology tree structure.

(Screenshot: the AWS Config console)

So now, change this a bit and provide the same data in a map view that lets you navigate up and down the trees and see how things are connected, and we’re cooking with gas. I can see this functionality being part of the next phase of relationships and visibility offered by AWS. Want to get a foretaste of what that might look like? Check out MapMyCloud.
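
For a taste of the relationship data Config already exposes, here is a minimal sketch using the boto3 SDK. The resource type and instance ID are placeholders, and credentials are assumed to be configured:

    import boto3

    config = boto3.client("config", region_name="us-east-1")

    # Pull the most recent configuration item for a single resource.
    # The resource ID is a placeholder; substitute one of your own.
    history = config.get_resource_config_history(
        resourceType="AWS::EC2::Instance",
        resourceId="i-0123456789abcdef0",
        limit=1,
    )

    # Each configuration item carries a 'relationships' list describing how
    # the resource connects to others (VPC, subnet, security groups, volumes).
    for item in history["configurationItems"]:
        for rel in item.get("relationships", []):
            print(rel["relationshipName"], "->",
                  rel.get("resourceType"), rel.get("resourceId"))

A map view would essentially be a visualization layered over exactly this data.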

ScienceLogic and Citrix CloudPlatform

November 21st, 2014

We were recently fortunate enough to be featured in Citrix’s corporate blog, in an article written by Valerie DiMartino. In the blog she does a masterful job of tying Old Bay potato chips, a major U.S. river, and ScienceLogic together. More importantly, she also talks about how ScienceLogic and Citrix’s CloudPlatform work together to offer a slick cloud management control layer.

Citrix CloudPlatform is the industry’s only future-proofed, application-centric cloud solution proven to reliably orchestrate and provision desktop (and traditional scale-up enterprise applications), web (scale-out cloud native applications) and datacenter infrastructure workloads within a single unified cloud management platform. This turnkey solution is an agile, flexible, efficient and open cloud orchestration and provisioning platform that allows you to leverage existing virtualization and hardware investments, and is trusted to power the world’s leading clouds.

ScienceLogic’s certified integration with CloudPlatform delivers a future-proofed cloud orchestration and monitoring solution. By automatically discovering, mapping, and applying the right monitoring policy to your entire CloudPlatform infrastructure, this integration ensures your cloud deployment is a success. Together ScienceLogic and Citrix will take your private cloud to the next level.

SSL 3.0 and the POODLE Attack

November 10th, 2014

Another security vulnerability has hit the web. This time, it is the POODLE attack—and, no, it is not a puffy little dog.

POODLE, which stands for “Padding Oracle On Downgraded Legacy Encryption,” is a man-in-the-middle exploit that takes advantage of a software client’s ability to fall back to the much older SSLv3 protocol instead of using TLS, which is not affected by this vulnerability. SSLv3 is used in older browsers and servers, and this aging protocol is seen as problematic because it is still widely supported but no longer maintained.

This new vulnerability targets clients rather than servers, which were the targets of other recently discovered attacks (Heartbleed and Shellshock). POODLE affects SSLv3, which encrypts the communication between client and server, allowing man-in-the-middle attacks that enable a hacker to gain access to users’ data.

While the likelihood of this type of attack is low, the advice from Red Hat is to implement TLS exclusively in order to avoid flaws in SSLv3.

Some of the services and clients that may be affected by this vulnerability include:

  • httpd (Apache)
  • MySQL (Enterprise)
  • OpenLDAP
  • CUPS
  • Tomcat
  • Firefox/Chromium
  • Dovecot/Postfix
  • Safari
  • Curl

Sounds scary, right? The problem is with mitigation. Since this is a client attack, it’s difficult to fix software over which you have no control. Within hours of the vulnerability being announced, ScienceLogic issued guidance while waiting for an official fix from CentOS (Red Hat), providing customers with instructions for turning off SSLv3, should they consider POODLE a concern for their own deployments.
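
As a generic illustration of what “turning off SSLv3” looks like in code (a minimal sketch for a Python TLS client, not ScienceLogic’s actual guidance; example.com is a placeholder host), the fix is simply to refuse the downgrade:

    import socket
    import ssl

    # Build a client context that negotiates the highest protocol available,
    # then explicitly forbid SSLv2 and SSLv3 so a man-in-the-middle cannot
    # force a downgrade below TLS.
    ctx = ssl.SSLContext(ssl.PROTOCOL_SSLv23)
    ctx.options |= ssl.OP_NO_SSLv2 | ssl.OP_NO_SSLv3

    with socket.create_connection(("example.com", 443)) as sock:
        with ctx.wrap_socket(sock, server_hostname="example.com") as tls:
            print("Negotiated protocol:", tls.version())

Server-side configuration follows the same idea: keep TLS enabled and remove SSLv3 from the list of allowed protocols.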

CentOS has issued updates for OpenSSL to help address the vulnerability. The updates introduce the TLS Fallback Signaling Cipher Suite Value (TLS_FALLBACK_SCSV), a mechanism that aborts the connection should a client attempt to fall back to an SSL version when TLS is supported. Currently, the only browser to support this mechanism is Google’s Chromium. As other browsers are upgraded, TLS_FALLBACK_SCSV will be supported.

MySQL uses a different version of OpenSSL for its client-server connection. Oracle has issued updates to MySQL. Customers should upgrade to the ScienceLogic-provided versions of MySQL 5.5.40+, and our 7.5 customers should upgrade to MySQL 5.6.21+.

While all of this sounds scary, you can count on ScienceLogic to address security concerns as soon as they arise. We conduct regular security audits of both EM7 and the platform on which it is built. In most cases, because of EM7’s architecture, pushing these fixes out to EM7 appliances can be done quickly with little or no downtime.

Shellshock

September 30th, 2014

If you haven’t been totally heads-down over the past 5 days keeping your data center running (and if you have, you need to contact us immediately ;-) ) you’ve probably heard about the latest security vulnerability involving, literally, hundreds of millions of Linux-based devices, from servers to routers to storage subsystems. Dubbed “Shellshock,” this latest vulnerability is actually a set of vulnerabilities. NIST has rated these vulnerabilities 10-out-of-10 for their impact and exploitability. If you thought Heartbleed was bad, Shellshock has the potential to be nothing short of catastrophic. The vulnerabilities affect nearly every Linux server shipped over the past 20+ years. Hype? Not really. Within 24 hours of the announcement of the vulnerability, botnet attacks were already being seen.

Bash is the affected Linux application and is the default ‘shell’ for executing scripting commands as well as the command-line interface favored by most admins. In other words, if you’ve used Linux at a command-line level, you’ve probably used Bash. Every time an application executes a set of command-line instructions, it uses Bash, and this is where the vulnerability comes in. Under the right invocation, Bash can essentially allow potentially malicious code to be executed without appropriate security validation.

The gory details are outlined by NIST:

“GNU Bash through 4.3 processes trailing strings after function definitions in the values of environment variables, which allows remote attackers to execute arbitrary code via a crafted environment, as demonstrated by vectors involving the ForceCommand feature in OpenSSH sshd, the mod_cgi and mod_cgid modules in the Apache HTTP Server, scripts executed by unspecified DHCP clients, and other situations in which setting the environment occurs across a privilege boundary from Bash execution, aka ‘ShellShock.’”
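
If you want to check whether a particular bash binary is affected, the widely circulated test can be reproduced from Python as a quick sketch (the variable name and marker string are arbitrary; run it only against systems you own):

    import subprocess

    # Classic CVE-2014-6271 probe: a function definition with a trailing
    # command smuggled in through an environment variable. A vulnerable
    # bash executes the trailing echo while importing the variable.
    payload = "() { :; }; echo SHELLSHOCK-VULNERABLE"

    result = subprocess.run(
        ["/bin/bash", "-c", "echo probe complete"],
        env={"PROBE": payload, "PATH": "/usr/bin:/bin"},
        capture_output=True,
        text=True,
    )

    if "SHELLSHOCK-VULNERABLE" in result.stdout:
        print("This bash still executes trailing strings (CVE-2014-6271).")
    else:
        print("This bash appears patched against CVE-2014-6271.")

On a patched bash, the marker never appears because the code after the function definition is no longer executed when the environment is imported.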

At ScienceLogic, we routinely check for CERT and RedHat security announcements, and have a process in place to escalate these to EM7 hotfix status based on severity. In the case of Shellshock, we recognized the impact quickly, and within 24 hours of RedHat announcing the fix for CVE-2014-6271, on Wednesday, September 24th, we had a hotfix ready and available on our customer portal. We also recognized that the initial fix for Shellshock from RedHat might not be the only one needed, and we correctly anticipated a second vulnerability announcement was coming. On Friday, September 26th, we developed a patch for the second RedHat vulnerability, CVE-2014-7169, again within 24 hours of the announcement! The cumulative fix is now available.

If there’s a good news story here, it’s that you can count on ScienceLogic to ensure your EM7 deployments offer the best possible security posture. Our regular, independently-conducted penetration testing and our recent approval for JITC certification are additional evidence of that.

The architecture of EM7 also ensures any security updates are quick and seamless. Because EM7 is agentless, any patches like Shellshock are required only on EM7 itself—there are no device-dependent agents to worry about—and in the case of Shellshock, this is a big deal. If every managed device had to run an agent, there could be a required patch for every managed device. The logistics of these agent updates can be a huge time and cost issue.

What’s interesting to note here, at the end of the day, is that any software application can have latent (and in this case, extremely latent) vulnerabilities. What matters the most is that you have a company behind you that values security and customer priorities as much as you do.

VMware and OpenStack

August 27th, 2014

There has been a lot of talk about VMware and OpenStack during the VMworld 2014 keynotes this week. I attended a breakout session specifically about the topic to see what other details could be gathered on the integration and how it is all supposed to work.

I found it interesting that, even in the breakout session, somewhat backhanded comments were made and negative slides were shown about OpenStack (keynote smack talk). The indirect message was, “You need an army of developers to get OpenStack to work, but, no fear, VMware is here—announcing its own OpenStack distribution!”

With VMware’s distribution, the total number of major OpenStack distributions has grown to eight. For me, one of the challenges with OpenStack is that everything is so piecemeal. With eight distributions and 11 components making up the stack—all at different phases of adoption—it is painfully hard for anyone to run OpenStack in production unless your organization has an ongoing development team keeping things in sync. VMware said that some of the eight distributions plan on self-committing VMware code changes, but I question the sustainability of this plan.

Cost and Performance

VMware spent some time comparing the performance of Red Hat Storage running OpenStack to VMware’s Virtual SAN running the same. They noted how much faster and better they are than Red Hat, then went on to compare cost over time:
(Charts: IOs per second; solution cost over three years)

According to VMware’s study:

“In our testing, the VMware vSphere with Virtual SAN solution performed better than the Red Hat Storage solution in both real world and raw performance testing by providing 53 percent more database OPS and 159 percent more IOPS. In addition, the vSphere with Virtual SAN solution can occupy less datacenter space, which can result in lower costs associated with density. A three-year cost projection for the two solutions showed that VMware vSphere with Virtual SAN could save your business up to 26 percent in hardware and software costs when compared to the Red Hat Storage solution we tested.”

Summary

OpenStack adoption is gaining momentum, but the platform still needs to mature. I look at OpenStack like aged cheese: the longer it ages, the better it gets. VMware’s new distribution, added awareness, and increased contributions will only help OpenStack grow. However, I question whether adding one more player will really help the technology age more quickly, to the point where it is ready to be consumed by all types of businesses, with or without hands-on development teams. As it sits now, VMware has given OpenStack some backhanded compliments while still trying to tempt enterprises to consider the platform as an option when VMware is under the hood.

As eWeek put it: “VMware has been one of the top contributors to the open-source OpenStack cloud platform over the last several years, and now the company is taking the next logical step by announcing its own OpenStack distribution.” (Source: http://www.eweek.com/cloud/vmware-announces-its-own-openstack-distribution.html)

VMworld 2014: Day 1 Keynote

August 25th, 2014

Some 22,000 attendees representing 85 countries are at VMworld 2014 this week. This year’s conference theme is “No Limits,” and this morning’s keynote highlighted three major topics:

  1. SDDC (Software Defined Data Center)
  2. Hybrid Cloud (Migration/Adoption)
  3. End-User Computing

In alignment with those themes came a few product release announcements:

  • VMware NSX 6.1
  • vCloud 6.0 beta
  • Virtual Volumes & Virtual SAN 2.0 beta
  • VMware vRealize Suite

VMware CEO Pat Gelsinger spoke about being brave and embracing the “liquid” world we live in, reminding us that the status quo isn’t the status quo. (Dr. Horrible rings a bell when I hear that phrase.)

Pat went on to introduce VMware EVO, the hyper-converged infrastructure portfolio, with EVO:RAIL and EVO:RACK. These offerings are focused on providing turnkey SDDC solutions that enable enterprises to move to the cloud faster, without the expense of alternatives like FlexPod and Vblock: pick the partner hardware of your choice and use VMware to stitch it all together.

Bill Fathers of VMware’s Hybrid Cloud Services business unit continued the keynote, focusing on vCloud Air. VMware’s view is that “hybrid cloud” isn’t so much migrating to a public cloud as it is moving workloads from one VMware platform to any other VMware platform—VMware itself being “the cloud” in the mind of VMware. Continuing this message, VMware is relaunching the vCloud Air public cloud service this September in an effort to capture public cloud market share currently controlled by AWS, Azure, and others.

Look for my writeup on the session covering VMware and OpenStack coming later this week!
