Google has announced that its infrastructure as a service (IaaS) offering, Google Compute Engine (GCE), is finally ready for full launch.
The company first made news about the cloud service over 18 months ago, and the question has always been, "What is Google waiting for?" According to Google, the answer was testing: the company wanted to make sure it would avoid the kind of beatings Amazon Web Services and Microsoft Azure have taken over SLAs and outages. Google claims:
“Google Compute Engine is Generally Available (GA), offering virtual machines that are performant, scalable, reliable, and offer industry-leading security features like encryption of data at rest. Compute Engine is available with 24/7 support and a 99.95% monthly SLA for your mission-critical workloads. We are also introducing several new features and lower prices for persistent disks and popular compute instances.”
Among other things, Google Compute Engine now supports most popular Linux distributions and offers transparent maintenance with live migration and automatic restarts. Google has also increased the maximum instance size to 16 cores and claims its persistent disk service provides consistent performance along with much higher durability than local disks.
Google has also brought on board a good number of customers to validate the service. According to Google's website: "In the past few months, customers like Snapchat, Cooladata, Mendelics, Evite and Wix have built complex systems on Compute Engine and partners like SaltStack, Wowza, Rightscale, Qubole, Red Hat, SUSE, and Scalr have joined our Cloud Platform Partner Program, with new integrations with Compute Engine."
The Google Cloud Platform includes a number of other services beyond Compute Engine.
With over 1,000 Google engineers working on Google Cloud, you can expect continued updates and new feature sets. Much of the industry has wondered whether Google can catch up with AWS, Microsoft, Rackspace, and others. I would agree that a new company trying to compete with the big cloud providers is fighting a losing battle. However, if anyone can come in late to the fight and really push the competition, it's Google. I have spent the last couple of days playing with Google Cloud, and as with many of Google's other products, the company has done a very nice job of connecting all things Google. For example, the sign-up process was painless: Google+ and the other Google services are integrated for a seamless experience, and I was pleasantly surprised at how nice that experience was.
According to Barak Regev, Head of EMEA Cloud Platform at Google:
“GCE is the first major milestone, but there’s more to come,” said Regev. “For example, we’re heavily innovating around big data and PaaS, too. Eventually, I think we’ll integrate PaaS, IaaS and big data into one beautiful solution.”
According to ComputerWeekly.com: "And as for the accusation that its products are too 'vanilla', Regev said: 'Many cloud providers offer a lot of variations of their solution, but we hear plenty of feedback about reliability and I believe our story is compelling in terms of providing that consistent performance. I predict that will result in an amazing uptake of our platform by many customers – be they startups, bricks-and-mortar enterprises or individual developers.'"
It will face a tough fight along the way, but for customers that should be good news. It suggests prices will continue to fall and services will keep improving across the public cloud market.
Tagged with: Amazon Web Services, cloud providers, Google Compute Engine
What is PowerShell?
Microsoft describes PowerShell as a task-based command-line shell and scripting language designed especially for system administration. Ed Wilson, known as the Microsoft Scripting Guy, described PowerShell a few years ago, in his TechNet webcast PowerShell Essentials for the Busy Admin, as Microsoft's management direction for the future. PowerShell is Microsoft's preferred management interface and has been rolled into almost every Microsoft operating system and application over the last few years. And Microsoft continues to invest in PowerShell: at TechEd this summer it announced PowerShell 4.0 as part of the new Windows Management Framework 4.0, released in conjunction with Windows Server 2012 R2 and Windows 8.1 on October 18th.
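If you're not sure which version a given server is running, a quick check is the built-in $PSVersionTable variable; this is just a minimal sketch of that check:

```powershell
# Check the installed PowerShell version before relying on 4.0-only
# features such as Desired State Configuration.
$PSVersionTable.PSVersion
```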
PowerShell provides unprecedented flexibility to the Microsoft world. Unix and Linux administrators have been scripting management tasks for years, and PowerShell brings a similar capability to the Microsoft administrator. At ScienceLogic, we use it for tasks like collection and discovery. Using cmdlets like Get-Counter, we gather numerous performance counters for Windows Server and applications like Exchange Server and Lync Server. We also use cmdlets like Get-CsTopology to discover the server roles present in a Lync Server implementation.
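To give a flavor of what that kind of collection looks like, here is a minimal sketch using Get-Counter; the server name and counter paths are illustrative only, not our actual collection set:

```powershell
# Sample a few Windows and Exchange performance counters from a remote host.
# 'EXCH01' and the counter list are hypothetical examples.
$counters = @(
    '\Processor(_Total)\% Processor Time',
    '\Memory\Available MBytes',
    '\MSExchangeIS\RPC Requests'          # assumes an Exchange mailbox server
)

Get-Counter -ComputerName 'EXCH01' -Counter $counters -SampleInterval 5 -MaxSamples 3 |
    Select-Object -ExpandProperty CounterSamples |
    Format-Table Path, CookedValue -AutoSize
```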
PowerShell combined with Windows Remote Management (WinRM), both components of the Windows Management Framework, also provides some security benefits over WMI and DCOM. WinRM is Microsoft's implementation of the WS-Management protocol, which uses the Simple Object Access Protocol (SOAP) over HTTP and HTTPS and is generally considered more security-friendly than WMI.
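A quick way to confirm that a remote host answers over WS-Management before attempting any collection is Test-WSMan; the host name here is, of course, hypothetical:

```powershell
# Verify that the WinRM service on the remote host responds over HTTPS.
# Requires an HTTPS listener to already be configured on the target.
Test-WSMan -ComputerName 'LYNC01' -UseSSL
```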
WinRM also enables PowerShell Remoting, the remote execution of PowerShell commands. This allows you to establish a connection to one computer and then execute commands on many remote computers. This is especially beneficial to our managed service provider customers, who deliver remote managed services and often do not have direct connections to every computer behind a customer's firewall. Enterprise customers can also benefit from PowerShell Remoting to address the problems associated with duplicate private IP space, a problem common in the enterprise after mergers and acquisitions.
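Here is a minimal sketch of that fan-out pattern with Invoke-Command; the computer names and credential are hypothetical:

```powershell
# Run one command against many machines from a single connection point.
$servers = 'APP01', 'APP02', 'SQL01'
$cred    = Get-Credential 'CONTOSO\svc-monitor'

Invoke-Command -ComputerName $servers -Credential $cred -ScriptBlock {
    # Executes on each remote host; returns its five busiest processes by memory
    Get-Process | Sort-Object WorkingSet -Descending |
        Select-Object -First 5 Name, Id, WorkingSet
}
```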
Like it or not, PowerShell is here to stay, at least for now. To help you learn it, Microsoft has some great free resources available:
TechNet Webcast: PowerShell Essentials for the Busy Admin
PowerShell Week: Learn It Now Before It’s an Emergency
Tagged with: microsoft, Windows Powershell Monitoring, Windows Remote Management
Cisco FabricPath (introduced in 2010) is a neat technology that enables highly scalable, low-latency networks. You might guess that FabricPath would be at the forefront of data centers, but you'd be wrong! Last week I had the pleasure of presenting at ScienceLogic's Customer Symposium, where I discussed EM7's latest support for the Cisco Nexus product line and learned that none of our customers are actually using FabricPath at all! FabricPath is one of the technologies implemented on Cisco's Nexus product line that addresses some of the challenges in the data center market. The ScienceLogic network monitoring software now supports monitoring of this technology along with the other technologies specific to the data center. The data center presents some key challenges, such as:
- Moving from a siloed architecture to a fully virtualized, unified data center
- Unification of the network fabric (IP and storage networks)
- VM mobility – workloads move from static resources on fixed devices to mobile resources that can migrate to VMs in the same pod, to machines within the same data center, and to machines across data centers
- On-demand computing – resources are added to and removed from workloads as needed
- Cloud computing – enterprises can leverage data centers in the cloud
In order to address these challenges, Cisco introduced many new technologies on the Nexus product line including:
- Fibre Channel over Ethernet (and the DCB extensions required to support FCoE)
- Virtual Port Channels
- Virtual Fibre Channel
- Overlay Transport Virtualization
- FabricPath
- Fabric Extenders
ScienceLogic has long supported monitoring of basic switching platforms. Now, however, ScienceLogic provides the most comprehensive monitoring of the Nexus platforms by adding the capability to monitor these additional technologies needed in the data center.
For example, Virtual Port Channels (vPCs) provide the capability to split a port channel from a server across a pair of Nexus switches, thereby utilizing all the uplinks actively while providing fault-tolerant operation. Per Cisco best practices, vPCs are the cornerstone of connecting anything to a Nexus. The ScienceLogic platform is the only platform that lets you visualize how your traffic is being distributed across all the members of the vPC, as shown below:
Nexus also supports Fabric Extenders (FEXs), which are really just switches themselves, managed via the parent Nexus; that is, FEXs look like remote line cards on the parent Nexus. The ScienceLogic platform leverages component mapping to show these components of the FEX as follows:
The ScienceLogic platform provides comprehensive configuration information for these various technologies and includes event policies to flag any potential trouble with them.
While our customers may not currently be using Cisco FabricPath, they are most definitely using most of the other Nexus capabilities, and the ScienceLogic platform provides monitoring support for all of these technologies.
Tagged with: Cisco, data center, network monitoring software
As product managers we are charged with understanding the market. We search for insight into problems. Problems we hope the products we help develop will solve. For many of us, our customers are the market we serve, or at least represent a cross section of it. Solving problems for them also means solving problems for the market. So, how do you go about collecting this valuable insight? How do you go about understanding your customer’s current reality?
Last week at ScienceLogic we held our annual Customer Symposium, an event where we invite our customers to participate in two intense days of knowledge sharing and interaction with all sorts of people throughout the ScienceLogic organization. Our CEO, Dave Link, kicked off the event by reminding everyone in the room: we are here because of you – the customer! I am fortunate to work for a company that cares about its customers. That is one of the things I liked about ScienceLogic many years ago, when I too was a customer. And at ScienceLogic, we are all fortunate to have customers who are willing to share – their time and their insight – so that we can better understand their problems and our market.
I had many chances during the two-day event to discuss the projects I am currently working on with the people who will actually be using them. It was rewarding to receive confirmation that we are working to solve problems that matter to our customers and the market we serve. I also received numerous ideas about new problems they hope we can help them solve, along with enhancements we can incorporate into future releases. Getting feedback throughout the product lifecycle means that we can deliver a better product and more value. For that, I thank all of you who shared your insight with me.
During this event, we launched a new section of our customer portal called Answers, a "Stack Exchange"-style forum. Here customers can post questions, answers, and comments any time, day or night. The forum is visible to all our customers, and it is also monitored by many of us here at ScienceLogic, including the product management team. Answers provides one more way for us to interact with our customers. Ask questions, answer questions, and comment on posts. Vote for questions and answers you feel have value. If you are a ScienceLogic customer, I encourage you to post your questions and feedback. And I hope you will share some of your knowledge and experience with other users in the ScienceLogic community, just like you shared your insight with me last week.
To our customers, thanks again for attending symposium and for helping me to better understand your current reality!
Tagged with: ScienceLogic Customer Symposium
Picking up where we left off yesterday: VMware introduced its new Hybrid Cloud Service in March of this year. The offering allows VMware customers to migrate their cloud services to a public cloud, saving money in the move through shared equipment. The idea is to keep common management, orchestration and security models without changing existing apps. However, the solution does not support OpenStack, which makes switching cloud providers difficult. (Remember, this is in the context of the company's acquisition of Nicira, which made it a contributing OpenStack member.) Part of the reason for this contention is that OpenStack is effectively doing what VMware first did – creating an abstraction layer above the infrastructure – of which VMware believes ESX to be a core component.
Soon after that announcement, the company, along with mothership EMC, spun off a new platform called Pivotal. Pivotal brings together several teams and technologies from both VMware and EMC, including Greenplum's Hadoop distribution (now Pivotal HD), the Greenplum Database (fused with Hadoop as a new database known as HAWQ), CETAS, Pivotal Labs, the GemFire in-memory database, the Spring application framework, and the Cloud Foundry PaaS platform. The goal of the platform is to enable the new wave of predictive big data applications, with VMware playing the role of infrastructure provider.
The Inside Scoop
Since making the announcement, VMware has taken the stance that its hybrid cloud services will target the middle to higher tier of services, no more and no less, and its SLAs are expected to remain at that tier. The question is whether starting so far behind the curve with an IaaS offering can really appeal to an audience that has already been conditioned by AWS-like offerings in the market. Perhaps it's no coincidence, then, that VMware has partnered with Savvis, which is itself in a battle against managed service providers like Rackspace – the combined ecosystems could benefit both organizations.
AWS vs VMware
While AWS is pushing its users from the public cloud toward a hybrid usage model, VMware is effectively coming at this from the other side: pushing the enterprise to the cloud. In other words, it is telling its customers that it will take any apps they have and move them to the cloud, keeping them firmly on its own virtual infrastructure. AWS will make your app cloud-ready and has very much a utility/volume route in mind. VMware is not inclined to compete in the utility game because of the license overhead with which it has done so well in legacy enterprise infrastructure. The play is to keep embracing and dominating the enterprise space, while giving itself every opportunity to dip into the AWS party.
The fine line dance:
In essence, the vHCS announcement is VMware's attempt to stop the loss of blood to AWS and Microsoft Azure. However, with 4,000 participants in its VMware Service Provider Program (VSPP) representing about 85% of its revenue (via distribution), VMware is going to have to make these providers very comfortable with the new service. This is the same game Microsoft has been playing for years, increasingly marginalizing some providers and forcing the more assertive ones to move on to higher-level value-added services. It remains to be seen how this segment of the market will handle the strain of a vendor sitting on its customers' doorsteps with increasingly cloudy (service) capabilities.
From a ScienceLogic perspective, we won't ever undercut our partners, nor enter the managed services space. We've got you covered no matter what your technology of choice.
Tagged with: hybrid cloud, Hybrid Cloud Services
I recently watched a VMware representative being interviewed on Silicon Insider, and it quickly became obvious that VMware is in somewhat of an uncomfortable position when it comes to proclaiming its leadership in the cloud. For unquestioning enterprises, the line rings true: VMware dominates large enterprise IT environments (65% market share), and virtualization is the major tipping point underlying numerous cloud platforms today (though not the majority, since Xen is still the default choice for those with less cash in their pockets). However, for those of us who have known VMware as a software technology provider – abstracting away the management of underlying infrastructure for the enterprise – rather than a cloud platform service provider, its claim of cloud leadership is purposefully ambiguous.
How much cloud does VMware actually do?
The company held its annual VMworld event not too long ago, with the key theme of the conference centered on software-defined data centers; the two sub-categories of networking and storage were top of mind, closely followed by compute and the associated management platforms. The objective for VMware is to have everyone virtualize the rest of the data center network and IT components, and build a structure to support hybrid environments. Those hybrid environments are not necessarily about on-premise versus off-premise third-party data center deployments; rather, they refer to the mish-mash of storage devices and the management techniques used to manage that storage – which, for example, needs a single control plane. Offering the market more intricate technology is what VMware does well, but that is a different task from actually operating a cloud platform.
With the big announcement of VMware's Hybrid Cloud Service (vHCS), questions moved quickly to its commercial aspects. How are instances obtained on this platform? How does the pricing work? What SLAs will VMware commit to? (The company has promised best-of-breed SLAs – no simple task given the number of cloud providers out there.) According to 451 Research, there are more than 250 vCloud IaaS providers currently active. No wonder, then, that VMware has hedged its bet and expanded the partnership options for vHCS to include a "franchise" option.
Although service providers played second fiddle to everyone else in VMware's world for numerous years, the company has slowly been pandering to the fastest-growing consumers of IT infrastructure. vHCS is being operated out of third-party data centers (read: Savvis) in San Diego and Sterling, Virginia, by hosting operators for VMware, with the live services tied to VMware's portal for users. Not for the first time, Savvis is the guinea pig for such a franchised service, in the hope that deployments will spread to more Savvis facilities.
Is it really new?
Aside from the fact that the new vHCS is missing things like object storage, is this really a new concept from VMware? It was just a couple of years ago, in 2011, that the company launched its Cloud Foundry service – an open source project that made use of numerous different clouds, developer frameworks and application services to provide an open platform as a service (PaaS). The idea was to allow easy deployment of applications written using Spring, Rails and other modern frameworks. And at VMworld, VMware ironically continued to push the concept of OpenStack support (from a Cloud Foundry perspective), although CEO Pat Gelsinger doesn’t expect it to catch on in the enterprise space so much as the MSP space – an area that VMware wants to penetrate further.
Tomorrow, in part two of this blog post, we'll fast-forward to March of this year, when VMware introduced its new Hybrid Cloud Service, allowing its customers to migrate their cloud services to a public cloud and save money in the move through shared equipment.
Tagged with: hybrid cloud, Hybrid Cloud Services
Lately, I have been musing about the Microsoft systems and application monitoring challenges faced by different types of organizations, like service providers, enterprises, and government entities. But recently, a GigaOM Research discussion entitled Hey, CIOs: In a BYOD world, your new job is service provider caught my attention, because the panelists postulate that the new role of the CIO is "almost like a mini service provider." In some ways, this theory was old news to me. From my experience, the IT departments of many organizations function just like service providers. They provide shared resources like network infrastructure, data center space, and data protection services, as well as knowledge and oversight to their constituents. However, there has always been some contention when requests fall outside our areas of expertise and experience. What usually happens in your organization when an application requires a different database, for example? I have seen us push back or exert a level of control over what can and cannot be supported. When we have no other choice, we must augment staff and tools to accommodate the new "whatever," which takes time – a lot more time than it takes for users to subscribe to a service.
However, the panelists suggest this illusion of control has been disrupted – or, as one panelist put it, "that battle is completely and utterly lost." Lost to "bring your own device" and consumerization. And the reality is, they are right. Users can buy cloud infrastructure and software-as-a-service with just a few clicks of the mouse. Users expect us to support mobile devices, tablets, and the latest and greatest gadget that arrives in the marketplace. They expect to be able to work from the office, from home, and from the coffee shop down the street. And with these "cloud" services, they can. So, where does that leave us? Where does that leave the IT department? The answer is simple: even though these services are outside of our control, they are still our responsibility. Like it or not.
Everyone is happy with these "information services," as one panelist referred to them, until something goes wrong. It is at that point that they will definitely become our responsibility. So, do yourself a favor: monitor these services, back up the data, and have a recovery plan…just in case.
Tagged with: BYOD, consumerization of IT, service monitoring
The hypervisor war is over. Layers of abstraction get created to make workload interoperability a turn-key reality, like electricity in our homes. It doesn't matter whether the power was generated by hydro, nuclear, coal, or solar, or whether it was generated in Oregon or Virginia. We just want to pay for our power. The power companies don't give you an option, and you never really know where the kilowatts come from; the customer is so removed from generation that it doesn't matter. It's the same with hypervisors: users don't care what kind of hypervisor it is, they just care that whatever they need to plug into it will work. OpenStack, VMware, AWS – pick your flavor; it doesn't matter, as long as the workload works and customers can pay as little as possible for it to work as designed.
Right now you have this:
Everyone is saying their power is better because of x,y,z. ScienceLogic is the pole that enables the IT manager to bring it all together. Eventually, we hope you go from above, to this:
The power generation companies buy and sell workloads all the time. So until the abstraction layers can truly move workloads among the players, we are going to be in a war of compute workloads. This is why I believe VMware decided not to be just a software company but a cloud provider, so it can be one of the compute generation centers – the AWS of the future. The other challenge with OpenStack is that you are relying on someone (whether yourself or a service provider) to provide that compute generation workload, whereas AWS and, only recently, VMware provide you the whole solution. OpenStack is trying to be the hypervisor replacement, but I argue again that it isn't about the hypervisor as much as the interoperability of the compute workload. Hence, the hybrid cloud is the battlefield, and the one that can move the workload anywhere first, faster, and cheaper will win the war.
Tagged with: AWS
Today, we continue our discussion on Microsoft systems and application monitoring with John Proctor, one of the product managers here at ScienceLogic.
Q: John, what are some of the challenges faced by Service Providers when it comes to monitoring Microsoft systems and applications?
A: Service providers have always been unique when it comes to monitoring tools. Some of the things enterprise tools take for granted don't exist in a service provider's world. For example, there is no such thing as one domain or one organization in the service provider's world. In most cases, every customer represents a different domain, which could mean dozens or hundreds of unique credentials. We refer to this as multi-tenancy, and credential management is just the first part; security and access control are also important parts of multi-tenancy. Many of the tools out there were not designed for service providers, so the service provider's customers have limited visibility into the tool – if they have access at all. In those cases, the service provider has to bolt on additional applications to provide their customers with a portal, which means additional expense and complexity just to offer a comparable solution in the marketplace. At ScienceLogic, our founders are all "recovering service providers," and many of our employees, including myself, are recovering service providers too. So, we keep the unique challenges of the service provider in mind. In the case of multi-tenancy, our product has been multi-tenant since its inception more than 10 years ago.
Q: Are there any other challenges unique to service providers?
A: Yes, access to the devices to be managed. In an enterprise, everything is connected. So, deploying a management tool is as simple as dropping a server or application behind the firewall and letting it traverse the existing network infrastructure. For a service provider, the devices to be managed are never in one place; they are in many places. And, they are never connected. Whether the services are being delivered within a data center, to a remote location, or in the cloud, every customer is isolated for security. Service providers must have tools that enable them to deliver their services wherever they are needed while maintaining security.
Q: What has ScienceLogic done to help service providers overcome the challenge of isolation and security?
A: One of the first steps we took was to offer secure distributed collection. When we first started ScienceLogic over 10 years ago we offered a single appliance. However, we quickly saw the need for a distributed model and within a year of starting the company we enhanced EM7 to support a distributed architecture. This allowed our service provider clients to collect information from many locations, like behind a customer’s firewall, in the cloud or from a dedicated management network within a multi-tenant data center and then securely transfer that information back into a central repository. This allows them to manage everything from a single interface regardless of where the managed device is located or to whom it belongs. This is becoming even more important as adoption of virtualization and cloud computing accelerates.
Q: How does this specifically apply to monitoring Microsoft systems and applications?
A: Microsoft products are a huge target for hackers and receive a lot of attention when vulnerabilities are discovered. Therefore, monitoring a server in a remote location requires security. In the past, this has been challenging with protocols like WMI: since WMI communicates on a random high port by default, limiting access is difficult. One solution for these situations is to deploy a collector behind the firewall. Another is to leverage PowerShell. Ever since Microsoft introduced PowerShell as its preferred management interface, securing the communication has become much easier.
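To make that concrete, here is a minimal sketch of remote collection over PowerShell Remoting using HTTPS on a single well-known port (5986), which is far easier to allow through a firewall than WMI's dynamic high ports. The host name and credential are hypothetical, and the target must already have an HTTPS WinRM listener configured:

```powershell
# Collect from a customer-owned host over a single, firewall-friendly HTTPS port.
$cred = Get-Credential 'CUSTOMER-A\svc-monitor'

Invoke-Command -ComputerName 'web01.customer-a.local' `
               -UseSSL -Port 5986 `
               -Credential $cred `
               -ScriptBlock { Get-Service -Name 'W3SVC' | Select-Object Name, Status }
```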
Q: What about Microsoft systems and applications hosted in the cloud?
A: To service providers, the cloud looks a lot like any other remote environment. In many cases, these devices are monitored just like any other device, using collectors and agents. In other cases, we have encountered a few challenges with these traditional approaches. For example, if we monitor an instance of Windows Server delivered from Amazon's cloud using a remote collector and agent, the monitoring traffic adds to the bandwidth cost of the instance. To offset this, Amazon provides an API that we can use to collect usage statistics, but we found that it limits the number of requests. In order to provide similar visibility for these cloud instances, we developed a proxy for these requests as well as a version of our software that can run as an Amazon instance and be installed behind the firewall in the cloud.
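For readers who want to experiment with that API themselves, here is a rough sketch of pulling an instance's CPU utilization from CloudWatch instead of from the guest OS. It assumes the AWS Tools for PowerShell are installed and credentials are configured; the cmdlet and parameter names come from that toolkit and may vary slightly by module version, and the instance ID is hypothetical:

```powershell
# Pull one hour of average CPU utilization for an EC2 instance from the
# CloudWatch API. Cmdlet name assumes the AWS Tools for PowerShell
# (older installs: Get-CWMetricStatistics in the AWSPowerShell module;
# newer modular installs expose it as Get-CWMetricStatistic).
Import-Module AWSPowerShell

$dimension = New-Object Amazon.CloudWatch.Model.Dimension
$dimension.Name  = 'InstanceId'
$dimension.Value = 'i-0123456789abcdef0'   # hypothetical instance ID

Get-CWMetricStatistics -Namespace 'AWS/EC2' `
                       -MetricName 'CPUUtilization' `
                       -Dimension $dimension `
                       -StartTime (Get-Date).AddHours(-1) `
                       -EndTime (Get-Date) `
                       -Period 300 `
                       -Statistic 'Average'
```

Because the API is rate-limited, requests like this are best batched and cached rather than issued on every poll, which is one reason a proxy in front of these requests makes sense.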
Thanks, John, for sharing some of the challenges faced by service providers. Please join us next time when we will explore the challenges faced by enterprises when it comes to monitoring Microsoft systems and applications.
Tagged with: application monitoring, Microsoft systems, Network Monitoring, Service Providers
Here at ScienceLogic we are constantly looking for ways to help our customers. We realize that some of your bosses may be trying to find reasons for not attending our annual Customer Symposium. Never fear, we're here to help. We met in a conference room, locked the door, and opened a case of Red Bull. Here is the result:
You DON’T need to network with other ScienceLogic customers and share ideas.
You DON’T need to learn new ways to take full advantage of your ScienceLogic investment.
You DON’T need to learn what’s new with IT monitoring from ScienceLogic.
You DON’T need to learn about how EM7 can support your use of Cisco’s Nexus platform.
You DON’T need to pick up powerful, practical ways to use our Smart™ Actions enhanced RBA.
You DON’T need to learn about new and exciting ways to use our True™ Multi-Tenancy.
You DON’T need to meet one-on-one with key technical experts to answer any questions you may have.
You DON’T need to build the valuation of your business.
You DON’T need to capitalize on the key capabilities we provide for managing the dynamic data center.
You DON’T need to hear the keynote address by Richard Plane, CTO of Data Center and Cloud Practice at Cisco.
We’re a bit biased, but we think the reasons to attend far outweigh the reasons not to. But really, what could be better than attending a FREE Symposium? We look forward to seeing you there!
Customers with a current maintenance agreement can register for free at: http://signup.sciencelogic.com/customer-symposium
Tagged with: Customer Symposium