Since it was announced publicly on April 7, the Heartbleed bug (also known as CVE-2014-0160) has caused quite a stir in IT circles. Some 60-70% of web-based applications are suspected to be, or to have been, at risk. The furor has even called into question the code quality of open source software (which, by most measures, actually compares favorably to proprietary software overall). ScienceLogic reacted quickly to the news about Heartbleed and determined that all currently shipping and supported versions of EM7 are not vulnerable.
Where is SSL used in EM7? It’s used between the collectors and the CDB in conjunction with MySQL (both CU and MC components). As with a lot of open source software, there are a number of variants that allow us to choose the right mix of performance, function, and tried-and-true reliability. In 7.3.6.x and earlier versions we didn’t use OpenSSL, but rather yaSSL, which is not vulnerable to Heartbleed. Starting in 7.5 we will be using OpenSSL, and as we readied the GA release, we upgraded the MySQL version that ships with EM7 to the most current version, 5.6.18, which includes OpenSSL libraries that address the vulnerability (version 1.0.1g).
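For readers triaging their own systems: only OpenSSL 1.0.1 through 1.0.1f carry the Heartbleed bug; 1.0.1g and the older 1.0.0/0.9.8 branches are unaffected. Here is a minimal sketch of that version check in Python (the function is ours, for illustration; checking a version string is of course no substitute for actually patching):

```python
import re

def openssl_heartbleed_vulnerable(version: str) -> bool:
    """Return True if an OpenSSL version string falls in the Heartbleed
    range: 1.0.1 through 1.0.1f. 1.0.1g and later, and the 1.0.0/0.9.8
    branches, are not affected."""
    m = re.match(r"1\.0\.1([a-z]?)$", version.strip())
    if not m:
        return False  # not a 1.0.1x release, so not affected
    # Patch letters sort alphabetically: "" (plain 1.0.1) < "a" < ... < "f" < "g"
    return m.group(1) < "g"
```

For example, `openssl_heartbleed_vulnerable("1.0.1f")` is `True`, while `"1.0.1g"` and `"0.9.8y"` both return `False`.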
Similarly, when we first learned of the vulnerability, we also looked at our public-facing customer portal hosted here at ScienceLogic (portal.sciencelogic.com). The portal was vulnerable (like many websites), but we immediately changed the OpenSSL version to eliminate the vulnerability and, in keeping with best practices, we re-keyed the ScienceLogic certificate – not just for the Support portal, but the wildcard certificate (*.sciencelogic.com) for all of our web-based services used internally and with our partners.
Back to the EM7 product for a moment: keeping a strong security posture is a constant battle, but it’s one that we’re committed to. An example of that commitment is our regular penetration testing performed by an independent third party. This is essentially “white hat” testing that we pay for to ensure EM7 always meets industry best practices. For our customers, a copy of the testing report is available upon request. Additionally, we’re in the final stages of JITC certification (http://jitc.fhu.disa.mil/), which enables our customers to offer an unparalleled security posture – one required by many US Federal Government agencies.
Tagged with: EM7
ScienceLogic was recently featured as one of Software Advice’s top six IT Asset Management User Interfaces (UIs).
Given the breadth of visibility our network monitoring software provides, we aim to make visualization into your IT stack clear. We do this through charts, graphs, and reports that have simple drill-downs if you want more detailed information.
Example of a network monitoring dashboard
Victoria Rossi of Software Advice said, “We chose ScienceLogic as one of our favorite UIs because it offers so much information on a single screen. It was a big win for us both in terms of breadth and clarity of communication.”
In her article, Rossi points out that the visual organization of ScienceLogic dashboards allows users a thorough but still easy-to-grasp overview of their systems. We’ve seen other network monitoring products provide a wealth of information but display it in ways that are almost unintelligible. At ScienceLogic, we perform UI testing to ensure our customers can see exactly the information they need at a glance.
“We also like that users can custom-design their own dashboard—this ensures that the information they see is relevant and intuitive for them,” continued Rossi.
In fact, we’ve made our UI so easy to use, you can build a custom dashboard in less than five minutes. This is one of many reasons we’re chosen over the competition time and time again.
Request a demo to find out more about our network monitoring platform.
To see more examples of network monitoring dashboards, view our dashboard gallery.
Tagged with: dashboards, network monitoring software, user interfaces
Given all the movement around cloud adoption lately with Amazon Web Services (AWS) and Google Compute Engine (GCE), I was wondering if VMware had thrown in the towel. Pat Gelsinger, CEO of VMware, spoke about VMware’s approach, building on last year’s announcements around VMware NSX and the move toward a Software-Defined Data Center (SDDC). Pat focused on four key elements that enable the SDDC:
- Virtual Compute of ALL applications: The idea that applications have to be more flexible and resilient in order to be mobile and elastic.
- Storage has to align to the demands of the applications: When apps need more I/O or more capacity, the storage needs to be able to predict and auto-adjust to the needs of the applications.
- Virtualize the network for speed and efficiency: If compute and storage are more agile and dynamic in nature, the network can’t be the bottleneck that keeps applications from moving to get the resources they demand.
- Management: Tools enable automation.
The elements that VMware has clearly focused on are compute (ESXi hypervisor), network (NSX), and storage (vSAN). However, the area that Pat breezed by was management. VMware has tools for automation and orchestration, yet the management suite always seems to be the last element worked on. Just because you abstract and automate doesn’t mean management and visibility of resources become less important; I would argue they become much more important. Workloads under full automation without visibility and oversight can spin up or down out of control, costing money and priceless time. Don’t get me wrong: I understand VMware and others are building tools to address this issue. But I believe the fundamental difference with vendor-specific tools is exactly that – they are focused on the vendor’s own technology and nothing more, and that focus leads to visibility gaps. I believe ScienceLogic EM7 provides a much better view into your workloads, no matter whose technology you use or how far along the adoption path you are.
Near the end of his keynote, Pat shifted to one piece of technology that, I agree with him, is a huge advantage for VMware’s position in cloud adoption: “Hypervisor becomes the Ubiquitous Enforcement Layer.”
The software layer has always provided context around the workloads, and the hardware layer has always provided the isolation. The hypervisor provides the best of both worlds: it allows for a secure and visible plane for security across the SDDC. The outstanding question for me is whether this refined layer gets us closer to cloud interoperability or enhances vendor lock-in.
Tagged with: Amazon Web Services, Google Compute Engine, Software-Defined Data Center, virtual compute
You know you work for a growing company when one of its senior trainers still has bags under his eyes from his across-the-world trip to train some of our customers in Australia and Singapore.
This here, ladies and gentlemen, is the first blog I am writing as ScienceLogic’s Customer Advocate Specialist. Every day, I work alongside the people who put together the fantastic training programs that ScienceLogic offers.
Our most popular course is the ScienceLogic Certified Professional class, and over the past year our professionals have taught it at our Reston, Virginia headquarters, in Southern California, and most recently, in Melbourne, Australia and Singapore. In just the one year that it has been offered, we have had hundreds of participants.
The ScienceLogic Certified Professional Training course lasts three days and is beneficial for any user of the EM7 system. Most of the participants are engineers, according to Eric Chambers, one of our Senior Technical Trainers. They are usually the engineers who have the responsibility for meeting the monitoring requirements in their environments.
Rome wasn’t built in a day, and EM7 certainly wasn’t built in three. EM7 users know it is a big, comprehensive product, so some people may ask what they could possibly glean in just three days. So I asked Eric Chambers, what exactly is the goal for participants at the end of the course?
“The goal is to make the customers aware of all the features available on EM7 and how they work together,” he said, adding that it is comprised of a lot of hands-on exercises through participants’ very own lab EM7 systems. Many of these engineers work with some of our features on a daily basis, so it is great for them to discover everything else the tool can do.
To date, Chambers said he hasn’t been disappointed with any of the classes. Asia-Pac may be a different terrain, but all EM7 users aren’t lacking in the brain.
So, while you may want to attend the class to hear Eric’s tales of the beaches in Australia and the fried carrot cake in Singapore, there just won’t be time for that. There is a lot to cover in a little bit of time.
For that reason, ScienceLogic is actually in the planning stages of offering an Expert level course, which in the words of Chambers will “dive more in depth on how to set up a lot of the features and monitoring in the product.” This class is supposed to debut later in 2014, and we will most certainly be keeping all of our customers posted.
Tagged with: network monitoring training, ScienceLogic Certified Professional, ScienceLogic training
Monday we received the email that “The trucks are on their way to LV.”
I got the feeling Samuel L. Jackson must have had in the Jurassic Park scene where he says “Hold on to your butts” as he cycles the power to reboot the computer system and get the park operational again, all the while hoping a raptor wasn’t going to pay them a visit while the electronic doors were unlocked. “Thanks Newman!” (actually Dennis Nedry, played by Wayne Knight)…
OK, there are no raptors that will eat the InteropNET team of engineers if something goes wrong, but that one email triggered lots of memories. So, what does this mean? You see, the Interop show has an immovable deadline, and it is this weekend. 15,000 or so hi-tech attendees and hundreds of vendors with the latest bleeding-edge gear are going to show up in Vegas starting Sunday, and it all just has to work. No other option exists. Registration, classrooms, trade show vendors, wireless access – all of it. You might be thinking, “So what?”
Well, you see, today it is just an empty convention floor with no booths, no network and no network gear.
Today, both vendor and volunteer network engineers are flying/driving into Vegas, which is when the real fun begins.
- Unload a semi-truck full of network gear and supplies
- String miles of fiber optics and copper cabling (transport and booth drops)
- Connect the LAN
- Connect the WAN
- Connect the IDF racks
- Set up wireless
That covers the high-level physical stuff. But for ScienceLogic, it’s all about providing visibility and feedback. From the first moment of network connectivity, we become the eyes and ears of the network: assisting the team during setup by validating that all equipment is up and responding properly, checking network connectivity and availability, and watching key device and interface metrics to catch bad connections or dirty fiber. While this is going on, we’ll be finalizing our dashboards for the audience and booth visitors to see. These views are both for production use by the NOC engineers and for attendees to get a sense of the equipment and the monitoring we are performing.
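As an illustration of the arithmetic behind those interface checks, link utilization is typically derived from two successive SNMP octet-counter samples. This is the generic textbook formula, not ScienceLogic’s internal implementation:

```python
def interface_utilization(octets_t1, octets_t2, interval_s, speed_bps,
                          counter_bits=32):
    """Percent link utilization from two SNMP ifInOctets/ifOutOctets
    samples taken interval_s seconds apart, tolerating one counter wrap."""
    wrap = 2 ** counter_bits
    # Modulo handles a single counter rollover between the two samples.
    delta_octets = (octets_t2 - octets_t1) % wrap
    return 100.0 * (delta_octets * 8) / (interval_s * speed_bps)
```

For example, a 1 Mbps link that moved 1,250,000 octets in 10 seconds is at 100% utilization; for 64-bit ifHCInOctets counters, pass `counter_bits=64`.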
Interop VMware: (SFO vCenter)
What is it that we monitor with automated Event Management and notification for the NOC team? Everything! Power, temperature, UPSs, switches, routers, servers, systems, network services, web sites, DNS, virtual machines, hosts, storage, AWS, and more. ScienceLogic is proud to be here for our 6th year, and last year we won Best of Interop. There are many reasons for the win and being chosen to be the eyes and ears of Interop, but I feel it boils down to being able to monitor everything anyone puts on the wire or in the cloud and do it all in about a week. We are told that others have tried, but no one other than ScienceLogic has ever done it.
This year at the show, we will be announcing a revolutionary new technology that will link the public, hybrid, and traditional data center together for true service delivery. I’m not going to spill the beans here, but please visit our booth, come on a NOC tour, or watch for our next press release to hear more about it.
See you in Vegas!
Tagged with: Interop, network operations
Last week we hosted a webinar with our CTO Antonio Piraino, and 451 Research’s Michael Coté. The webinar covered how to turn your ugly duckling IT operations into a swan. You can watch the recording on-demand or review the top 10 signs below:
Tagged with: 451 Research, IT operations
I am never quick to accept an invitation to a webinar. We only have so many hours in a day, and it would take something very relevant for me to close out of my e-mail completely and ignore all of my incoming calls for an entire hour.
I had the pleasure of observing a test run of ScienceLogic’s upcoming “Ten Signs Your IT Ops are Really Ugly” webinar, hosted by our CTO Antonio Piraino, and 451 Research’s Michael Coté. It is definitely worth tuning into, and not just because the graphics on the slides are fun and Coté is a former coder turned comedian.
As indicated in the title, Antonio and Coté will address the common red flags in networks everywhere – signs you may need a new approach in making your network more efficient, and well, prettier. Both Antonio and Coté have sat through their fair share of webinars that went on just a little bit too long. And, Coté himself said he never wants a webinar to turn into a “technology is awesome” lecture.
Chances are, when you tune into the webinar, you will discover that your network is indeed ugly. Buying a management tool isn’t easy – these are big, comprehensive tools, and at times very pricey. And actually deploying the product is rarely seamless. Many networks have a whole slew of management tools, all made by different manufacturers, so of course they don’t all fit together perfectly when installed initially – and maybe not ever. The webinar will swiftly move through all the signs that you may need to reinvestigate your IT operations.
Coté and Antonio won’t tell you what to do to solve your problems – they know we don’t have all day. I like the way Coté put it: the bullet points the hosts go through are like “bookmarks for the people to go and investigate more.”
So, bottom line: if you are not 100% certain that your network is clean, perfect, and pretty, then you should definitely be logging into this webinar on Tuesday at 11 a.m. EST. If any of your best and most talented engineers are wasting their precious time and brainpower fixing everyday, routine network problems, you will definitely find this webinar relevant.
And, did I mention the cool graphics? Here’s a sneak peek of one:
Tagged with: 451 Research, IT operations
As the Holiday Season is wrapping up and we get ready to head into 2014, I got to thinking about what is in store for us ahead. By us, I don’t just mean ScienceLogic; I mean all of us – the entire cloud market. After all, that’s where the future is headed, isn’t it? To a place where everything is stored and run inside the cloud? You may not have totally jumped on that bandwagon just yet, but you probably still agree that the cloud will soon be a critical aspect of the technology world.
With this in mind, I turned my attention to ScienceLogic’s 2014 Cloud Computing Predictions Survey results that were released earlier this month. As a former analyst (programmed to examine, evaluate, and expose facts), I have decided to share my findings with you.
A good starting point for predicting the unpredictable (as is the future of any market) involves looking at what the participants of the market plan to do. 50% of the industry said that they will be increasing their overall IT spending in 2014, while only 15% reported a decrease in IT spending; in other words, three times as many people will spend more on IT in 2014 than spend less. I would suggest that this increase represents the biggest spending increase across a company’s entire budget: a definite positive signal for our market.
Next I’d like to probe a little deeper, into the less comfortable and less examined zone of the individuals: the actual people out there who are making these decisions. When asked if they will personally make more or less money in 2014, almost 50% of people said that they expect to earn more. But just think about that question for a second. Do you actually know how much you’ll have earned 365 days from now? Do I know? I certainly don’t. Yet half of respondents were confident that they’ll be hitting higher numbers next year. There’s a phenomenon often observed over the years: if you ask the industry a financially motivated question and the majority answer favorably, that event tends to happen. What’s more, if people believe they are going to earn more, they typically have a sense of better job security. More job security means that they can afford to take the higher risks that tend to have a higher reward. And the prediction becomes a self-fulfilling prophecy.
This raises the question of compensation – are these people fairly rewarded for the work that they do? 42% of people believe that they are currently underpaid in their position – and nobody wants to stay in an underpaid job. Yet at the same time, 50% of people believe that they’ll earn more next year (i.e. will no longer be underpaid). Apparently people think that things are generally getting better, leading to the belief that next year will be better for them.
In contrast to that optimistic view, there’s still a quarter of the workforce that is more worried about their job security going into 2014 than they were a year ago. It appears that the market is split: people are either very confident, or very not confident. The question is, why? If half of the industry thinks that things are getting better money-wise, then why are these 25% worrying about their job security so much?
I’d hazard a guess that this fear stems from a questioning of abilities – less than half of people feel very well educated in the technologies required for them to do their job well, according to the survey results. That means over 50% of the people making cloud computing decisions do not feel adequately prepared to do their job.
Does that seem a little (or perhaps a lot) worrisome to you? Perhaps not, if you are in the minority of technologically-savvy cloud decision makers. But for the rest of you, I’d be concerned. It’s the same self-fulfilling prophecy as the one I mentioned above – if you don’t feel prepared to succeed in your role, you’re probably not going to succeed.
So in 2014, everyone is increasing their budget by 20% and increasing spending on IT – yet the guy who is making those spending decisions does not even feel well-educated on the subject. That sounds like a recipe for disaster. Playing it safe would mean shying away from these innovative technologies in favor of the tried-and-true (or is it outdated and obsolete?); but in doing so, you run the risk of becoming outdated yourself. Is there an opportunity to save yourself from the embarrassment of losing face (or worse) from the fallback of a poorly educated decision?
It’s a little unrealistic to expect a busy corporate executive to find the time for formal education in becoming a cloud expert, but there are shortcuts out there that remove the educational middleman. They come in the form of tools that provide the necessary confidence for proper decision-making. These tools collect and analyze data from all the different aspects of your infrastructure and present the results in a way that’s understandable even without an engineering or computer science degree. These easy-to-use management platforms (like ScienceLogic) remove the confusion and help you wrap your head around a system you may not understand much about. In other words, they take you from inadequate to empowered and ready for more innovative work.
Now when the next big technology arrives, you’re no longer jumping off a cliff into the unknown – rather, you’re just jumping off a diving board into the pool. You may not be sure whether the water will be icy cold or a balmy 80 degrees, but you can rest easy knowing that you aren’t going to hit your head on a shallow rocky bottom. All it takes is a tool that does the educated part and spits out the information you need for the comfortable and confident part – which in turn makes operating in the cloud, or any new technology area, a more confidence-inspiring experience.
So here is your final leaving point – you don’t have to be bold enough to jump off a cliff, but you should be staking out the area. The time will come when you need to be in that water, but it will be a much easier transition if you’ve already discovered a much lower diving board.
Tagged with: 2014 Predictions, Antonio Piraino, budget increases, cloud computing, cloud predictions, IT budget, IT spending, personal salary, ScienceLogic Survey
Happy Holidays Everyone!
As this year comes to a close, it is a fitting time to recap the year and what is now a monumental 10 years since starting ScienceLogic. Looking back, it has been an incredible journey thus far, but as incredible as it may seem, it still feels as if we are just getting started on the next and most important leg of our continued business acceleration. As I was performing as Santa Claus for the 5th year in a row at our ScienceLogic Gift Exchange & Cookie Swap holiday event last week, it reminded me how engaged, inspired, and aligned our entire ScienceLogic team is today. As we attack an ever-evolving set of business, customer, and product goals, we continue to be energized by the belief that we are building the industry’s best fully integrated monitoring/management platform. From the very inception of ScienceLogic 10 years ago, the thousands of interesting use case scenarios embedded in the product have been bound by the common thread of building a technology that improves the lives of our customers.
As we head into 2014, the IT industry is undergoing a tectonic shift toward cloud computing. As storage, servers, horizontally scaling apps, SDN, security, and the Software-Defined Datacenter become more inextricably connected and integrated, the demands on IT to be proactive are more complex than at any other moment in time. So my recommendation is to stop worrying about whether you have a true cloud and focus on extending your monitoring/management tools across internal and external compute/application infrastructure so you achieve the cost, resiliency, and capacity benefits that seize upon the promise of cloud transformations.
The best is yet to come from ScienceLogic as we head into the New Year with the greatest momentum in our storied history. We will continue to align our core strategic and tactical initiatives with our goal to deliver spectacular business outcomes for our customers. From the entire ScienceLogic team we wish you a blessed holiday and a thriving New Year!
Tagged with: happy holidays, Network Monitoring
Last week, Gartner offered us a free ticket to their Data Center conference in Las Vegas. Who can pass up a free ticket to a major Gartner event, right?! The fact that it was in Las Vegas added some enticement, as I needed to cash out some of my old gambling investments from the past. After the event, as I was flying back home and thinking about the trip, two things were fresh in my mind. First, losing only a crisp $10 bill at the slots, rather than the usual amounts, made me feel pretty good. Second, the most interesting things I learned from meeting with analysts and vendors and attending sessions. Here are the top five items from my many pages of notes.
1. People still get locked into the Big Four.
According to a poll taken at the conference on current and desired future monitoring platforms, 40% of the Big Four’s current customers attending the conference want OUT (in other words, they would like to use other options in the future). In contrast, every other vendor option increased in the percentage of attendees who would like to use it in the future compared to the percentage currently using it. This is just further proof that the Big Four are still locking customers in while those customers want to get out.
2. The future is all about custom monitoring
According to one of the analyst sessions I attended, although the underlying basis of IT infrastructure may be the same for most organizations, each organization is different at the business level. Therefore, since we all work in differing environments, custom monitoring – which can include custom dashboards, custom analysis, and custom data collection – becomes an important necessity. I was pleased to hear this, because ScienceLogic is focused on making it super easy to create custom dashboards and reports, as well as to build integrations with other data sources via our RESTful API. And if one were looking into Business Service Management (BSM) as a way to manage IT, then ScienceLogic’s custom analytics, grouping, and views are a perfect fit.
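As a rough sketch of what such an integration looks like, here is a Python snippet that builds an authenticated JSON request against a REST API. The hostname, credentials, and the /api/device path are purely illustrative assumptions – consult the actual EM7 API documentation for real resource paths and authentication details:

```python
import base64
import urllib.request

def build_api_request(base_url, resource, user, password):
    """Build (but do not send) a GET request for a JSON resource,
    using HTTP Basic authentication. Paths and credentials here are
    illustrative, not documented EM7 endpoints."""
    req = urllib.request.Request(base_url + resource)
    token = base64.b64encode(f"{user}:{password}".encode()).decode()
    req.add_header("Authorization", "Basic " + token)
    req.add_header("Accept", "application/json")
    return req

# Hypothetical usage; urllib.request.urlopen(req) would send it.
req = build_api_request("https://em7.example.com", "/api/device",
                        "em7admin", "secret")
```

From there, `json.loads()` on the response body would yield device records ready to merge with other data sources.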
3. Agentless monitoring is still king
The new focus in monitoring for Gartner is ‘availability monitoring,’ and one of the key ingredients for this is agentless technology. The main benefit of agentless technology is that it removes the requirement to install, update, and maintain additional software on every computer from which data collection is needed – essentially, a huge boost in efficiency. I was glad to hear this too, because ScienceLogic has been agentless from day one and will always be a single platform for monitoring infrastructure, the cloud, and ‘the Internet of Things’ – end-to-end in a single dashboard.
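To make ‘agentless’ concrete: an availability check needs nothing installed on the target, just a protocol-level probe from the monitoring side. A minimal sketch in Python – a bare TCP reachability test; real collectors layer SNMP, ICMP, WMI, and the like on top:

```python
import socket

def port_is_open(host, port, timeout=2.0):
    """Agentless availability probe: attempt a TCP connection to
    host:port. Nothing runs on the target; we only observe whether
    it answers within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```

For example, `port_is_open("example.com", 443)` checks that an HTTPS endpoint is accepting connections, with no agent on the far side.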
4. Monitoring the cloud, from the cloud
A number of companies are now providing software services from the cloud – ServiceNow, New Relic, and even ScienceLogic. It’s amazing how just a few years ago no IT organization would even consider hosting its internal IT data in the cloud; I guess they’ve realized that if Salesforce.com can store company revenue, customer, and client data in the cloud, then IT data can also live in the cloud. My guess is that there will come a day when pretty much every IT tool is cloud-based, because it’s easier, less expensive, and the IT staff don’t have to deal with managing in-house software.
5. Application Performance Monitoring (APM) is still very much needed and required by Gartner clients
In the midst of all these shifts toward the future, it’s important not to forget the ‘tried and true’: APM. Today, there is still much more money spent on APM than on Service View (i.e., BSM) and Service Management. While clients may be asking Gartner for service views, at the end of the day they still want to monitor the applications that support a particular business service. So the next time you see a Gartner report about BSM, it most likely will fit your APM needs just as well. As for ScienceLogic and APM – yes, we’ve got that too!
Tagged with: APM, Big Four vendors, cloud computing, custom monitoring, Gartner Data Conference, Gartner Data Conference 2013, Gartner DC, Gartner DC 2013, Las Vegas, monitoring vendors