I had the pleasure of attending Cisco Live again last week in San Francisco. Last year, I focused on attending ACI sessions and wrote a short blog on what I had learned about ACI from Cisco Live. At that show, very few people seemed to know what ACI was. In contrast, this year everybody knew what ACI was, and many were investigating the technology for a future rollout. ACI sessions were packed; many of the ones I attended had wait lines for attendees who hadn't reserved a spot.
This year I thought I would write about our experiences working with ACI over the last two months. As a Cisco Partner, we built a monitoring solution for Cisco ACI. ScienceLogic provides the industry's most comprehensive monitoring tool, delivering visibility into the entire IT stack. Support for ACI is just one more piece of the complex IT puzzle.
Some of the key items that really helped us develop a solution for ACI are as follows:
- dCloud – We started our project using Cisco dCloud, a really cool virtualized lab environment with support for many technologies. For ACI specifically, there are seven different environments to select from. Full access to all components is provided via a VPN, which enabled us to develop 80-90% of our solution. We needed a physical system only when the actual fabric had to be in place so that the attached endpoints could be discovered. The simulator might also have handled this, but not when operating in the dCloud environment: LLDP from the server to the leaf switch is not supported there, and the leaf switches need it to discover the endpoints.
- ACI Simulator – The ACI Simulator is a fully functioning ACI system that simulates the APIC along with two leaf and two spine switches. The simulator was fully functional and really enabled us to quickly learn both the ACI technology and the API. Because the simulator runs the production APIC software, whatever works on the simulator works on production APICs, and we had no trouble moving from the simulated environment to a physical one. The simulator also provided a mechanism to insert faults and alerts, which really helped in integrating that aspect into our monitoring system.
- APIC and the Object Model – The APIC is the brains of ACI and the repository of the very complex but well-documented management information tree, providing centralized access to all fabric- and tenant-related information. The APIC offers very powerful scoping and filtering capabilities that make it easy to get the exact data you need. Scoping lets you specify how much of the tree a query covers: for example, you can query an entire subtree by identifying a class name and requesting all the children under it, then narrow the result further by specifying a subtree class. Filtering then lets you select only the objects that match criteria you specify.
- Visore – Visore is an object browser that lets you retrieve objects by class name or distinguished name. This tool was critical in developing the user stories for our development team. It lets you browse the management information tree, moving up and down the tree as well as exploring all the relationships between objects. The following screenshot shows the Visore browser. In this case we were looking for the Client Endpoints object class, and Visore returned the three instances of that class. The ? instantly brings up detailed documentation about the object, and clicking the green parentheses shows either the children or the parent of the object.
- API Inspector – Since the APIC GUI relies on the APIC API, the API Inspector lets you see exactly what the GUI is querying to display or update its data. This was another critical tool that enabled us to quickly figure out which objects were being used to drive the APIC displays. For example, the first picture shows the APIC displaying the virtual machines that make up an EPG, while the second screenshot shows the API Inspector with the requests and responses that were sent to the APIC to generate that screen. This is incredibly helpful when trying to better understand the overall object model and which objects back which data in the APIC GUI.
- SDKs – Cisco does support a Python SDK. However, we did not make use of it due to its memory footprint; instead, we used the REST API directly.
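Since we went with the raw REST API rather than the Python SDK, a minimal sketch of what that looks like may be useful. The host name and credentials below are placeholders rather than a real fabric, and the helper functions are ours for illustration; the aaaLogin body and the query-target/target-subtree-class/query-target-filter parameters follow the documented APIC REST conventions discussed above.

```python
# Sketch of driving the APIC over raw REST instead of the Python SDK.
# "apic.example.com" and the credentials are placeholders, not a real fabric.
import json
from urllib.parse import urlencode

def login_body(username, password):
    """JSON body POSTed to /api/aaaLogin.json; the APIC replies with a
    session token carried as the APIC-cookie on subsequent requests."""
    return json.dumps(
        {"aaaUser": {"attributes": {"name": username, "pwd": password}}}
    )

def class_query(host, class_name, params=None):
    """Build a class-level query URL, optionally scoped and filtered."""
    url = f"https://{host}/api/class/{class_name}.json"
    if params:
        url += "?" + urlencode(params)
    return url

body = login_body("admin", "secret")
url = class_query("apic.example.com", "fvTenant", {
    "query-target": "subtree",                        # scope: whole subtree
    "target-subtree-class": "fvAEPg",                 # narrow to EPG objects
    "query-target-filter": 'eq(fvAEPg.name,"web")',   # keep matching EPGs only
})
print(url)

# A live session would look like this (requires the requests library
# and a reachable APIC, so it is left commented out here):
# import requests
# s = requests.Session()
# s.post("https://apic.example.com/api/aaaLogin.json", data=body, verify=False)
# print(s.get(url).json())
```

Building the URL separately from sending it is deliberate: the scoping and filtering parameters are where the real power is, and they can be tested without a fabric.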
In summary, working on ACI was a pleasure. This was one of the most well-done APIs I have worked with. The API, along with several supporting tools, made supporting this complex technology relatively easy. I have to give Cisco real credit for not only building what seems to be a fantastic product, but also focusing on all the tools needed to integrate it easily with other products.
After a busy week of Cisco Live activities in the World of Solutions, customer and partner meetings, and a stellar Aerosmith concert, it was time for the last keynote of the event. Past Cisco Live closing keynotes have provided thought-provoking insight into the future of technology, but from the unique perspective of often-mercurial guest speakers. This year, venerable TV host, actor, and narrator Mike Rowe fittingly took the stage in blue jeans, a t-shirt, and a baseball cap. He started his story with one of the funniest blooper videos from his Discovery Channel show, Dirty Jobs. It was another reminder of the wide breadth of experiences one man has endured in a quest to profile real people doing the jobs that make life possible.
Mike kicked off his presentation by sharing a grotesque and upsetting story that started him down the path of reporting on the awful and unnoticed jobs done across America every day. As the host of San Francisco's “Evening Magazine,” he was moved by his ailing father to pay tribute to the working class by reporting his show from within a city sewer. What followed was likely the filthiest, most disgusting experience one could imagine having in a major city's sewage system. I'll spare you the details, but the point was made: spending time in the shoes of hard-working men and women was a humbling and transformational experience. So much so that he ended up dedicating his career to exposing these jobs, no matter how nasty and repulsive they can be.
As his show gained popularity, the economic downturn of 2008 was the anagnorisis, or discovery moment, that led Mike to create the mikeroweWORKS Foundation. The goal of the foundation is to educate the public on how disconnected we've become as a society from the fundamentals. According to Mike, modern society has lost touch with how fortunate we are to have everyday services such as abundant food, electricity, roads, and plumbing. We tend to focus more on efficiency than effectiveness; one example is treating higher education as a sign of success rather than the mastery of a skill. We have conveniently forgotten the need for skilled workers to run and operate our modern world. His message was poignantly delivered in a remake of an old college promo poster: Work Smart AND Hard.
Mike Rowe wants to change our perspective on work. His call to action: use our IT knowledge to help connect the world through technology, but don't forget that skills and art remain critical to our country's future. IT may be the plumbing of the digital economy, but hard-working skilled people keep us running smoothly. Let's chip away at prevailing stereotypes while keeping America awesome, one router or one dirty job at a time.
Did you attend Mike’s keynote address yesterday? Let us know what you thought about it in the comments below!
Tagged with: cisco live, Cisco Live 2015, Mike Rowe
It comes as no surprise that when asked to discuss the future of technology during Cisco Live’s luminary keynote yesterday, Peter Diamandis focused heavily on the rapid acceleration of change and innovation globally. Global growth is universally anchored by the innovations brought to life through today’s technology advances – and innovation has never happened faster than it is happening today.
Over the past 10 years, the cost performance of bandwidth, computing, and storage has sharply declined, opening up these essential resources to a staggering portion of the global population. This rapid drop in cost has also lowered the cost associated with launching an Internet tech startup, effectively enticing many entrepreneurs and change-makers to turn their dreams into reality.
So, how do you compete in a world where change is happening faster than ever? According to Diamandis, you must disrupt. Not only disrupt stagnant industries with new, innovative ideas, but also disrupt your own ideas. Challenge your own processes and solutions to continuously streamline and improve your business.
Disruption is the foundation of Diamandis' Exponential Framework. This framework can be seen in today's biggest technology-based companies like Uber, AirBnB, and Apple. What do they all have in common?
They dematerialized their products or services. Essentially, these companies have removed excess “material” or middlemen and streamlined their offerings to be laser focused and highly impactful. They demonetized their competition by serving up a tailored experience that their industry predecessors simply couldn’t provide. But before they did all of that… they disrupted their industries.
At ScienceLogic, we understand challenging the status quo. In fact, our entire business was formed with the intent of disrupting the stagnant IT monitoring industry. Our founders were confident they could make a product that reimagined traditional monitoring solutions and that it could be delivered in one simple code base.
Gone are the days of using multiple tools to compile a single report, identify the root cause of an issue, or discover devices. Diamandis spoke about where technology is going to take us in the future, and ScienceLogic’s monitoring solution is already there.
Did you attend Peter Diamandis’ luminary keynote on Tuesday, June 9? Tell us what you thought about his presentation in the comments below!
Tagged with: cisco live, Cisco Live 2015
For many of us who have made IT the center of our professional careers, Cisco Live is one of the biggest annual industry events you can attend. Cisco's main event consistently delivers huge crowds, quality attendees, top vendors, and compelling informational sessions. Having attended many Cisco Live keynotes myself, there is another constant you can always count on: a dynamic keynote presentation from stalwart CEO and Chairman John Chambers. But this year was different: this was to be John Chambers' final keynote as CEO before handing the reins to Cisco veteran Chuck Robbins.
After a colorful and loud opening by the up-and-coming pop sensation OK Go, Mr. Chambers took the stage one more time to share his insight into recent IT trends and to shed light on Cisco's future strategy. The keynote had a core theme of Fast IT and the Rise of the Digital Age. As with past keynotes, Mr. Chambers reflected on a changing market with many disruptions and transitions.
With the pace of change increasing every quarter, successful companies (and countries) will be required to adapt quickly and become more digital, or face being disrupted by faster, more agile competitors. Given Cisco's broad portfolio and experience, the keynote focused on its digital strategy differentiators: architectures and intelligent networks; compute-network-storage; ACI (SDN, SLN, NFV); security; IoT/IoE; cloud/Intercloud; unique processes; and finally, a shift in focus toward outcomes.
As with most of his CLUS keynote presentations, John credited Cisco's market-share success to being able to adapt more quickly than its competitors by investing in market transitions, changing its culture, and focusing on “Exponential Thinking.” Yet this year's presentation had a different tone and a sense of urgency. He emphasized that we are at an inflection point regardless of industry: if you don't reinvent yourself in today's economy, you will not exist in 10 years. Cisco believes its focus on architecture is the basis for enabling the IT transition to a digital business, and that focusing on acceleration, simplicity, operational rigor, and culture will keep it in its leadership position. Given its massive product portfolio and partner ecosystem, you can't count Cisco out of staying at the top of its industry.
Managing the transition from the information age to the digital age will define the winning companies and countries in the years to come. John Chambers has a proven track record of success over the last 20 years, so we should pause and take heed of his parting comment: disrupt or be disrupted.
Did you attend the welcome Keynote on Monday, June 8? Tell us what you thought of it in the comments below!
Tagged with: cisco live, Cisco Live 2015
We’re a few short days away from the kickoff of Cisco Live in San Diego, CA! The week is packed with excellent speakers and great information sessions; how do you begin to narrow down which sessions you’ll attend each day? Let us help you out.
We combed through the Session Catalog and selected five awesome cloud-focused sessions to add to your itinerary during Cisco Live! See below for the full roundup.
- Data Center and Cloud Strategy – Planning the Next 5 Years* (PSODCT-2088)
WHEN: Tuesday, June 9 – 10:00-11:00 AM
WHERE: 7B Upper Level
WHO: Shashi Kiran – Sr. Director, Data Center & Cloud Networking at Cisco
WHAT: Explore some of the top strategies organizations can factor into their planning cycles for building the next generation data center over the next 3-5 years. (*This session is also offered on Wednesday.)
- Demystifying How to Move Your Production Applications to the Cloud (PCSZEN-1009)
WHEN: Tuesday, June 9 – 1:45-2:15 PM
WHERE: Pre-zen-tation Showcase
WHO: Mark Duvoisin, VP of Sales at Dimension Data
WHAT: Most enterprises will agree that cloud is the way to go, whether it’s via public, private or a hybrid technology – but where do they begin? How can your organization benefit from this next step to the cloud? During this session, attendees will get an easy-to-use checklist for selecting production applications and migrating them to the cloud.
- Data Center and Cloud Strategy – Planning the Next 5 Years* (PSODCT-2088)
WHEN: Wednesday, June 10 – 10:00-11:00 AM
WHERE: 28B Upper Level
WHO: Shashi Kiran – Sr. Director, Data Center & Cloud Networking at Cisco
WHAT: Explore some of the top strategies organizations can factor into their planning cycles for building the next generation data center over the next 3-5 years. (*This session is also offered on Tuesday.)
- Inside Cisco IT: Secure and Simplified Cloud Services with ACI (BRKCOC-1000)
WHEN: Wednesday, June 10 – 1:00-3:00 PM
WHERE: 30D – Upper Level
WHO: Benny Van De Voorde – Network Architect, Cisco; Erich Latchford – IT Manager, Cisco
WHAT: Learn how Cisco IT is designing next-generation application aware solutions and the new policy models required for this journey. Cisco IT is migrating all traditional applications to a radically simplified compute platform and programmable network. Application Centric Infrastructure, or ACI, will significantly reduce the network complexity and improve security, while reducing application deployment cycles. Additionally, the presenters will share the experience and lessons learned from their journey transforming applications and platforms to an infrastructure aware architecture.
- Building Hybrid Clouds in Amazon Web Services with the CSR 1000v (BRKARC-2023)
WHEN: Thursday, June 11 – 8:00-9:30 AM
WHERE: 15B Mezz.
WHO: Chris Hocker – CSE, Cisco; Steven Carter – CSA, Cisco
WHAT: Learn how to leverage Cisco’s Cloud Services Router (CSR 1000v) and related technologies for deploying secure, hybrid clouds for enterprise workloads and network services. Attendees will also learn more about using the CSR 1000v with their service provider to extend into the public cloud through Direct Connect, and much more.
- Cloud Consumption in North America (PNLCLD-2202)
WHEN: Thursday, June 11 – 10:00-11:30 AM
WHERE: 8 Upper Level
WHO: Ronnie Scott – Data Center TSA, Cisco; Greg Walker – Business Development Manager, Dimension Data Canada; David Senf – VP, Infrastructure & Cloud, IDC Canada; Gerald McElwee – Makita North America
WHAT: Who is using cloud services? What are they using them for? What can we learn from their experiences? Those are some of the questions this panel will discuss. They’ll look at public, private and hybrid cloud services, as well as how the “intercloud” will change the marketplace going forward. Plus, many additional cloud-related topics you don’t want to miss.
For more information on any of the sessions listed above, visit the Session Catalog and search using the alphanumeric code listed beside each session title.
Was this helpful? You can thank us in person! Swing by booth 1919 during Cisco Live and enter to win a drone or attend the ScienceLogic Party on Tuesday, 6/9. We’ll see you in San Diego!
Register for the party here: http://www.bit.ly/SLCLUS15
Tagged with: cisco live, Cisco Live 2015, hybrid cloud, Hybrid IT
Every day IT operations teams are pushed to do more, faster, while cutting costs. While legacy IT infrastructure stretches to keep up with the increasing demands and scalability needed for today’s enterprises, cloud adoption presses ahead but brings its own challenges. The costs and complexity of legacy infrastructure are high, causing IT professionals to spend large amounts of time firefighting operational issues instead of innovating for future business needs.
Luckily, with Nutanix hyperconverged infrastructure, those legacy IT roadblocks are a thing of the past. This powerful solution lets organizations build a modular, scalable service platform with predictable costs, all wrapped in a cocoon of hyperconverged simplicity.
ScienceLogic has seen an increase in Nutanix deployments at customer sites, and there is good reason for it. Nutanix intelligence seamlessly strings together pools of virtualized compute nodes and aggregated storage resources without a SAN.
Now, if the datacenter contained nothing but Nutanix technology, the practice of IT would be much simpler. Unfortunately, today’s data infrastructures remain unwieldy beasts with many layers of complexity. Legacy apps, virtualization, and cloud are all part of a roiling mass of dependencies and relationships.
And, that’s where ScienceLogic shines.
ScienceLogic offers customers a scalable, multi-tenant monitoring and management platform capable of visualizing on-premises and cloud resources together through a single pane of glass.
With the help of ScienceLogic’s automatic discovery tool, organizations can profile their Nutanix environment with ease, achieving operational visibility on day one. How does it work, you might ask?
ScienceLogic’s solution collects and intelligently interprets performance, capacity, availability, and health data for all facets of the Nutanix environment. From lowly disks all the way up to nodes, hypervisors, containers, clusters, and applications, ScienceLogic can monitor it all.
But, all that information means very little unless you can see it. Using ScienceLogic’s solution, Nutanix deployments improve overall “time-to-value” by seamlessly integrating the new technology into the larger datacenter operations framework, including hybrid IT and cloud resources.
Nutanix has demonstrated that hyperconverged infrastructure is a foundational, “next-gen” data center technology. ScienceLogic is proud to be a Nutanix partner. And, you can’t argue with success.
Tagged with: Partners
Welcome back to our series on the top 20 hybrid IT tools you need to successfully manage a complex hybrid infrastructure. If you’ve been following along, this is now the third of four blog posts. Did you miss the first two posts? Don’t worry! You can read our first post here, and our second post here.
Today’s post is a nice mixture of monitoring, tracking, and automation. As mentioned in other posts, we welcome any feedback, comments and questions. Please feel free to drop a line in the comment field below.
- Application Monitoring – Understanding the basic level of infrastructure is vital for ensuring the best service performance. But application performance is also an important factor and should be included in a holistic monitoring framework. Application performance, along with operating-system-level performance and server-based monitoring, can give a truly holistic view of a service.
- Cloud Management and Monitoring – With the world increasingly moving to multi-cloud environments, having a solution to manage and monitor across different public and private cloud environments is now fundamental to operating as an IT organization.
- Service Level Management – In the end, whether you are an enterprise or a service provider, you are delivering a service to your customers. Most organizations have multiple services with different service levels assigned. Keeping track of those service levels and ensuring you meet them can be a challenge, which is where a product with service level management abilities will help.
- Ticketing – Keeping track of actions performed on equipment as well as incoming help desk requests and actions performed against those requests is one of the most basic aspects of IT service support. Any ticketing solution you examine should be able to either automatically log incidents based upon events happening in the infrastructure or have an integration with a monitoring solution that provides this capability.
- Runbook Automation – IT operations professionals face continual pressure to do more with less. An automation platform can help by reducing the need for human involvement, ultimately freeing up staff to take on more strategically important issues.
OK, there’s a nice little knowledge bomb to start off your week. Be on the lookout for the last post in this series, covering the final five. And if you want to see all 20 in one place, feel free to download this white paper, which details all 20 tools you need for hybrid IT environments.
Tagged with: cloud computing, IT Operations Management
Welcome back to our series on the top 20 tools you need for successful hybrid IT monitoring! If you missed our first post covering tools #16-20, be sure to check it out here.
If you’ve made it this far in today’s post, you’ve either:
- Already read part 1 of this series
- Skimmed through the bullets
- Skipped everything and are ready to get into the meat of this post
In any case, I won’t keep you waiting. Our post today covers tools #11-15, focusing primarily on monitoring the different technology layers in your infrastructure. Let’s get started!
- Network Monitoring – Understanding the health of the most basic elements within your infrastructure, such as switches and routers, is vital to ensure your services can deliver as needed. Without the network functioning, your interdependent systems have no way of communicating and your services simply stop operating.
- Server Monitoring – While virtual technologies get most of the attention in IT environments today, the underlying hardware that provides the platform for the virtualized technologies is equally important.
- Storage Monitoring – With compliance and data-retention guidelines becoming ever stricter, knowing that you have enough storage capacity and that your storage is available is crucial.
- Operating System Monitoring – Few organizations are strictly tied to one server operating system. Understanding the CPU performance and memory from an OS perspective can be important when using public cloud based resources, as the public cloud provider may report one CPU performance number, while the OS may experience quite a different performance level.
- Hypervisor Monitoring – Sitting on top of your physical infrastructure are a number of virtualized servers. Understanding the health, availability, and location of these hypervisors is a complex task, with virtual resources spinning up and down in seconds.
That wraps up our second post on the top 20 tools you need to successfully manage a hybrid IT environment! We’ll be covering tools #6-10 in our next post, so stay tuned!
To see all of the tools you need for hybrid IT monitoring in one place, download our free white paper: The Top 20 Tools Needed for Hybrid IT
As always, your comments and thoughts are welcome and encouraged! Is there something we’re missing? Or, just as important, if you think I’ve missed the mark with my first 10 (the 5 in the previous post and the 5 in this one) – please let us know!
See you in a few days back here on the ScienceLogic blog.
Tagged with: cloud management, hybrid cloud, Hybrid IT, White Paper
Hybrid IT is the new standard in many enterprises across the globe, but for many it is also uncharted territory. A common question that we field is, “what tools do we need to ensure the performance of a hybrid IT environment?” However, this seemingly simple question does not yield an answer quite as simple.
For many years there has been a somewhat antagonistic relationship between IT and the rest of the enterprise. Businesses have always wanted more services and better performance at reduced cost. This demand has only been accelerated by new consumer-focused, cloud-based applications, which promise nearly 100% uptime with peak performance.
Historically, when the IT industry has been challenged to do more with fewer resources, they have always responded with innovation. First, there was virtualization. With virtualization we were promised more efficient use of servers and better control over cooling and power costs. This innovation helped for a short window, but cheap compute and storage created an influx of applications focused on using more compute and more storage, because it was available.
Very quickly, IT was again asked to do more with less, and again it responded through innovation, this time with public cloud services such as Amazon Web Services (AWS) and Microsoft Azure.
At ScienceLogic, we’ve worked hard to understand these complex hybrid IT environments, including what makes them work well, and where they fail. During this series of posts, we will cover the top 20 tools needed for hybrid IT.
With the arrival of public cloud services, IT organizations quickly took advantage of reduced costs for compute and storage. At the same time, they took advantage of the wide range of service levels the cloud providers offered, to ensure they had the right service level for the right workload.
This development brought with it a potent cocktail of greater expectations from users, reduced budgets and a complete hybrid infrastructure. This new infrastructure brings us back to our initial question – “What tools do we need to ensure the performance of a hybrid IT environment?”
Take a look below at the first chunk of tools to help you ensure performance of your hybrid IT environment:
- Data Center Infrastructure Management – DCIM solutions monitor the environmentals within a data center, as well as some servers and network devices. However, their scope tends to focus on environmentals.
- Power Distribution Unit (PDU) Monitoring – At the most basic level, if you don’t have power coming to your system, nothing else can operate. Understanding the status of backup batteries, and even environmentals such as the temperature of the PDUs in your internal data center, can help eliminate or mitigate possible power issues.
- Asset Management – With servers and storage being automatically created and brought down in seconds across both virtual and cloud-based infrastructures, tracking the use of assets has never been more complicated. An IT asset management system is designed to help an organization track all of its IT assets (warranties, vendors, configurations, and more) and is a must in this hybrid IT world.
- Discovery – An asset management system is only as good as the data within it. A discovery solution is designed to automatically discover any onsite and offsite resources that appear, and automatically load them into your asset management system. This becomes even more necessary in a world where any employee with a credit card can purchase compute and storage capacity in a matter of minutes.
- Device and Dependency Mapping – With complexity only increasing, understanding how all of the different elements in an IT environment relate is becoming nearly impossible to do manually. A device and dependency mapping solution takes care of that concern by automatically mapping the dependencies across different technologies and elements.
There you have it — the first five tools needed to ensure peak performance of your hybrid IT environment. What did you think? Let us know in the comments below!
We’ll be back soon with the next five tools to help you ensure 100% uptime of your hybrid IT environment. To see the complete list of tools, see our white paper on the topic here: http://m.sciencelogic.com/top-hybrid-it-tools
Tagged with: hybrid cloud
During the 2014 ScienceLogic Customer Symposium, we hosted a session to introduce our plans for “tagging” features in our software releases. As you may recall from the session, tagging commonly refers to two types of tags used in the industry: comma-separated values (CSV) and key-value pairs.
The first method, Tags, are comma-separated values that can be assigned to any interface in order to filter on interfaces of interest. The second method, Custom Attributes, may be thought of as key-value pairs. Our custom attributes come in two varieties: base and extended.
Tags were introduced for interfaces prior to our 2014 Customer Symposium, with plans of introducing custom attributes in early 2015. I’m happy to report to our customers that our new software release, 7.5.4, introduces initial support for custom attributes.
As we examined use cases for custom attributes, we decided there were two distinct use cases that each warranted a unique way of handling key value pairs:
A base attribute, once defined for an entity type, belongs to every entity of that type. Currently, we support the following entity types: device, asset, interface, vendor, and theme.
A base attribute is very useful when integrating with third-party systems. If you wanted to tie ScienceLogic into an existing CRM tool, you’d want the resource IDs from that CRM tool stored in EM7 so that the two systems stay closely correlated. One might create a base attribute of “CRM_device_id” that could be used to reference the third-party CRM from within EM7 without having to inject any additional data on the CRM side.
Extended attributes belong only to specific entities. Let me provide an example of a situation where you will find extended attributes handy:
Imagine you want every physical router to have an attribute identifying its plug type; this would leverage a custom attribute of “Connector Type,” with most devices having a value of “C14.” Because only a subset of devices has a connector type, you would use an extended attribute.
Another example of an extended attribute would be adding a “WAN Type” attribute only to WAN interfaces, holding the verbose common speed (T1, E1, T3, 10Mb, 100Mb, etc.). I would not want to see the “WAN Type” attribute listed on every interface, since it is only relevant to WAN links.
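To make the base-versus-extended distinction concrete, here is a small conceptual sketch. This is not EM7's internal data model, just an illustration: the base attribute appears on every device, while the extended attribute appears only on the devices it applies to.

```python
# Conceptual sketch (not EM7 internals) of base vs. extended attributes:
# a base attribute exists on every entity of its type, while an extended
# attribute appears only on the entities where it is relevant.
devices = [
    {"name": "core-router-1",
     "CRM_device_id": "A-1001",     # base attribute: present on every device
     "Connector Type": "C14"},      # extended attribute: physical devices only
    {"name": "cloud-vm-7",
     "CRM_device_id": "A-1002"},    # no "Connector Type" -- it doesn't apply
]

def extended_value(device, attr):
    """Extended attributes may be absent; base attributes never are."""
    return device.get(attr)  # None when the attribute does not apply

print([extended_value(d, "Connector Type") for d in devices])
# -> ['C14', None]
```

The asymmetry is the point: code consuming base attributes can rely on their presence, while code consuming extended attributes has to tolerate their absence.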
With the 7.5.4 release, we have introduced the initial API commands to create and edit custom attributes. In addition, the first GUI element of custom attributes has been introduced as an option in the active device selector to dynamically manage group membership leveraging custom attributes.
As with building a house, one must build in layers: Plans, foundation, framing, plumbing, electrical, roof, drywall, etc. As we embark on 2015, I’d say the 7.5.4 release has many foundation elements and some framing. We’re on our way to constructing the nicest house on the block.
For those who are comfortable with the API, you can see and start testing the functionality under /api/custom_attribute/. For those not familiar with the API: additional features, functionality, GUI elements, and more are being worked on as I type this post.
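If you want a head start on testing, a request against that endpoint might be sketched as follows. Note the caveats: the payload field names and the host below are illustrative guesses, not the documented EM7 schema, so check the API documentation shipped with 7.5.4 for the exact fields before relying on them.

```python
# Hypothetical sketch of creating a custom attribute through the
# /api/custom_attribute/ endpoint mentioned above. The "name" and "type"
# fields and the host are assumptions for illustration, not the
# documented EM7 request schema.
import json

attribute = {
    "name": "CRM_device_id",   # the base-attribute example from this post
    "type": "string",          # assumed field name and value
}
body = json.dumps(attribute)
print(body)

# A POST of this body with basic auth would create the attribute, e.g.:
# curl -u user:pass -H "Content-Type: application/json" \
#      -X POST -d "$BODY" https://em7.example.com/api/custom_attribute/
```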
We plan on adding incremental functionality in each release cycle throughout 2015 – and we’re off to a great start!
Tagged with: hybrid cloud