Packet Pushers: Detect, Diagnose, And Act Podcast

Podcast: Download (46.2MB)
Keith Sinclair, CTO and progenitor of NMIS, joins Greg Ferro on Packet Pushers.

They discuss:

  • What NMIS does and how it works
  • Protocol support including SNMP, WMI, SSH, RESTful APIs, and more
  • The persistence of SNMP
  • Opmantek’s approach of detect, diagnose, and act
  • Automation capabilities
  • How NMIS uses dashboards, portals, and maps

Why Do We Need a Dynamic Baseline and Thresholding Tool?

With the introduction of opCharts v4.2.5, richer and more meaningful data can be used in decision making. Forewarned is forearmed, as the proverb goes; a quick Google search tells me it means “prior knowledge of possible dangers or problems gives one a tactical advantage”. The reason we baseline and threshold our data is to receive alerts forewarning us of issues in our environment, so that we can resolve smaller issues before they become bigger ones. Being proactive increases our Mean Time Between Failures. If you are interested in accessing the Dynamic Baseline and Thresholding Tool, please Contact Us.

Types of Metrics

When analysing time series data, you quickly start to identify a common trend in what you are seeing. Some of the metrics you are monitoring will be “stable”: they have very repeatable patterns and change in a similar way over time. Other metrics will be more chaotic, with a discernible pattern difficult to identify. Take, for example, two metrics: response time and route number (the number of routes in the routing table). You can see from the charts below that response time is more chaotic, with some pattern but little real stability, while the route number metric is solid and unwavering.
[Chart: response time on router “meatball”]
[Chart: route number on router “meatball”]

Comparing Metrics with Themselves

The router meatball is a small office router with little variation in its routing table; a WAN distribution router would generally be stable too, but with a little more variability. How could I get an alarm from either of these without configuring some complex static thresholds?

The answer is to baseline the metric as it is and compare your current value against that baseline. This method is very useful for values which are very different on different devices but where you want to know when the metric changes. Examples are route number, the number of users logged in, the number of processes running on Linux, and response time in general, but especially the response time of a service.

The opCharts Dynamic Baseline and Threshold Tool

Overall, this is what opTrend does: the sophisticated statistical model it builds is very powerful and helps spot these trends. We have extended opTrend with some additional functionality in the baseline tool so that you can quickly get alerts from the metrics which are important to you.

What is really key here is that the baseline tool detects downward changes as well as upward changes, so if your traffic fell outside the baseline you would be alerted.

Establishing a Dynamic Baseline

Current Value

First, I want to calculate my current value. I could use the last value collected, but depending on the stability of the metric this might cause false positives. As NMIS has always supported, using a larger threshold period when calculating the current value can produce more relevant results.

For very stable metrics a small threshold period is no problem, but for wilder values a longer period is advised. For response time alerting, using a threshold period of 15 minutes or greater is a good idea: it means there is some sustained issue and not just a one-off internet blip. With our route number, however, we might be very happy to use the last value and get warned sooner.
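To make this concrete, here is a minimal Python sketch of a threshold-period average, assuming samples arrive as (timestamp, value) pairs; the function name and structure are illustrative only, not the NMIS API.

from datetime import datetime, timedelta

def current_value(samples, threshold_minutes=15, now=None):
    # Average of all samples collected within the threshold period.
    # samples: list of (datetime, float) pairs.
    now = now or datetime.now()
    cutoff = now - timedelta(minutes=threshold_minutes)
    recent = [v for t, v in samples if t >= cutoff]
    if not recent:
        return None  # nothing collected inside the period
    return sum(recent) / len(recent)

A 15-minute period smooths one-off blips; setting threshold_minutes down to the collection interval approximates “just use the last value”.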

Multi-Day Baseline

Currently two types of baselines are supported by the baseline tool. The first is what I would call opTrend Lite, which is based on the work of Igor Trubin’s SEDS and SEDS Lite. This method calculates the average value for a small window of time, looking back over the configured number of weeks. So if my baseline window was 1 hour over the last 4 weeks, and the time now is 16:40 on 1 June 2020, it would look back and gather the following:

  • Week 1: 15:40 to 16:40 on 25 May 2020
  • Week 2: 15:40 to 16:40 on 18 May 2020
  • Week 3: 15:40 to 16:40 on 11 May 2020
  • Week 4: 15:40 to 16:40 on 4 May 2020

With the average of each of these windows of time calculated, I can now build my baseline and compare my current value against that baseline’s value.
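As a rough illustration of how those windows combine, here is a hedged Python sketch of this SEDS-lite style calculation; in practice the tool pulls these statistics from NMIS’s performance database rather than from raw sample lists.

from datetime import datetime, timedelta

def multi_day_baseline(samples, weeks=4, window_hours=1, now=None):
    # Average a window of samples at the same time of day, one week
    # apart, for each of the last `weeks` weeks, then average those
    # window means into a single baseline value.
    now = now or datetime.now()
    window_means = []
    for week in range(1, weeks + 1):
        end = now - timedelta(weeks=week)
        start = end - timedelta(hours=window_hours)
        window = [v for t, v in samples if start <= t <= end]
        if window:
            window_means.append(sum(window) / len(window))
    return sum(window_means) / len(window_means) if window_means else None

With now set to 16:40 on 1 June 2020 and the defaults, the four (start, end) windows are exactly the ones listed above.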

Same-Day Baseline

Depending on the stability of the metric, it might be preferable to use the data from that same day. For example, if you had a rising and falling value, it might be preferable to use just the last 4 to 8 hours of the day for your baseline. Take this interface traffic as an example: the input rate varies, while the output rate is stable, jumps to a sudden plateau, and is then stable again.

[Chart: interface bits per second on router “asgard”]

If this were a weekly pattern, the multi-day baseline would be the better option. But if it happens more randomly, using the same-day baseline would generate an initial event on the increase; the event would then clear as the ~8Mbps became normal, and when the value dropped again another alert would be generated.

Delta Baseline

The delta baseline is only concerned with the amount of change against the baseline. For example, from a sample of data from the last 4 hours we might see that the average of a metric is 100. We then take the current value, for example the spike of 145 below, and calculate the change as a percentage: a change of 45%, resulting in a Critical event level.

[Chart: number of processes on server “amor”]

The delta baseline configuration then allows for defining the level of the event based on the percentage of change. You can see the default configuration in the example below; this table is how to visualize it.

  • 10% – Warning
  • 20% – Minor
  • 30% – Major
  • 40% – Critical
  • 50% – Fatal

If the change is below 10%, the level will be Normal; between 10% and 20%, Warning; between 20% and 30%, Minor; and so on, until over 50% it is considered Fatal.

In practice this spike was brief: using the 15 minute threshold period (current is the average of the last 15 minutes), the value for calculating change would be 136, and the resulting change would be 36%, so a Major event. The threshold period dampens the spikes to remove brief changes and allows you to see changes which last longer.
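Here is a minimal Python sketch of the delta calculation using the default table above; the mapping is configurable, and this code is illustrative rather than the tool’s implementation.

# Default change-percentage thresholds, highest first.
DELTA_LEVELS = [(50, "Fatal"), (40, "Critical"), (30, "Major"),
                (20, "Minor"), (10, "Warning")]

def delta_event_level(baseline_avg, current):
    # Percentage change of the current value against the baseline average.
    change = abs(current - baseline_avg) / baseline_avg * 100
    for threshold, level in DELTA_LEVELS:
        if change >= threshold:
            return change, level
    return change, "Normal"

print(delta_event_level(100, 145))  # (45.0, 'Critical') - the raw spike
print(delta_event_level(100, 136))  # (36.0, 'Major') - dampened over 15 minutes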

Installing the Baseline Tool

Copy the file to the server and run the following; upgrading is the same process.

# extract the release tarball (replace X.Y with the version you downloaded)
tar xvf Baseline-X.Y.tgz
cd Baseline/
# run the installer with root privileges
sudo ./install_baseline.sh

Working with the Dynamic Baseline and Thresholding Tool

The Dynamic Baseline and Threshold Tool includes various configuration options so that you can tune how the algorithm learns, depending on the metric being used. The tool comes with several metrics already configured. It is a requirement of the system that the stats modelling is completed for each metric you want to baseline; this is how the NMIS API extracts statistical information from the performance database.

Conclusion

For more information about the installation and configuration steps required to implement opCharts’ Dynamic Baseline and Thresholding Tool, it is all detailed in our documentation – here.

7 Steps to Network Management Automation & Engineer Sleep Insurance

Quietly, somewhere in an office downtown, bearings designed to last for 25k hours have been running non-stop for over forty-three thousand. The fan was cheaply made by machine from components sourced over several years across a dozen providers. It sat boxed for weeks before it was installed in the router chassis, which was itself then boxed up. Two months at sea, packed tight in a shipping container, then more months bounced around and shuffled from truck to warehouse, and back to a parcel delivery service. Finally, the device was configured, boxed, and shipped to its final installation point. Stuffed into a too-tight closet with no air circulation, this mission-critical router has been running non-stop for the past five years. It’s a miracle, really, that it worked this long.

Fan speed was the first thing to be affected by the bearing failure.

Growing friction on the fan’s impeller shaft caused the amperage draw to increase to compensate and maintain rotational speed. When the amperage draw maxed out, rotations per minute (RPM) dropped. With the slower fan speed came less airflow, and with lower airflow the chassis temperature increased.

Complex devices, like routers, require low operating temperatures. The cooler it is, the easier it is for electrons to move. As the chassis temperature increased the router experienced issues processing the data packets traversing the interfaces. At first it was an error here or there, then routine traffic routing ran into problems and the router began discarding packets. From there things got much worse.

It’s late Saturday evening and your weekend has been restful so far: a night out with your significant other, a movie and dinner. It’s late now and you’re ready for bed when your phone chirps. The text message is short:

Device: Main Router

Event: Chassis high temperature with high discard output packets

Action Taken: Rerouted traffic by increasing OSPF cost

Action Required: Fan speed low, amperage high. Engineer investigate for repair/replacement.

A fan went bad. What’s next?

The system had responded as you would – it rerouted traffic off the affected interface, preventing a possible impact to system operation. Adding a note to your calendar to investigate the router first thing Monday morning, you turned in for a good night’s sleep.

Our Senior Engineer in Asia-PAC, Nick Day, likes to refer to Opmantek’s solutions as “engineer sleep insurance”. Coming from a background in managed service providers, I can appreciate the situation. Equipment always breaks on your vacation time, often when the on-call engineer is as far away as possible, and with little useful information from the NMS. This was a prime scenario we used when building out our Operational Process Automation (OPA) solution.

Building a Solution

The solution leverages the combined abilities of opTrend, which identifies operational parameters outside of trended norms, and opEvents, which correlates events and automates remediation. With the addition of opConfig, configuration changes to network devices can also be automated. Operational Process Automation (OPA) builds on this statistical analysis and rules-based heuristics to automate troubleshooting and remediation of network events, which in turn reduces the negative impact on user experience.

Magicians never reveal their secrets…but we’ll make an exception.

Now let’s see how this was accomplished using the above example. At its root, opTrend is a statistical analysis engine. opTrend collects performance data from NMIS, Opmantek’s fault and performance system, and determines what is normal operation. Looking back over several weeks, usually twenty-six, opTrend determines what is normal for each parameter it processes. It does this hour by hour, considering each day of the week individually. So, Monday morning 9-10am has its own calculation, separate from 3-4pm Saturday afternoon. By looking across several weeks, opTrend can normalize for things like holidays and vacation time.

Once a mean for each parameter is determined, opTrend calculates a statistical deviation for the parameter and creates a window of three standard deviations above and below the mean. Any activity outside this window triggers an opTrend event into NMIS. These events can be in addition to those generated by NMIS’s Thresholding and Alert system, or in place of them.
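The shape of that model can be sketched in a few lines of Python. This is a simplified illustration of the hour-by-hour, day-of-week banding, not opTrend’s actual implementation.

import statistics
from collections import defaultdict

def hourly_bands(samples, sigmas=3):
    # Group samples by (weekday, hour) and compute a normal band of
    # mean +/- `sigmas` standard deviations for each slot.
    slots = defaultdict(list)
    for t, v in samples:  # samples: (datetime, float) pairs
        slots[(t.weekday(), t.hour)].append(v)
    bands = {}
    for slot, values in slots.items():
        if len(values) < 2:
            continue  # need at least two points for a deviation
        mean = statistics.mean(values)
        sd = statistics.stdev(values)
        bands[slot] = (mean - sigmas * sd, mean + sigmas * sd)
    return bands

def is_anomalous(bands, t, value):
    # True when a value falls outside the learned band for its time slot.
    band = bands.get((t.weekday(), t.hour))
    return band is not None and not (band[0] <= value <= band[1])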

In the example above, opTrend would have seen the chassis temperature exceed the normal window of operation. Had fan speed and/or amperage also been processed by opTrend (they are not by default, but can be configured to be if desired), these would have been reported as low fan speed and high amperage.

This event from opTrend would have been sent to NMIS, then shared with opEvents for processing. A set of rules, or Event Actions, looked for events that could be caused by high temperature; often related to interface packet errors or discards. With wireless devices (WiFi and RF) this may affect signal strength and connection speed. A similar result could be handled using a Correlation Rule, which would group multiple events across a window of time into a new parent event. Both methods are relevant and have their own pros and cons.

opEvents now uses the high temperature / high discards event to start a troubleshooting routine. This may include directing opConfig to connect to the device via SSH and execute CLI commands to collect additional troubleshooting information. The results of these commands can have their own operational life – being evaluated for error conditions, firing off new events, and themselves starting Event Actions.

Let’s review the process flow:

  1. NMIS collects performance data from the device, including fan speed, temperature and interface performance metrics.
  2. opTrend processes the collected performance data from NMIS and determines what is normal/abnormal behavior for each parameter.
  3. Events are generated by opTrend in NMIS, which are then shared with opEvents.
  4. opEvents receives events from opTrend identifying out-of-normal temperature and interface output discards. These events are then correlated into a single synthetic event, given a higher priority, and evaluated for action.
  5. An Event Action rule matches for a performance-impacting event on a Core device running a known OS. This calls opConfig to initiate Hourly and Daily configuration backups, then execute a configuration change to increase the OSPF cost on the interface, forcing traffic to be rerouted off it.
  6. opEvents also opens a helpdesk ticket via a RESTful API, then texts the on-call technician with the actions taken, and recommended follow-on activities.
  7. Once traffic across the interface drops, the discard errors will clear, generating an Up-Notification text to the on-call technician.
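As a very rough sketch of steps 4 through 6, here is the correlation-and-action logic in Python. opEvents expresses this in its own Event Action and Correlation rules, not Python, and the helper functions here (backup_config, raise_ospf_cost, open_ticket, notify_oncall) are hypothetical stand-ins for the real integrations.

def backup_config(node):
    print(f"opConfig: backing up configuration on {node}")

def raise_ospf_cost(node, iface):
    print(f"opConfig: raising OSPF cost on {node}/{iface}")

def open_ticket(event):
    print(f"helpdesk API: opening ticket for {event['type']}")

def notify_oncall(event):
    print(f"SMS to on-call: {event['type']}, remediation attempted")

def correlate_and_act(events):
    # Correlate temperature and discard events into a single synthetic
    # parent event, then run the remediation actions from the list above.
    temps = [e for e in events if e["type"] == "high_temperature"]
    discards = [e for e in events if e["type"] == "ifOutDiscards"]
    if temps and discards:
        parent = {"type": "temp_with_discards", "priority": "high",
                  "node": temps[0]["node"], "iface": discards[0]["iface"]}
        backup_config(parent["node"])
        raise_ospf_cost(parent["node"], parent["iface"])
        open_ticket(parent)
        notify_oncall(parent)

correlate_and_act([
    {"type": "high_temperature", "node": "Main Router"},
    {"type": "ifOutDiscards", "node": "Main Router", "iface": "Gi0/1"},
])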

This is an example of what we would consider a medium-complexity automation. It is composed of several Opmantek solutions, each configured (most automatically) to work together. These solutions share and process fault and performance information, correlate the resulting events, and apply a single set of event actions to gather additional information and configure around the event. When applying solution automations, we advocate a crawl-walk-run methodology where you start by collecting troubleshooting information (crawl), then automate simple single-step remediations (walk), then slowly deploy multi-path remediations with control points (run).

Contact Us & Start Automating Your Network Management

Contact our team of experts here if you would like to know more about how this solution was developed, or how Operational Process Automation can be leveraged to save on man-hours and reduce Mean Time to Resolve (MTTR).

How to manage capacity before it becomes a problem.

Capacity Management is the proactive management of any measurable finite resource.

This blog gives you a simple-to-follow outline of how to properly manage capacity, so that if you ever have to resolve capacity issues you are ahead of the curve and ready to implement remediation.

Capacity management has been considered by many as difficult to achieve. But all worthwhile achievements take discipline to execute and accomplish. So, with careful consideration, monitoring and planning you can ensure that it becomes manageable and deliverable.

Don’t forget that as part of any new deployment or upgrade, and as budget allows, additional demand should be incorporated into the design, with additional capacity ready to service the new capacity peaks. The new peak load is accounted for and new baselines are created.

Analysis Paralysis

The overall concept is that you don’t create reports just to create reports. People might read them once and never again. But as it’s automated, they will continue being sent and remain unopened, filtered or archived. This is not the result you want.

The behaviour you want to drive is for people to use your reports. So, you create reports that drive actions. For example, node health reports can provide checklists to drive daily troubleshooting, flag maintenance check-ups, and prompt upkeep or repair of devices. Use daily event reports to help the engineering team understand the normal background noise and static across your network, or to drive a cleanup. Then there are weekly or monthly reports: a WAN/interface report to support bandwidth and equipment investment might only need to be produced monthly, while a faster-growing capacity consumption resource should be reported on weekly.

Detecting capacity issues through threshold management.

The problem with capacity issues is that they can present themselves in so many different ways, with the result that something isn’t working the way it was, or should be. Just like I talked about in my blog on bandwidth congestion, a user will report that “some application” doesn’t work like it did yesterday, or a capacity threshold alarm has escalated. If you want to learn about root cause analysis, check out Mark’s video here –> MARKS WEBINAR.

Using Opmantek Products to manage capacity

Add your devices to NMIS (and while you’re at it, ensure that you have a naming convention to follow, have all your SNMP configured, and have your network documented):

  1. IP, Name and Community String
  2. Assign roles to devices (use the built-in Core, Distribution, Access roles)

Preparing Visibility

  1. Set up regular reports using opReports
    1. If you manage a network choose the network reports
    2. If you manage servers use the capacity report
    3. If you manage servers and networks do steps a + b
    4. Set up the scheduling – Have them emailed once a week in time for your planning and performance review session.
  2. Set up capacity dashboards using TopN views in opCharts
    1. Add TopN and Network Maps to your view (good practice)
    2. Create charts for your most important resources

Simple Alarming and Notifications

  1. Enable notifications for critical resource capacity issues – start with Critical and Fatal only, out of the list Normal/Warning/Minor/Major/Critical/Fatal.

Add more later as you gain insight.

  2. Set up email notifications for Threshold events to be sent to the right teams, based on the devices’ Role (Core, Distribution, Access) or Type (Server, Router, Switch).

Trending – for predictive capacity planning

  1. Enable opTrend to find anomalies in usage (events) and resources which are continuously trending outside of normal (Billboard)
    1. Notify on critical opTrend threshold events.
    2. Review opTrend Top of The Pops Billboard at your regular capacity review meetings.

Simple steps when managing capacity issues as incidents.

While not ideal, issues/incidents seen at the helpdesk could potentially originate from a change that took place on the network or in the environment. In the real world, even with the best change management implementation, a change or outage may cause a capacity issue somewhere and trigger an alarm.

Ask: what has changed? Has something in the environment changed?

Typically a capacity threshold breach is an indicator of:

    1. A new service added
    2. A new demand
    3. A network change
    4. Some other change
    5. A finite asset reaching a predetermined capacity

Approaches to Baselining for Monitoring and Support:

Look at all your resources, then review and categorise your resource types, e.g. Internet Connections, Site Links, etc. For each category, settle on baseline usage levels as percentages (Fatal, Critical, Major, etc.), which will be your starting baseline. It is critical to know your baseline, as all your threshold alarms will be triggered at the levels you set, and you want Notifications of Threshold Alarms only for the more serious alarms. You don’t want to “cry wolf.”
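As an illustration only, a starting baseline per category might be captured like this; the category names and percentages are placeholders to tune per site, not recommendations.

# Hypothetical starting baselines per resource category, in percent.
BASELINES = {
    "Internet Connections": {"Major": 80, "Critical": 90, "Fatal": 95},
    "Site Links": {"Major": 85, "Critical": 95, "Fatal": 98},
}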

Consider grouping your resources, for example: Core, Application, DMZ, Edge, Branch, Internet Links, General WAN etc.

And within each group, consider the following resources you want to monitor:

CPU, Memory, Bandwidth Utilisation

Start by using general thresholds for each based on the peak demands you have seen.

These are your proactive warnings that will send an alarm to your management platform. You may want to set some escalation rules for the resource for example:

85% – 95% → Major → Alarm Notification (business hours) → to the capacity team

>95%+ → Critical → Alarm Notification (24×7) → helpdesk/NOC
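A minimal Python sketch of those escalation rules; the thresholds and routing are the examples above, not product defaults.

def escalate(utilisation_pct):
    # Map a utilisation percentage to an event level and a recipient.
    if utilisation_pct > 95:
        return "Critical", "helpdesk/NOC (24x7)"
    if utilisation_pct >= 85:
        return "Major", "capacity team (business hours)"
    return "Normal", None

print(escalate(91))  # ('Major', 'capacity team (business hours)')
print(escalate(97))  # ('Critical', 'helpdesk/NOC (24x7)')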

Using the trend analysis provided by opTrend, you can identify very anomalous usage (it’s low when it should normally be high at that time of day) or proactively look at resources consistently trending up or down versus their normal levels. Ahead of time, we can start reviewing the resource for appropriate modification (upgrade, downgrade, offloading work, etc.). As the network continues to grow and support new services, the baseline will change over time (a sliding baseline), so capacity issues may “creep up” on you, as alarm thresholds may not be breached consistently enough to send an alert. It is important to look at the baseline’s “rate of change” over time as well to determine capacity needs (e.g. a 10% change over a one-week timeframe). When planning to increase capacity, be sure to allow for procurement and provisioning time.

Keep an eye on that sliding baseline and its rate of change, and capacity issues won’t “creep up” on you.

Agile RMM solutions for MSPs

Remote monitoring and management (RMM) is the process of tracking, monitoring, and managing endpoints for multiple clients. It is mostly used by managed service providers (MSPs) to provide IT services to organisations that outsource their IT requirements. Read on to find out how a self-hosted RMM solution can help MSPs increase functionality and save on operational costs.

Are you an MSP that wants to replace expensive RMM systems with a better solution?

As an MSP, did you know that you can replace multi-million dollar RMM systems by combining NMIS with opHA and opCharts? Opmantek offers a full-service software solution that is made to scale. Our products can be used in synergy, as a complete solution.

What do our RMM software solution products include?

NMIS

NMIS is one of the world’s most popular network management systems. Manage anything at any scale. Extend NMIS with our modules and increase your performance, awareness and control.

opHA

opHA allows you to boost the performance of applications and deliver high-scale, high-availability environments, including geographical distribution of the system and support for overlapping IP address ranges.

opCharts

Featuring dynamic charting, custom dashboards, and a RESTful API to visualize NMIS data and more, opCharts provides a single pane of glass through which you can view all managed customer equipment. This allows engineers to drill down to a single device in a remote location, while still enabling customers to view their own sites privately and in the moment.

opEvents

opEvents effectively helps to reduce the impact of network faults and failures using proactive event management.

opTrend

opTrend allows you to proactively manage network resources by visually analysing key performance metrics.

Why should I choose Opmantek over a cloud-based SaaS solution?

In recent times, there has been a shift towards Software as a Service (SaaS) and one-size-fits-all cloud-based solutions. However, we have found that our customers require flexibility and bespoke solutions that can grow with each individual business. Disappointed by current SaaS offerings, more and more MSPs are now looking for evolved solutions.

Facilitates scalability

Because you have the control, scalability potential naturally increases, enabling your RMM to grow with your business. The scalability of the software allows your needs to be met in the future, not just at this present moment. In today’s unpredictable business landscape, scalability is essential for success. However, as businesses grow and change, many SaaS providers force their users into unnecessary paid upgrades.

More visibility and control over your network

Opmantek software can be deployed in the cloud or on-premises, but because you retain ownership of the database and have access to the source code at the core of NMIS, you have more control over your managed devices and network data. Data ownership is another key security concern for many companies, and one which Opmantek directly addresses.

Easy to integrate with other services

If you already have multiple different products performing unique functions within your network environment, it is unlikely that you will want to, or be able to, replace them all at once. To make it easy, our RMM software is simple to integrate for a fully cohesive solution. We offer multiple integration options, including REST APIs (HTTP(S)), batch operations, and information exchange via JSON files and CSV forms.

Unmatched automation technology

Our automated network monitoring is above industry standard and allows you to provide the best service possible to clients.

We make it easy for you to increase profitability

You can save money for your MSP, with a solution that grows with and adapts to your business, removing the regular expensive upgrade fees charged by SaaS software providers. As part of the changeover period, we offer a full onboarding service. Your designated team will be there with you along the way, answering your questions and making the transition seamless. Our support services can be easily accessed at any time.

A bespoke solution for your business

If you want to experience an RMM solution that is tailored to your business requirements, you can try it out for yourself with no commitment! Simply submit an Opmantek RMM software demo request to get started.

[USER STORY] Communication Giant gains 24/7 unified view of network health

When businesses scale, there comes a point where creating multiple Network Operations Centers makes business sense. The trouble with creating separate sites is maintaining unified visibility over the entire network. This is the exact problem this user story looks at; the company also had aggressive expansion plans, so scalability became a factor too.

Book a Demo