[Case Study] Nextlink Takes Network Management to the Next Level with Opmantek

This case study covers how we take organizations from reactive to proactive, giving them the ability to scale and to gain more insight into the projects they’re working on. It details the actionable guidance that strengthened internet service provider Nextlink’s network management capabilities and took them to the next level. Download the asset below.

Key Points:

Who

Nextlink Internet is an internet service provider delivering high-speed internet and voice services throughout Texas, Oklahoma, Kansas and Nebraska. Since 2012, Nextlink has become one of the largest wireless internet service providers in the nation. 

Challenge

Nextlink wanted to improve the stability of its automated provisioning system as well as its network. Nextlink’s previous monitoring system would send an alarm if one of the previously configured rules was triggered, but it didn’t provide the solution that they were looking for.

Situation

“Our fast-paced operations are growing every day, so we need something that can grow with us,” said Jordan Long, Director of Network Operations, Nextlink. “We wanted something that would not just assist with our operations center, but an automated solution that would monitor for issues throughout our platform, automate the troubleshooting tasks and make configuration changes when an alarm was raised. We also wanted a solution that could share data about our operations to drive new projects.”

Get the Case Study

4 Best Practices For Automating Your Network Management

This excerpt comes from a blog originally posted on MSP Insights

Murphy’s Law states: “Anything that can go wrong will go wrong.” Equipment always breaks when you’re on vacation, often when the on-call engineer is as far away as possible, and with little useful information from the network management system (NMS).

 

It’s critical for a network to be available 100% of the time and always performing at 100%. Network management is a core component of IT infrastructure that is put in place to minimize disruptions, ensure high performance, and help businesses avoid security issues. Network architectures and networking products handle the brunt of the work, but management tools and technologies are essential for picking up the slack and allowing the shift from reactive to proactive strategies.

 

Network automation can automate repetitive tasks to improve efficiency and ensure consistency across network teams. Ultimately, automation will improve the mean time to resolve (MTTR) and drive down the total cost of ownership (TCO). Network automation enables staff to gain process and configuration agility while maintaining compliance standards. It will help simplify your network and lower maintenance costs.

 

Save Time And Money With Automation

According to Gartner, “The undisputed number one cause of network outages is human error.” As humans, we all make mistakes, which is why businesses must have comprehensive automation in place. Automation can reduce the likelihood of issues being missed by ensuring consistency and reducing the need for tedious manual configuration. It also can save time, money and improve productivity. The following are four steps organizations can take to build a reliable and agile network through automation.

 

1. Implement Operation Process Automation (OPA)

OPA is about getting the right systems in place to automate repetitive operational tasks to improve efficiency and ensure consistency in operations teams. OPA delivers process automation specifically to IT and network operations teams. As well as emulating actions that network engineers take within a network management system, OPA also can perform advanced maintenance tasks, assist in the interpretation of network data, and communicate effectively with other digital systems to categorize, resolve, and escalate potential network issues. Ultimately, OPA is about improving the MTTR and decreasing the cost of operations.

 

2. Improve Configuration Management

When considering automation solutions to scale your business, a critical variable to consider is time saved through automation compared to the amount of time tasks take if performed manually. A significant amount of administration time is consumed managing configurations and firmware updates, which could be better spent on proactive tasks. Organizations looking to become more efficient should consider an automated network management tool that integrates configuration management to reduce the risk of human errors and enable easier implementation of network-wide changes. This concept is not new, and it is the fundamental basis of making impactful decisions on how your organization can scale.

 

3. Single View Multi-Vendor Support

Most networks are composed of elements from multiple manufacturers. This can create challenges when overseeing the elements of each management system. A better, more efficient approach is to find and deploy management tools that offer true multi-vendor support. This will reduce the number of tools needed for day-to-day tasks and eliminate the need for learning and maintaining multiple management tools, which will improve operational responsiveness and efficiency.

 

4. Policy-Based Management Systems

Many common network administration activities should be handled by the network management system automatically. These systems should not require repeated configuration but be configured through a policy that captures the business rules and ensures that devices are handled consistently. Automated device discovery and classification is another important aspect, automatically determining what the device is, what to monitor, and what type of alerts and events will be generated, all without human intervention.
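As a minimal illustration of the policy idea, here is a Python sketch. Everything in it (policy names, device keywords, metric lists) is hypothetical, not Opmantek's implementation: classification rules decide what a discovered device is and which metrics and alerts apply, with no per-device configuration.

```python
# Hypothetical policy table: each rule captures business decisions about
# what to monitor and what to alert on for a class of device.
POLICIES = [
    {"match": "router", "monitor": ["cpu", "memory", "route_count"],
     "alert_on": ["interface_down", "high_cpu"]},
    {"match": "switch", "monitor": ["cpu", "port_errors"],
     "alert_on": ["port_flap"]},
    {"match": "server", "monitor": ["cpu", "disk", "services"],
     "alert_on": ["service_stopped"]},
]

def classify(device_description):
    """Return the first policy whose keyword appears in the device's
    self-description (e.g. an SNMP sysDescr string)."""
    desc = device_description.lower()
    for policy in POLICIES:
        if policy["match"] in desc:
            return policy
    return None

policy = classify("Cisco IOS Router, Version 15.2")
print(policy["monitor"])   # ['cpu', 'memory', 'route_count']
```

A newly discovered device is matched against the policy table once, and from then on its monitoring and alerting follow the policy, so adding a hundred routers requires no additional configuration.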

 

Combining People And Process Automation

According to Forrester, 56% of global infrastructure technology decision-makers have implemented, are implementing, or are expanding or upgrading their implementation of automation software. It’s important to note that automation does not mean the replacement of individuals. Instead, it can benefit IT workers by transferring routine and tedious elements of managing networks to machine learning models that reduce the noise from the vast number of alerts and notifications. For organizations that are looking to scale, a combination of people and process automation will yield the best results. Book a demo with our experts to learn more.

Book a Demo

How To Leave Work At 5 PM: Visibility, Event Management & Automation

This excerpt comes from a blog originally posted on Packetpushers.net

As organizations manage increasingly interdependent network infrastructure in an increasingly chaotic world, how can you, as a Network Operations professional, maintain control of your network without losing control of your time?

The answers are: network visibility, flexible event management, and powerful automation. All of this is possible within Opmantek’s network management platform. The software streamlines workflows and lets network engineers and operators accomplish more work with fewer distractions, allowing them to go home on time.

The Importance Of Visibility

We often hear from network engineers that they don’t know what devices are on the network or where they’re located. This lack of visibility introduces security risks and increases Mean Time To Recovery (MTTR). The ability to see as much of the network as possible on a single dashboard allows for fast response times when you and your team need them most.

The robust network visualization tools built into Opmantek’s opCharts and opEvents give you the ability to see a network and react in real-time to precisely what’s happening with confidence. That’s essential for daily operations and in emergencies. For example, did you know that storm-related outages cost the U.S. economy up to $55 billion every year? When a major storm like Hurricane Sandy blasts through your infrastructure overnight, you’ll be able to identify the points of failure and…READ ON.

Book a Demo

Packet Pushers: Detect, Diagnose, And Act Podcast


Podcast: Download (46.2MB)
Keith Sinclair, CTO and progenitor of NMIS, joins Greg Ferro on Packet Pushers.

They discuss:

  • What NMIS does and how it works
  • Protocol support including SNMP, WMI, SSH, RESTful APIs, and more
  • The persistence of SNMP
  • Opmantek’s approach of detect, diagnose, and act
  • Automation capabilities
  • How NMIS uses dashboards, portals, and maps

Why We Need a Dynamic Baseline and Thresholding Tool

With the introduction of opCharts v4.2.5, richer and more meaningful data can be used in decision making. “Forewarned is forearmed,” the proverb goes; a quick Google search tells me it means “prior knowledge of possible dangers or problems gives one a tactical advantage.” The reason we want to baseline and threshold our data is so that we can receive alerts forewarning us of issues in our environment, and act to resolve smaller issues before they become bigger ones. Being proactive increases our Mean Time Between Failures. If you are interested in accessing the Dynamic Baseline and Thresholding Tool, please Contact Us.

Types of Metrics

When analysing time series data you quickly start to identify a common trend in what you are seeing. Some metrics you are monitoring will be “stable”, that is, they will have very repeatable patterns and change in a similar way over time, while other metrics will be more chaotic, with a discernible pattern difficult to identify. Take for example two metrics: response time and route number (the number of routes in the routing table). You can see from the charts below that the response time is more chaotic, with some pattern but really little stability, while the route number metric is solid and unwavering.
[Chart: meatball response time]
[Chart: meatball route number]

Comparing Metrics with Themselves

This router, meatball, is a small office router with little variation in its routing. A WAN distribution router would also be generally stable, but it would have a little more variability. How could I get an alarm from either of these without configuring some complex static thresholds?

The answer is to baseline the metric as it is and compare your current value against the baseline. This method is very useful for values which differ widely between devices but where you want to know when the metric changes. Examples are route number, number of users logged in, number of processes running on Linux, and response time in general, but especially the response time of a service.

The opCharts Dynamic Baseline and Threshold Tool

Overall, this is what opTrend does. The sophisticated statistical model it builds is very powerful and helps spot these trends with the baseline tool. We have extended opTrend with some additional functionality so that you can quickly get alerts from metrics which are important to you.

What is really key here is that the baseline tool will detect downward changes as well as upward changes, so if your traffic was reducing outside the baseline you would be alerted.

Establishing a Dynamic Baseline

Current Value

Firstly I want to calculate my current value. I could use the last value collected, but depending on the stability of the metric this might cause false positives. As NMIS has always supported, using a larger threshold period when calculating the current value can produce more relevant results.

For very stable metrics, using a small threshold period is no problem, but for wilder values a longer period is advised. For response time alerting, using a threshold period of 15 minutes or greater would be a good idea; that way an alert means there is some sustained issue, not just a one-off internet blip. However, with our route number we might be very happy to use the last value and get warned sooner.
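As a rough illustration of why a longer threshold period helps, here is a minimal Python sketch. The function name and the one-minute sampling interval are assumptions for the example, not the NMIS implementation:

```python
from statistics import mean

def current_value(samples, period_minutes=15, interval_minutes=1):
    """Return the 'current' value as the average over the most recent
    threshold period, damping one-off spikes (illustrative sketch)."""
    window = period_minutes // interval_minutes
    return mean(samples[-window:])

# A brief blip in otherwise steady response times barely moves the average:
samples = [20.0] * 14 + [200.0]   # one spike among the last 15 samples
print(current_value(samples))     # 32.0, far below the 200.0 raw spike
```

With a one-sample "period" the spike would be reported at full strength; averaged over 15 minutes it is heavily damped, which is exactly the trade-off described above.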

Multi-Day Baseline

Currently two types of baselines are supported by the baseline tool. The first is what I would call opTrend Lite, which is based on Igor Trubin’s work on SEDS and SEDS Lite. This method calculates the average value for a small window of time, looking back the configured number of weeks. So if my baseline was 1 hour for the last 4 weeks and the time now is 16:40 on 1 June 2020, it would look back and gather the following:

  • Week 1: 15:40 to 16:40 on 25 May 2020
  • Week 2: 15:40 to 16:40 on 18 May 2020
  • Week 3: 15:40 to 16:40 on 11 May 2020
  • Week 4: 15:40 to 16:40 on 4 May 2020

With the average of each of these windows of time calculated, I can now build my baseline and compare my current value against that baseline’s value.
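The window arithmetic above can be sketched in Python. This is an illustration of the idea only, not Opmantek's implementation, and the function names are invented:

```python
from datetime import datetime, timedelta
from statistics import mean

def weekly_windows(now, weeks=4, window_hours=1):
    """The same window at the same time of day, one week apart,
    for each of the past `weeks` weeks (SEDS-lite style sketch)."""
    windows = []
    for w in range(1, weeks + 1):
        end = now - timedelta(weeks=w)
        start = end - timedelta(hours=window_hours)
        windows.append((start, end))
    return windows

def multi_day_baseline(samples, now, weeks=4, window_hours=1):
    """Average each historical window, then average those averages.
    `samples` maps timestamps to metric values (illustrative only)."""
    window_means = []
    for start, end in weekly_windows(now, weeks, window_hours):
        vals = [v for t, v in samples.items() if start <= t <= end]
        if vals:
            window_means.append(mean(vals))
    return mean(window_means) if window_means else None

now = datetime(2020, 6, 1, 16, 40)
for start, end in weekly_windows(now):
    print(start.strftime("%H:%M %d %b %Y"), "to",
          end.strftime("%H:%M %d %b %Y"))
# 15:40 25 May 2020 to 16:40 25 May 2020
# ... and so on, one week apart, back to 4 May 2020
```

The printed windows match the four weekly windows listed above, and the baseline is simply the mean of the four window averages.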

Same-Day Baseline

Depending on the stability of the metric, it might be preferable to use only data from the same day. For example, if you had a rising and falling value, it might be preferable to use just the last 4 to 8 hours of the day for your baseline. Take this interface traffic as an example: the input rate varies, while the output rate is stable, shows a sudden plateau, and is then stable again.

[Chart: asgard bits per second]

If this was a weekly pattern, the multi-day baseline would be a better option. But if this happens more randomly, using the same-day baseline would generate an initial event on the increase, then the event would clear as the ~8Mbps became normal, and when the value dropped again another alert would be generated.
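A minimal Python sketch of a same-day baseline, assuming evenly spaced samples; the function name and the 5-minute interval are illustrative, not the shipped tool:

```python
from statistics import mean

def same_day_baseline(samples, hours=8, interval_minutes=5):
    """Baseline from only the most recent part of the same day:
    the average of the last `hours` of samples (illustrative sketch)."""
    window = (hours * 60) // interval_minutes
    return mean(samples[-window:])

# Traffic steps from 2 Mbps up to an 8 Mbps plateau. Right after the
# step the same-day baseline still reflects 2 Mbps, so the change
# stands out; once the plateau fills the window, 8 Mbps reads as normal.
traffic = [2.0] * 96            # 8 hours of 5-minute samples at 2 Mbps
print(same_day_baseline(traffic))   # 2.0
traffic += [8.0] * 96           # 8 hours at the new plateau
print(same_day_baseline(traffic))   # 8.0 - the new level is now "normal"
```

This is why the event clears once the plateau has persisted: the rolling same-day window absorbs the new level, and only the next change stands out.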

Delta Baseline

The delta baseline is only concerned with the amount of change relative to the baseline. For example, from a sample of data from the last 4 hours we might see that the average of a metric is 100. We then take the current value, for example the spike of 145 below, and calculate the change as a percentage: a change of 45%, resulting in a Critical event level.

[Chart: amor number of processes]

The delta baseline configuration then allows you to define the event level based on the percentage of change. The table below is a way to visualize the default configuration:

  • 10 – Warning
  • 20 – Minor
  • 30 – Major
  • 40 – Critical
  • 50 – Fatal

If the change is below 10% the level will be Normal; between 10% and 20% it will be Warning; and so on, up to over 50%, which is considered Fatal.

In practice this spike was brief, and using the 15-minute threshold period (the current value is the average of the last 15 minutes), the value for calculating change would be 136 and the resulting change 36%, so a Major event. The threshold period dampens the spikes, removing brief changes and allowing you to see changes which last longer.
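Putting the pieces together, here is a minimal Python sketch of the delta-baseline classification. The severity table uses the defaults listed above; the function name is invented and this is not the shipped implementation:

```python
# Hypothetical default change-to-severity table, highest threshold first.
LEVELS = [(50, "Fatal"), (40, "Critical"), (30, "Major"),
          (20, "Minor"), (10, "Warning")]

def delta_event_level(baseline_avg, current):
    """Classify the percentage change of `current` against the
    baseline average (sketch of the delta baseline idea)."""
    change = abs(current - baseline_avg) / baseline_avg * 100
    for threshold, level in LEVELS:
        if change >= threshold:
            return change, level
    return change, "Normal"

# Raw spike of 145 against a 4-hour average of 100 -> 45% change:
print(delta_event_level(100, 145))   # (45.0, 'Critical')
# Damped by a 15-minute threshold period the value is 136 -> 36%:
print(delta_event_level(100, 136))   # (36.0, 'Major')
```

Note that `abs()` makes this symmetric: a drop of 36% below the baseline classifies exactly like a rise of 36%, which is the downward-change detection described earlier.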

Installing the Baseline Tool

Copy the file to the server and run the following; upgrading uses the same process.

tar xvf Baseline-X.Y.tgz
cd Baseline/
sudo ./install_baseline.sh

Working with the Dynamic Baseline and Thresholding Tool

The Dynamic Baseline and Threshold Tool includes various configuration options so that you can tune the algorithm to learn differently depending on the metric being used. The tool comes with several metrics already configured. It is a requirement of the system that stats modelling is completed for the metric you require to be baselined; this is how the NMIS API extracts statistical information from the performance database.

Conclusion

For more information about the installation and configuration steps required to implement opCharts’ Dynamic Baseline and Thresholding Tool, it is all detailed in our documentation – here.

How To Thrive In A Post-Covid Era: 10 Predictions For Enterprise Network Infrastructures

An enterprise network serves as the foundation for reliably connecting users, devices and applications, providing access to data across local area networks and the cloud, as well as delivering crucial insight into analytics.

But in the wake of a year that was no doubt shaped by COVID-19 and the disruption it brought to the industry, how have enterprise networks been impacted, and what are the requirements moving forward?

What were previously technology nice-to-haves and future infrastructure intentions, are now swiftly becoming business imperatives.

In this blog, we’ll explore our top 10 predictions for network infrastructure in 2021.

 

1.   Cloud Application Delivery

 

The traditional office-based model has no doubt permanently changed, and the flexible working arrangements brought forward by the pandemic will continue. A Boston Consulting Group study from last year found that 63% of employees want a hybrid model whereby they continue to work from home part of the time.

Organizations will further turn to the cloud for application delivery, placing an investment in remote connectivity and new security functionality.

 

2.   Businesses Turn to Big Data and Analytics

 

The requirement for businesses to be agile, change and adapt is more prevalent than ever, and decision-makers need to identify trends and ultimately stay ahead of the curve through outcomes-based strategies.

Big data is becoming an essential tool in every organization’s arsenal, though it is of little value without the appropriate means to disseminate and analyse it.

We predict this will drive the recruitment of data professionals and, further, the simplification of data management through self-service tools accessible to non-data professionals.

“It’s really about democratizing analytics. It is really about getting insight in a fraction of the time with less skill than is possible today.” – Rita Sallam, vice president and analyst at Gartner.

 

3.   The Year of Mass Adoption for Cognitive / Artificial Intelligence

 

With big data comes big responsibility and, moreover, big processing requirements, which is where AI will be heavily recruited.

2021 will be the year of mass adoption for AI, as businesses of all levels have experienced a paradigm shift into a digital-first model. Corporate networks have been tested through remote working arrangements, uncovering major reliability issues and security threats. IT leaders are looking for set-and-forget solutions that automatically provide optimization and security, which is where software such as Opmantek’s NMIS, opEvents, opConfig and Open-AudIT can assist.

“Opmantek software is a key system used by IT operations teams across all industries — it acts as the dashboard of a car and tells them how fast everything is going and lets them know when something is faulty. It even predicts future faults, and that’s a big part of the AI. The longer you run our software, the smarter it gets — it learns about your IT infrastructure and starts to automatically manage it better and deliver better information to the IT operations team,” said Danny Maher, Chairman of Opmantek.

 

4.   Hybrid Clouds in High Demand

 

Agility, speed, security, scalability and compliance are all considerations for IT decision-makers.

Though there’s never a one-size-fits-all solution for every business use case, so the demand for hybrid cloud environments will continue to grow. The traditional model of cloud providers is that of a one-stop shop. However, we predict that as demand grows, cloud market leaders will introduce greater interoperability and further allow users to introduce cloud tools across their existing on-campus networks. Collaboration between cloud providers may even be on the cards as users demand greater flexibility.

 

5.   Networking Virtualization

 

Network virtualization offers many benefits by automating and simplifying processes, including network configuration flexibility, improved control over segmentation, speed, increased security and cost savings.

According to research by Spiceworks, 30% of businesses currently use network virtualization technology — and an additional 14% plan to use it within the next 2 years.

 

6.   Unified Communication And Collaboration Tools Are Here To Stay

 

End-user adoption is often one of the greatest barriers for IT professionals looking to implement new software. However, seemingly overnight, employees were catapulted into a reality where unified communications as a service (UCaaS) was no longer just an occasional collaboration tool, but rather a necessity of employment.

We have changed our habits and the way in which we do business. Even as the workforce begins to transition back to office or hybrid office/work-from-home environments, there’s no doubt that UCaaS is here to stay. Providers will introduce new functionality and continue to diversify their offerings to accommodate hybrid working in 2021.

 

7.   WiFi Gets an Upgrade

 

Businesses and consumers alike want things faster, easier and more efficient, and WiFi is no exception. Enter WiFi 6e.

6e not only offers new airwaves for routers to use, it also avoids overlapping signals.

One of the major benefits of 6e is a reduction in network congestion, specifically in areas where users are closely spaced.  As the pandemic continues to unfold, rush hour and crowded spaces are less of an issue, so it may be a waiting game as to when in 2021 we realise 6e’s true potential.

 

8.   IoT (Internet of Things) – More than just Alexa

 

As digital transformation is on the rise, so is IoT and its use cases. A SecurityToday article forecasted that by 2021 there would be 35 billion IoT devices installed worldwide.

IoT is already revolutionizing the way key industries do business; however, healthcare will double down in 2021. Reduced access to face-to-face medical contact has accelerated the need for remote care, and according to Allied Market Research, the global internet of things in healthcare market is expected to reach $332.672 billion by 2027.

 

9.   A Focus on Cybersecurity

 

In light of recent high-profile cybersecurity attacks, which infiltrated private companies and state and federal organizations by inserting malicious code into trusted software, cybersecurity and secure network monitoring will be paramount.

If you have data or services of value, you need to protect them properly. Keith Sinclair, CTO and co-founder of Opmantek, says: “It is critical to business continuity and data security that you have security controls in your environment to mitigate risk.”

 

10.    Infrastructure Management Software Leveraged

 

Application demands are continuing to grow and networks must respond. Network professionals must find means of simplifying these increasingly complex systems and environments. Here’s where automated network management software will be leveraged.

Opmantek Software serves to augment a network engineering or system administration role. As well as emulating actions that network engineers take within a network management system, it can also perform advanced maintenance tasks, assist in the interpretation of network data and communicate effectively with other digital systems in order to categorise, resolve and escalate potential network issues.

 

 

For more information about Opmantek and the services we provide, get in touch. Our network engineers are available to chat through specific issues you may be facing within your own network environment.

Book a Demo