Thursday, November 30, 2017

What Makes an Environment Cloud-Native? Virtualization vs. Cloud Computing


The increasing attraction of virtualization and cloud computing technologies has pushed cloud-native environments further into the spotlight. Here, Michael Rezek, VP of Business Development and Strategic Partnerships at Accedian, explores the what, how, and why of cloud-native.

What is a cloud-native environment?
MR: It starts with virtualization—the first step to cloud computing. Virtualization decouples infrastructure from the underlying physical hardware through the use of software.

Once infrastructure is virtualized, it provides access to shared pools of configurable resources, such as servers, storage, or applications, which are provisioned with minimal management effort. Only after all component infrastructures in a system are virtualized does the environment truly become “cloud native.”

What characteristics must cloud-native environments possess?
MR: Simply having a virtualized infrastructure or virtualized software application does not equate to being cloud-native. According to the National Institute of Standards and Technology (NIST), a cloud-native environment should possess all of the following characteristics (a toy sketch follows this list):
  • On-demand service
  • Broad network access
  • Resource pooling
  • Rapid elasticity
  • Measured service or pay-per-use model
Note that this is a skeleton of the whole idea: on-demand access to a pooled, metered, elastic resource, reachable over the network.

Is ‘cloud-native’ merely a hyped-up concept, or a proven technology?
MR: There is a great deal of hype surrounding cloud computing and virtualization, and by extension cloud-native environments, with many wondering about their legitimacy as proven technologies.

In some ways, cloud-native is in danger of becoming seemingly unattainable since even the smallest dependence on hardware can disqualify an application from being designated as such. However, it’s important to remember that virtualizing functions is a gradual process and won’t happen in one fell swoop.

Cloud-native infrastructures and cloud-native applications are proven technologies, with the former successfully deployed in data centres globally, while the latter are foundational to software-as-a-service (SaaS) models.

Cloud-native applications—or cloud-native virtual network functions (VNFs)—are where it gets a little complicated. While cloud-native VNFs do exist, their successful transition to a virtual network is still being proven. The challenge lies in the apps (VNFs) not necessarily being centralised in a data centre but instead spread across the network’s distributed “web” with numerous endpoints. This compromises the resource-pooling element of a cloud-native environment due to the sometimes limited pools and clusters of resources at remote points of the network edge, or at aggregation points.

What are some benefits of cloud-native environments?

MR: Three benefits in particular stand out, and they’re all focused on networking and computing happening automatically:

  • Auto-provisioning – in a telecom use case, operators can leverage a cloud environment allowing customers to self-serve their applications without the need for additional resources to deploy new services.
  • Auto-scaling – operators do not have to manually pre-provision purpose-built networking equipment, but can instead use software orchestration to automatically spin up and tear down compute resources according to customer demand (a minimal sketch of this logic follows the list).
  • Auto-redundancy – redundancy can be automated by leveraging pools and clusters of compute resources along with a redundancy policy.
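As a rough illustration of the auto-scaling behavior described above, here is a hypothetical Python sketch that sizes an instance count to demand; the function name, capacity figures, and print statements stand in for a real orchestrator's API.

    # Hypothetical auto-scaling loop: the function name, capacity figures, and
    # print statements are illustrative, not any orchestrator's actual API.
    import math


    def autoscale(active_instances: int, demand: float,
                  capacity_per_instance: float, min_instances: int = 1) -> int:
        """Return the instance count needed to serve current demand."""
        needed = max(min_instances, math.ceil(demand / capacity_per_instance))
        if needed > active_instances:
            print(f"spin up {needed - active_instances} instance(s)")
        elif needed < active_instances:
            print(f"tear down {active_instances - needed} instance(s)")
        return needed


    instances = 2
    instances = autoscale(instances, demand=350.0, capacity_per_instance=100.0)  # -> 4
    instances = autoscale(instances, demand=80.0, capacity_per_instance=100.0)   # -> 1
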
What are some challenges with cloud-native environments?

MR: To reap the significant benefits of cloud-native environments, operators must overcome challenges in several key areas:
  • Organizational: Historically, IT designed and managed the compute infrastructure, while network organisations were responsible for designing and operating the network. However, in a cloud-native environment, these two distinct domains must morph into one cohesive unit—leveraging each other’s skills, and cross-training one another to design and operate a holistic virtualized network environment.
  • Financial: Telecom companies have invested billions of dollars in purpose-built networking infrastructure. Migrating to a cloud-based physical infrastructure with software licences will therefore be a costly undertaking. Indeed, the cost may create a barrier for smaller operators wanting to migrate their infrastructures.
  • Network management: Successfully managing a software-based network consisting of numerous VNFs is no easy feat. Operators will need to oversee the integration of open APIs to each VNF for management and control (a hypothetical example follows this list). The complexity of working with different standards in an open environment is especially difficult.
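Purely as a hypothetical illustration of per-VNF management over an open API, the sketch below polls a made-up status endpoint; the path, port, hostnames, and JSON fields are invented and do not correspond to any particular standard or vendor implementation.

    # Hypothetical example only: polling a generic, open management API exposed
    # by each VNF. The endpoint path, port, hostnames, and JSON fields are
    # invented for illustration.
    import json
    from urllib.request import urlopen


    def vnf_status(host: str, port: int = 8080) -> dict:
        """Fetch operational state from a (hypothetical) VNF management endpoint."""
        with urlopen(f"http://{host}:{port}/api/v1/status", timeout=2.0) as response:
            return json.load(response)


    # Example usage against an imaginary inventory (commented out because the
    # hosts below do not exist):
    # for vnf in ("vnf-edge-01", "vnf-core-02"):
    #     print(vnf, vnf_status(vnf).get("state"))
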
How would you sum up the future for cloud-native environments?

MR:
Communication service providers (CSPs) are under mounting pressure to transform their systems and infrastructures to become more agile, able to deliver services at the push of a button. Failure to do so will not only see them struggle to keep up with the competition, but could also lead to their demise. For today’s telco, the answer lies in the cloud, and specifically in cloud-native environments—which, if implemented correctly, can boost network efficiency, reduce expenditure, and enhance quality of experience (QoE) and quality of service (QoS) for subscribers.


Tuesday, November 28, 2017

Report: Analytics Key to QoE for Complex Wireless Networks


Traditional wireless networks are not especially ‘smart’ or efficient, mostly serving to convey as much data as possible, without regard to the importance of the service or app that data is tied to, noted Senza Fili in a recent report on analytics for big data and network complexity. But these networks and the traffic they carry are becoming more complex, so they must also become smarter and more efficient. Such a transformation is possible with analytics.

“Network architectures continue to evolve, with the addition of Wi-Fi access, small cells and DAS, C-RAN, unlicensed access, carrier aggregation, VoLTE, virtualization, edge computing, network slicing, and eventually 5G. Managing networks that grow in size and complexity becomes difficult because there is a need to integrate new elements and technologies into the existing network in order to benefit from the technological advances,” explained Monica Paolini, founder and president of Senza Fili, and the report’s author in collaboration with RCR Wireless.

The solution is putting predictive analytics to work optimizing these networks, using automation paired with machine learning and artificial intelligence to extract and correlate valuable information from many data sources, generating insightful advice or predictions.
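As a greatly simplified stand-in for the machine-learning-driven prediction the report describes, the Python sketch below flags a KPI that drifts well outside its rolling baseline before users notice; the metric name, window size, and three-sigma threshold are assumptions made for illustration.

    # Simplified stand-in for ML-driven prediction: flag a KPI that drifts well
    # outside its rolling baseline. Metric name, window size, and the 3-sigma
    # threshold are illustrative assumptions.
    from collections import deque
    from statistics import mean, stdev


    def drift_alert(samples: deque, new_value: float, sigmas: float = 3.0) -> bool:
        """Return True if new_value deviates sharply from the rolling baseline."""
        if len(samples) >= 30:                       # wait for a stable baseline
            baseline, spread = mean(samples), stdev(samples)
            if spread and abs(new_value - baseline) > sigmas * spread:
                return True                          # keep the outlier out of the baseline
        samples.append(new_value)
        return False


    latency_ms = deque(maxlen=300)                   # rolling window of recent samples
    for sample in (20.1, 19.8, 20.3, 19.9):          # normal readings: no alert raised
        drift_alert(latency_ms, sample)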

“The value analytics brings to optimization comes from expanding the range of data sources and taking a customer-centric, QoE-based approach to optimizing end-to-end network performance,” Paolini concluded. This gives operators the ability to decide “which aspects of the QoE they want to give priority to, and surgically manage resources to do so,” rather than limiting optimization to throughput and selected KPIs like latency or dropped calls.

Focus on QoE
That ability to fine-tune traffic management is very valuable to operators, who are necessarily shifting to a quality of experience (QoE)-based model as demand outstrips capacity in an environment with limited resources.

While operators may not realistically be able to give all users everything they want, Paolini said, they can still greatly improve the user experience with the resources available, in a way that is fairer and better aligned with what subscribers value most: for example, giving the quality of video calls higher priority than the ability to watch videos on YouTube and Netflix.

“Lowering latency across the board may be less effective in raising QoE than lowering it specifically for the applications that require low latency,” she explained. “The average latency may be the same in both cases, but the impact on QoE is different.”
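A small worked example with made-up numbers illustrates the point: two latency policies can share the same average while delivering very different QoE to the one application that actually needs low latency.

    # Made-up numbers only: two policies with the same average latency, where one
    # concentrates the low latency on the application that actually needs it.
    uniform_policy = {"video_call": 60, "web": 60, "bulk_download": 60}    # ms per app
    targeted_policy = {"video_call": 20, "web": 80, "bulk_download": 80}   # ms per app

    for name, policy in (("uniform", uniform_policy), ("targeted", targeted_policy)):
        average = sum(policy.values()) / len(policy)
        print(f"{name}: average latency {average:.0f} ms, "
              f"video_call sees {policy['video_call']} ms")
    # Both policies average 60 ms, but only the targeted one gives the
    # latency-sensitive application the low latency that drives its QoE.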

Other advantages of this approach include the ability to:
  • Avoid over-provisioning parts of the network
  • Decide which KPIs carry the most weight for improving QoE
  • Determine the best way to allocate virtualized resources
  • Find root causes of network anomalies that result in QoE issues
  • Manage security threats
Toward Predictive Analytics
“The ultimate goal of analytics is to become able to predict the imminent emergence of an issue before it causes any disruption to the network,” Paolini stressed. Machine learning and artificial intelligence will make that possible, eventually.

For now, fitting analytics to each operator’s specific requirements involves making tradeoffs, most notably involving time (for how long, and at what time increments, data is collected) and depth (how macro or micro the data is).

“As operators move toward real time and closer to the subscriber, the volume of data that analytics tools have to crunch grows quickly, increasing the processing requirements, and hence the effort and cost,” Paolini pointed out. “But the reward is a more effective optimization.”

It’s more effective because it’s more targeted.

“Congestion or performance/coverage issues are likely to emerge at different places and times, but only in a small portion of the network…” and therefore “optimization has to selectively target these locations and not the entire network. And the lower the time resolution and the more precise the geolocation information, the more powerful the optimization can be,” Paolini concluded.

Adopting Analytics - Drivers
Operators are driven by several factors to adopt customer experience-focused analytics:
  • Cost and Services - Subscribers are more demanding and less willing to spend more. 
  • Usage - Subscribers use wireless networks more, and in new ways, resulting in a richer set of requirements.
  • Technology - 4G now and 5G in future benefit from more extensive and intensive use of analytics. 
A Cultural Shift
For operators, expanding the use of analytics is appealing but not without its challenges. The greatest of those “is likely to come from the cultural shift that analytics requires within the organization,” Paolini said in the report. “The combination of real-time operations and automation within an expanded analytics framework causes a loss of direct control over the network – the type of control that operators still have by manually optimizing the network. Giving up that level of control is necessary because the complexity of networks makes automation unavoidable.”

Yet, still, operators are increasingly committing to analytics because the benefits outweigh the challenges, enabling them to:

  • Improve support for existing services
  • Create new services
  • Customize service offerings
  • Optimize QoE for specific services and applications
  • Understand better what subscribers do, individually and within market segments
  • Implement network utilization and service management strategies that set them apart from competitors
Put another way, end-to-end network efficiency and service provisioning enabled by analytics result in significant financial benefits for an operator, by delivering:
  • Increased utilization of network resources
  • Lower per-valuable-bit cost
  • Lower operational costs
  • Better planning
  • Network slicing and edge computing
  • Better customer service and product offerings
  • Third-party revenues

Tuesday, November 7, 2017

Active Synthetic Network Monitoring: What It is and Why It Matters



When it comes to tracking and optimizing the performance of wireless networks and the services they support, what’s more important: passive monitoring or active (synthetic) monitoring? The short answer is that both play a role. However, given the increasing complexity of modern broadband wireless networks, and the direction in which they are evolving, it’s fair to say that active monitoring plays an increasingly important role. As such, it’s important to understand how it compares to and complements passive monitoring, and why it matters.

Active monitoring simulates the network behavior of end-users and applications, monitoring this activity at regular intervals — as fast as thousands of times a second, if required — to determine metrics like availability or response time. It is a precise, targeted tool for performing real-time troubleshooting and optimization.
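As a minimal sketch of the idea, the Python snippet below times a synthetic TCP connection at a fixed interval; the target, port, and one-second interval are illustrative assumptions, and production active monitoring uses standardized test traffic at far higher rates.

    # Minimal active-monitoring sketch: time a synthetic TCP connection at a
    # fixed interval. Target, port, and interval are illustrative; production
    # probes use standardized test traffic at far higher rates.
    import socket
    import time


    def probe(host: str, port: int, timeout: float = 1.0):
        """Run one synthetic test; return (available, response_time_seconds)."""
        start = time.monotonic()
        try:
            with socket.create_connection((host, port), timeout=timeout):
                return True, time.monotonic() - start
        except OSError:
            return False, None


    for _ in range(5):                      # a few samples; real probes run continuously
        available, response_time = probe("example.com", 443)
        print(f"available={available} response_time={response_time}")
        time.sleep(1.0)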

By contrast, passive monitoring analyzes existing traffic over time and reports on the results. It is best for predictive analysis using large volumes of data, identifying bandwidth abusers, setting traffic and bandwidth usage baselines, and long-term traffic analysis to mitigate security threats.

Why Active?

Cloudification, encryption, decentralization, and SD-WAN have fractured the value of traditional, passive monitoring. As a result, blind spots are spreading across both networks and services, and service providers are losing sight of the majority of their traffic.

Where visibility still exists, it’s insufficient. Compute- and hardware-intensive passive solutions are slow to report, typically taking well over a minute to digest traffic and produce metrics. Yet today’s dominant traffic flows—software-as-a-service (SaaS), web, social media, streaming media, and real-time communications—depend on significant volumes of transient sessions between servers and clients, cloud and apps.

Consider, for example, that 90% of all TCP sessions last less than 3 seconds and consume less than 100 bytes each. It’s not surprising, then, that the majority of network downtime stems from short-term degradation, not sustained outages. Passive monitoring of aggregate traffic, reported every few minutes, misses the vast majority of the short-term events that impact the services subscribers use the most.
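A quick numeric sketch with made-up values shows why: a 30-second latency spike inside a 5-minute reporting window barely moves the passive average, while per-second active samples cross an alert threshold immediately.

    # Made-up values: a 30-second latency spike inside a 5-minute reporting window.
    normal_ms, spike_ms, threshold_ms = 20.0, 200.0, 100.0

    window = [normal_ms] * 270 + [spike_ms] * 30        # 300 one-second samples
    passive_average = sum(window) / len(window)         # one aggregate value per 5 minutes
    active_alerts = sum(1 for sample in window if sample > threshold_ms)

    print(f"passive 5-minute average: {passive_average:.0f} ms (stays below the threshold)")
    print(f"active per-second samples over the threshold: {active_alerts}")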

Active monitoring takes a proactive approach, overcoming these visibility gaps while delivering the enhanced precision and sub-second insight required to monitor and assure dynamic services. Active-synthetic monitoring is a lightweight, ubiquitous, and standards-based approach that faithfully replicates application and network traffic with unrivaled precision and frequency. This creates a constant, controlled source of detailed metrics that accurately characterize quality of service and experience (QoS, QoE).

Active Monitoring Uses

Beyond its broad use as a controlled, targeted, QoE optimization tool, active monitoring is valuable for:


  • Introducing new services. VoLTE, Internet of Things (IoT), SaaS, over-the-top (OTT) and other digital services can be simulated and monitored throughout their service lifecycle. Active testing allows service providers to assess network readiness before deployment, and the impact new services have on other applications when they go live and begin to consume the network.
  • Applying the benefits of virtualization to network and service assurance. Active monitoring is easily virtualized. When surveyed by Heavy Reading, service providers overwhelmingly pointed to active testing, and virtualized agents, as driving their quality assurance efforts and budgets. Passive probe appliances were the most likely to lose budget, after years of consuming significant capital expenditure. Most passive solutions are nearly 500% more expensive than the active solutions enabled by virtualization.
  • Enabling automated, software-defined networking (SDN) control. Active monitoring provides a complete, high definition view of end-to-end performance that service providers can use as real-time feedback for automated control, and with machine learning and artificial intelligence (AI) for root cause analysis, predictive trending, and business and customer journey analytics. Exceptionally granular, precise data with a wide diversity of statistical perspectives means analytics can converge and correlate multi-dimensional events an order of magnitude faster than coarse, passive monitoring data permits.
Breaking this down further, the advantages of active monitoring include:

  • Massive, multinational network monitoring scalability on lightweight, virtualized compute resources
  • Carrier-grade precision that enables undisputed service level agreement (SLA) reporting, independent of traffic load or location
  • Ability to resolve detailed, one-way measurements to microsecond precision
  • Ability to measure performance, QoS, and QoE at any location, physical or virtual, end-to-end
  • Tests can be targeted at known issues, locations or services on demand to accelerate troubleshooting
  • Streaming metrics tailored to machine learning, analytics, artificial intelligence (AI), automated SDN control, and management and orchestration (MANO)
  • Ability to proactively and predictively test services before they come online: VoLTE, IoT, business services, SaaS, and impact of OTT
  • Fully standards-based, and interoperable over multi-vendor, multi-domain infrastructure
  • Eliminates the need for taps, packet brokers or “SPAN” ports
  • Segments networks and services to allow rapid fault isolation, precise trending, and predictive forecasting
  • Proactive, in contrast to passive monitoring, which is always “after the fact”
  • The ability to baseline and threshold using reliable and consistent data
  • Predictive mechanism to facilitate network improvements/adjustments based on subtle changes/symptoms vs. customer complaints
Passive monitoring still plays a role in managing and optimizing wireless networks, and always will. But, the complex nature of these networks today and tomorrow also demands the use of active monitoring for real-time, proactive, automated QoE optimization. Don’t leave home without it!