
Tuesday, December 5, 2017

3 Telecom Use Cases for Virtualization


In many markets, virtualization—transforming separate hardware components into software functions—is gaining momentum and is widely seen as desirable. In the telecom industry, a key driver for this transformation is that it promises to help operators reuse existing assets and become more flexible and agile in markets that are often very competitive. But what virtualization use cases actually make sense for telecom operators? And how can they succeed with these use cases? Here, we look briefly at three possibilities.

1. Prepare for 5G

The foundational concept of 5G is to make it possible to deliver any type of telecom service, anywhere. Virtualization makes that possible by enabling the instantiation of services and applications using software running on commercial off-the-shelf (COTS) compute power (e.g. x86) rather than specialized hardware.

The flexibility virtualization introduces is an important change because existing network infrastructures are very static, involving lots of (often proprietary, expensive) hardware. To evolve toward 5G, networks must first become virtualized: software-driven, so that one physical network can contain multiple ‘software’ networks (aka network slices) in support of a services-based architecture.

Network slicing—splitting a physical network into multiple virtualized networks—is crucial to 5G because it allows resources to be shared in an elastic way, and for services to be spun up ‘on the fly’ with resources aligned to their specific requirements. The overarching goal of network slicing is to create a global ecosystem in which network devices and user endpoints are able to communicate together—in support of providing highly reliable, ultra low-latency, high availability services.
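To make the idea concrete, here is a minimal sketch in Python of how one physical network's resources might be described as service-based slices and how a service request could be matched to a slice that fits it. The slice names and requirement values are hypothetical, for illustration only.

from dataclasses import dataclass
from typing import Optional

@dataclass
class Slice:
    name: str
    max_latency_ms: float      # latency budget the slice guarantees
    min_bandwidth_mbps: float  # bandwidth the slice guarantees
    reliability: float         # e.g. 0.99999 for "five nines"

# Hypothetical slices carved out of the same physical infrastructure.
SLICES = [
    Slice("massive-iot", max_latency_ms=100.0, min_bandwidth_mbps=1.0, reliability=0.999),
    Slice("mobile-broadband", max_latency_ms=30.0, min_bandwidth_mbps=100.0, reliability=0.999),
    Slice("ultra-low-latency", max_latency_ms=1.0, min_bandwidth_mbps=10.0, reliability=0.99999),
]

def pick_slice(latency_ms: float, bandwidth_mbps: float) -> Optional[Slice]:
    """Return the first slice whose guarantees satisfy a service request."""
    for s in SLICES:
        if s.max_latency_ms <= latency_ms and s.min_bandwidth_mbps >= bandwidth_mbps:
            return s
    return None

# A service that needs 5 ms latency and 5 Mb/s lands on the ultra-low-latency slice.
print(pick_slice(latency_ms=5.0, bandwidth_mbps=5.0).name)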

With network slicing included as a key principle, virtualization is likely to be applied first in the core network to address the infrastructure resource layer. Slices created must be service-based to provide a partitioned network on demand. Separating the control and user planes will allow service function chaining.

5G is not only about pushing the envelope of performance capabilities and meeting high demand, but also about delivering a flexible, anything-as-a-service/on-demand service model. Such a model can only work if the system in use is flexible, enables fast time to market with new services, and has low total cost of ownership. In short, this is a whole new ecosystem with its own set of business requirements.

2. Reduce OpEx and CapEx

Traditionally, to prepare for future service demand in telecom networks, operators have defaulted to over-provisioning—installing much more hardware/capacity than they need at first, to ‘future proof’ their investment. But, this is costly and it often takes years for that capacity to actually be put into service. Not exactly ideal, but often the best an operator could do given the conditions they were faced with.

But this is a blunt approach, and is no longer sustainable given the complexity, scale, and rapidly evolving nature of next-generation networks and the services they deliver. Instead, virtualization provides operators the opportunity to efficiently leverage multi-purpose, flexible processing power in an elastic fashion. If the compute power is already installed, and has capacity available, addressing growing demand becomes an incremental change.

Using a software-based approach reduces costs both directly and indirectly, by delivering:

  • Service agility, accelerating time-to-market for new services
  • High scalability, enabling efficient resource dimensioning on an as-needed basis
  • Enhanced user experience, enabling new business opportunities
  • Automated deployment that increases operational efficiency
3. Increase Profitability

Using virtualization to reduce OpEx and CapEx has the advantageous result of increasing profitability—indirectly, through lower expenditures. Given the nature of the telecom market, this is quite important; operators (especially those in established markets) are unlikely to make much headway increasing revenue through acquiring more customers. Likewise, although upselling existing customers with value-added capabilities may have some positive impact on revenue, it is unlikely to be an earth-shattering breakthrough. Running more efficient operations, then, remains key to long-term profitability. Virtualization can get operators there.

For example, virtualization not only minimizes the footprint of hardware deployment needed to support services, but also reduces time-to-market for new services (features can be created and service provisioning can be completed much faster) compared with legacy systems.

With this increase in profitability in mind, many operators are considering some of the following virtualization technologies which they expect will simplify functional deployments and reduce costs through increased agility. (Note that not all of these are mature enough to be deployed as-is, and most require some form of customization.)
  • Virtualized core aggregation, which can be applied to use cases like switching components and regulatory reporting. 
  • Security between physical, service, and NFVI domains, which can be applied to use cases like machine-to-machine access control, firewall provisioning, and unified communication (VoIP/PBX).
  • Access network virtualization for the mobile edge; the first step toward 4.5G and 5G that enables strict, KPI-sensitive mobility services to run closer to the edge. 
  • Service network virtualization (allowing many multi-tenant services and components to run on a single physical network), which can be applied to use cases like data center micro segmentation, on-premises vCPE functions, and big data optimization. 
  • Dynamic interconnects between locations (e.g. data centers and enterprise branches), which can be applied to use cases like bandwidth on-demand, SD-WAN, and dynamic VPN services.
  • NFV to replace physical network functions (appliances, gateways, switches, routers), which can be applied to use cases like SD-WAN, gigabit LAN, and firewall services. 
Accedian’s Role in Virtualization

The development of next-generation telecom networks may seem a bit like the Wild West right now, given the number of organizations and groups involved in standards development, open source, and other aspects of wrangling virtualization into usability. But, ultimately, these networks will be based on standards-compliant technology, and will be built using solutions from multiple vendors. End-to-end visibility into network performance, enabling real-time quality of experience (QoE) optimization, will involve virtual probes used to cope with the changes brought about by ‘softwareization’ and ‘cloudification’ of telecom.

Accedian, in our role as an industry leader for virtualized network performance assurance, is heavily focused on standardization and interoperability. We recognize the importance of communication between orchestrators and other devices, and this is reflected in the solutions we offer.

Our SkyLIGHT performance management—a software-centric solution for continuous monitoring of network performance—leverages Accedian’s patent portfolio to deliver highly accurate, granular, and ultra-scalable measurements and analysis.

Some of the components of the SkyLIGHT solution are:
  • SkyLIGHT management, control, and actuation software (SkyLIGHT Director, SkyLIGHT VCX) allows for centralized control of performance monitoring sessions and SkyLIGHT modules and can be installed on non-proprietary, commercial off-the-shelf (COTS) hardware.
  • SkyLIGHT modules (small form factor/SFP FPGA-based units that provide optional hardware assist for up-featuring customer equipment to the latest standards, ultra-accuracy, and advanced features) managed centrally by the VCX virtual machines located in a cloud data center. 
  • FlowMETER and FlowBROKER microservice agents within the SkyLIGHT platform, which provide existing analytics tools with statistics and sub-second user traffic details needed to debug links in real-time. 
Only SkyLIGHT performance monitoring offers:
  • One-way metrics without external synchronization.
  • A fully virtualized monitoring solution with NFV performance monitoring (NFV-PM) capabilities at full network scale.
  • 5G-grade, future-proofed precision.
  • Industry-leading granularity to capture short term impairments (5 sec vs. 15 min reporting).
  • More than 50 measurements and KPIs to provide the widest range of perspectives to detect and isolate elusive performance impairments.
  • Scalability to assure hundreds of millions of subscribers, report billions of metrics a day, deployed within weeks.

Thursday, November 30, 2017

What Makes an Environment Cloud-Native? Virtualization vs. Cloud Computing


The increasing attraction of virtualization and cloud computing technologies has pushed cloud-native environments further into the spotlight. Here, Michael Rezek, VP of Business Development and Strategic Partnerships at Accedian, explores the what, how and why of cloud-native.

What is a cloud-native environment?
MR: It starts with virtualization—the first step to cloud computing, which separates infrastructures through the use of software.

Once infrastructure is virtualized, it provides access to shared pools of configurable resources, such as servers, storage, or applications, which are provisioned with minimal management effort. Only after all component infrastructures in a system are virtualized does the environment truly become “cloud native.”

What characteristics must cloud-native environments possess?
MR: Simply having a virtualized infrastructure or virtualized software application does not equate to being cloud-native. According to the National Institute of Standards and Technology (NIST), a cloud-native environment should possess all of the following characteristics:
  • On-demand self-service
  • Broad network access
  • Resource pooling
  • Rapid elasticity
  • Measured service or pay-per-use model

Is ‘cloud-native’ merely a hyped-up concept, or a proven technology?
MR: There is a great deal of hype surrounding cloud computing and virtualization, and by extension cloud-native environments, with many wondering about their legitimacy as proven technologies.

In some ways, cloud-native is in danger of becoming seemingly unattainable since even the smallest dependence on hardware can disqualify an application from being designated as such. However, it’s important to remember that virtualizing functions is a gradual process and won’t happen in one fell swoop.

Cloud-native infrastructures and cloud-native applications are proven technologies, with the former successfully deployed in data centres globally, while the latter are foundational to software-as-a-service (SaaS) models.

Cloud-native applications—or cloud-native virtual network functions (VNFs)—is where it gets a little complicated. While cloud-native VNFs do exist, their successful transition to a virtual network is still being proven. The challenge lies in the apps (VNFs) not necessarily being centralised in a data centre but instead spread across the network’s distributed “web” with numerous endpoints. This compromises the resource-pooling element of a cloud-native environment due to the sometimes limited pools and clusters of resources at remote points of the network edge, or at aggregation points.

What are some benefits of cloud-native environments?

MR: Three benefits in particular stand out, and they’re all focused on networking and computing happening automatically:

  • Auto-provisioning – in a telecom use case, operators can leverage a cloud environment allowing customers to self-serve their applications without the need for additional resources to deploy new services.
  • Auto-scaling – operators do not have to pre-provision purpose-built networking equipment manually, but can instead use software orchestration to automatically spin up and tear down compute resources, according to customer demand (see the policy sketch after this list).
  • Auto-redundancy – redundancy can be automated by leveraging pools and clusters of compute resources along with a redundancy policy.
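As a concrete illustration of the auto-scaling point above, here is a minimal policy sketch in Python. The thresholds, instance counts, and function names are assumptions for illustration, not an Accedian or operator implementation.

SCALE_UP_THRESHOLD = 0.80    # add capacity above 80% utilization
SCALE_DOWN_THRESHOLD = 0.30  # release capacity below 30% utilization
MIN_INSTANCES = 2            # keep a redundancy floor

def desired_instances(current: int, utilization: float) -> int:
    """Return the target instance count for one evaluation cycle."""
    if utilization > SCALE_UP_THRESHOLD:
        return current + 1
    if utilization < SCALE_DOWN_THRESHOLD and current > MIN_INSTANCES:
        return current - 1
    return current

# Example: a pool running hot scales out; an idle pool scales in,
# but never below the redundancy floor.
print(desired_instances(4, 0.85))  # -> 5
print(desired_instances(4, 0.20))  # -> 3
print(desired_instances(2, 0.10))  # -> 2 (floor)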
What are some challenges with cloud-native environments?

MR: To reap the significant benefits of cloud-native environments, operators must overcome challenges in several key areas:
  • Organizational: Historically, IT designed and managed the compute infrastructure, while network organisations were responsible for designing and operating the network. However, in a cloud-native environment, these two distinct domains must morph into one cohesive unit—leveraging each other’s skills, and cross-training one another to design and operate a holistic virtualized network environment.
  • Financial: Telecom companies have invested billions of dollars in purpose-built networking infrastructure. Migrating to a cloud-based physical infrastructure with software licences will therefore be a costly undertaking. Indeed, the cost may create a barrier for smaller operators wanting to migrate their infrastructures.
  • Network management: successfully managing a software-based network consisting of numerous VNFs is no easy feat. Operators will need to oversee the integration of open APIs to each VNF for management and control. The complexity of working with different standards in an open environment is especially difficult.
How would you sum up the future for cloud-native environments?

MR:
Communication service providers (CSPs) are under mounting pressure to transform their systems and infrastructures to become more agile, able to deliver services at the push of a button. Failure to do this will not only see them struggle to keep up with the competition, but potentially lead to their demise. For today’s telco, the answer lies in the cloud, and specifically in cloud-native environments—which, if implemented correctly, can boost network efficiency, reduce expenditure, and enhance quality of experience (QoE) and quality of service (QoS) for subscribers.


Tuesday, November 28, 2017

Report: Analytics Key to QoE for Complex Wireless Networks


Traditional wireless networks are not especially ‘smart’ or efficient, mostly serving to convey as much data as possible, without regard to the importance of the service or app that data is tied to, noted Senza Fili in a recent report on analytics for big data and network complexity. But these networks and the traffic they carry are becoming more complex, so they must also become smarter and more efficient. Such a transformation is possible with analytics.

“Network architectures continue to evolve, with the addition of Wi-Fi access, small cells and DAS, C-RAN, unlicensed access, carrier aggregation, VoLTE, virtualization, edge computing, network slicing, and eventually 5G. Managing networks that grow in size and complexity becomes difficult because there is a need to integrate new elements and technologies into the existing network in order to benefit from the technological advances,” explained Monica Paolini, founder and president of Senza Fili, and the report’s author in collaboration with RCR Wireless.

The solution is putting predictive analytics to work optimizing these networks, using automation paired with machine learning and artificial intelligence to extract and correlate valuable information from many data sources, generating insightful advice or predictions.
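As a simple illustration of the idea (not a production machine-learning pipeline, and with assumed KPI values), a predictive tool can learn a baseline from recent KPI history and flag samples that drift well outside it, before users notice:

from statistics import mean, stdev

def is_anomalous(history, sample, sigmas=3.0):
    """Return True when `sample` deviates more than `sigmas` standard
    deviations from the baseline learned from recent history."""
    if len(history) < 10:
        return False          # not enough data to form a baseline
    mu, sd = mean(history), stdev(history)
    return sd > 0 and abs(sample - mu) > sigmas * sd

latency_ms = [10.1, 9.8, 10.3, 10.0, 9.9, 10.2, 10.1, 9.7, 10.0, 10.2]
print(is_anomalous(latency_ms, 10.4))  # -> False, within normal variation
print(is_anomalous(latency_ms, 25.0))  # -> True, flag before users complain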

“The value analytics brings to optimization comes from expanding the range of data sources and taking a customer-centric, QoE-based approach to optimizing end-to-end network performance,” Paolini concluded. This gives operators the ability to decide “which aspects of the QoE they want to give priority to, and surgically manage resources to do so,” rather than limiting optimization to throughput and selected KPIs like latency or dropped calls.

Focus on QoE
That ability to fine-tune traffic management is very valuable to operators, who are shifting by necessity to a quality of experience (QoE)-based model as demand outstrips capacity in an environment with limited resources.

While operators may not be able to realistically give all users everything they want, Paolini said, they can still greatly improve the user experience using the resources available, in a way that’s fairer and better aligned with what subscribers value most--for example, the quality of video calls taking higher priority than the ability to watch videos on YouTube and Netflix.

“Lowering latency across the board may be less effective in raising QoE than lowering it specifically for the applications that require low latency,” she explained. “The average latency may be the same in both cases, but the impact on QoE is different.”
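A small worked example makes Paolini's point concrete. The latency figures and application requirements below are hypothetical: two allocation policies produce the same average latency, but only the targeted one keeps every application within its own requirement.

# Maximum tolerable latency per application, in ms (assumed values).
apps = {"video_call": 50, "video_streaming": 200}

uniform_policy  = {"video_call": 75, "video_streaming": 75}    # lower everything equally
targeted_policy = {"video_call": 40, "video_streaming": 110}   # lower it where it matters

for name, policy in [("uniform", uniform_policy), ("targeted", targeted_policy)]:
    avg = sum(policy.values()) / len(policy)
    ok = all(policy[app] <= limit for app, limit in apps.items())
    print(f"{name}: average latency {avg:.0f} ms, all QoE targets met: {ok}")
# uniform:  average 75 ms, targets met: False (the video call misses its 50 ms budget)
# targeted: average 75 ms, targets met: True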

Other advantages of this approach include the ability to:
  • Avoid over-provisioning parts of the network
  • Decide which KPIs carry the most weight for improving QoE
  • Determine the best way to allocate virtualized resources
  • Find root causes of network anomalies that result in QoE issues
  • Manage security threats
Toward Predictive Analytics
“The ultimate goal of analytics is to become able to predict the imminent emergence of an issue before it causes any disruption to the network,” Paolini stressed. Machine learning and artificial intelligence will make that possible, eventually.

For now, fitting analytics to each operator’s specific requirements involves making tradeoffs, most notably involving time (for how long, and at what time increments, data is collected) and depth (how macro or micro the data is).

“As operators move toward real time and closer to the subscriber, the volume of data that analytics tools have to crunch grows quickly, increasing the processing requirements, and hence the effort and cost,” Paolini pointed out. “But the reward is a more effective optimization.”

It’s more effective because it’s more targeted.

“Congestion or performance/coverage issues are likely to emerge at different places and times, but only in a small portion of the network…” and therefore “optimization has to selectively target these locations and not the entire network. And the lower the time resolution and the more precise the geolocation information, the more powerful the optimization can be,” Paolini concluded.

Adopting Analytics - Drivers
Operators are driven by several factors to adopt customer experience-focused analytics:
  • Cost and Services - Subscribers are more demanding and less willing to spend more. 
  • Usage - Subscribers use wireless networks more, and in new ways, resulting in a richer set of requirements.
  • Technology - 4G now and 5G in future benefit from more extensive and intensive use of analytics. 
A Cultural Shift
For operators, expanding the use of analytics is appealing but not without its challenges. The greatest of those “is likely to come from the cultural shift that analytics requires within the organization,” Paolini said in the report. “The combination of real-time operations and automation within an expanded analytics framework causes a loss of direct control over the network – the type of control that operators still have by manually optimizing the network. Giving up that level of control is necessary because the complexity of networks makes automation unavoidable.”

Yet operators are increasingly committing to analytics because the benefits outweigh the challenges, enabling them to:
  • Improve support for existing services
  • Create new services
  • Customize service offerings
  • Optimize QoE for specific services and applications
  • Understand better what subscribers do, individually and within market segments
  • Implement network utilization and service management strategies that set them apart from competitors
Put another way, end-to-end network efficiency and service provisioning enabled by analytics result in significant financial benefits for an operator, by delivering:
  • Increased utilization of network resources
  • Lower per-valuable-bit cost
  • Lower operational costs
  • Better planning
  • Network slicing and edge computing
  • Better customer service and product offerings
  • Third-party revenues

Tuesday, November 7, 2017

Active Synthetic Network Monitoring: What It is and Why It Matters



When it comes to tracking and optimizing the performance of wireless networks and the services they support, what’s more important: passive monitoring or active (synthetic) monitoring? The short answer is that both play a role. However, given the increasing complexity of modern broadband wireless networks, and the direction in which they are evolving, it’s fair to say that active monitoring plays a more and more important role. As such, it’s important to understand how it compares to and complements passive monitoring, and why it matters.

Active monitoring simulates the network behavior of end-users and applications, monitoring this activity at regular intervals — as fast as thousands of times a second, if required — to determine metrics like availability or response time. It is a precise, targeted tool for performing real-time troubleshooting and optimization.
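As a rough sketch of what an active probe does (illustrative only, not Accedian's implementation; the target URL and probe interval are placeholders), the loop below issues a synthetic request at a fixed interval and records availability and response time:

import time
import urllib.request

TARGET = "https://example.com/health"   # hypothetical endpoint to probe
INTERVAL_S = 1.0                         # probe once per second

def probe(url, timeout=2.0):
    """Return (available, response_time_seconds) for one synthetic request."""
    start = time.monotonic()
    try:
        with urllib.request.urlopen(url, timeout=timeout):
            return True, time.monotonic() - start
    except Exception:
        return False, time.monotonic() - start

if __name__ == "__main__":
    while True:
        ok, rt = probe(TARGET)
        print(f"available={ok} response_time_ms={rt * 1000:.1f}")
        time.sleep(INTERVAL_S)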

By contrast, passive monitoring analyzes existing traffic over time and reports on the results. It is best for predictive analysis using large volumes of data, identifying bandwidth abusers, setting traffic and bandwidth usage baselines, and long-term traffic analysis to mitigate security threats.

Why Active?

Cloudification, encryption, decentralization and SD-WAN have fractured the value of traditional, passive monitoring. As a result, blind spots are consuming both networks and services, with service providers losing sight of the majority of traffic.

Where visibility still exists, it’s insufficient. Compute and hardware-intensive passive solutions are slow to report, typically taking well over a minute to digest and produce metrics. But, today’s dominant traffic flows—software as-a-service (SaaS), web, social media, streaming media, and real-time communications—are dependent on significant volumes of transient sessions between servers and clients, cloud and apps.

Consider, for example, that 90% of all TCP sessions last less than 3 seconds and consume less than 100 bytes each. It’s not surprising, then, that the majority of network downtime is from short term degradation, not sustained outages. Passively monitoring aggregate traffic, reported every few minutes, misses the vast majority of short term events that impact the services subscribers use the most.
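The arithmetic behind this is simple. With assumed numbers, a 3-second latency spike inside a 5-minute reporting window barely moves the average, while a 5-second reporting interval exposes it clearly:

BASELINE_MS, SPIKE_MS = 10.0, 200.0   # assumed steady-state and degraded latency
SPIKE_S, WINDOW_S = 3, 300            # 3-second event, 5-minute reporting window

avg_5min = (BASELINE_MS * (WINDOW_S - SPIKE_S) + SPIKE_MS * SPIKE_S) / WINDOW_S
print(f"5-minute average: {avg_5min:.1f} ms")   # ~11.9 ms -- looks healthy

avg_5s_during_spike = (BASELINE_MS * 2 + SPIKE_MS * 3) / 5
print(f"5-second average during the event: {avg_5s_during_spike:.1f} ms")  # 124 ms -- clearly visible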

Active monitoring takes a proactive approach, overcoming these visibility gaps while delivering the enhanced precision and sub-second insight required to monitor and assure dynamic services. Active-synthetic monitoring is a lightweight, ubiquitous, standards-based approach that faithfully replicates application and network traffic with unrivaled precision and frequency. This creates a constant, controlled source of detailed metrics that accurately characterize quality of service and experience (QoS, QoE).

Active Monitoring Uses

Beyond its broad use as a controlled, targeted, QoE optimization tool, active monitoring is valuable for:


  • Introducing new services. VoLTE, Internet of Things (IoT), SaaS, over the top (OTT) and other digital services can be simulated and monitored, throughout their service lifecycle. Active test allows service providers to assess network readiness before deployment, and the impact new services have on other applications when they go live and begin to consume the network. 
  • Applying the benefits of virtualization to network and service assurance. Active monitoring is easily virtualized. When surveyed by Heavy Reading, service providers overwhelmingly pointed to active testing, and virtualized agents, as driving their quality assurance efforts and budgets. Passive probe appliances were the most likely to lose budget, after years of consuming significant capital expenditure. Most passive solutions are nearly 500% more expensive than the active solutions enabled by virtualization.
  • Enabling automated, software-defined networking (SDN) control. Active monitoring provides a complete, high definition view of end-to-end performance that service providers can use as real-time feedback for automated control, and with machine learning and artificial intelligence (AI) for root cause analysis, predictive trending, and business and customer journey analytics. Exceptionally granular, precise data with a wide diversity of statistical perspectives means analytics can converge and correlate multi-dimensional events an order of magnitude faster than coarse, passive monitoring data permits.
Breaking this down further, the advantages of active monitoring include:

  • Massive, multinational network monitoring scalability on lightweight, virtualized compute resources
  • Carrier-grade precision that enables undisputed service level agreement (SLA) reporting, independent of traffic load or location
  • Ability to resolve detailed, one-way measurements to microsecond precision
  • Ability to measure performance, QoS, and QoE at any location, physical or virtual, end-to-end
  • Tests can be targeted at known issues, locations or services on demand to accelerate troubleshooting
  • Streaming metrics tailored to machine learning, analytics, artificial intelligence (AI), automated SDN control and management and network orchestration (MANO)
  • Ability to proactively and predictively test services before they come online: VoLTE, IoT, business services, SaaS, and impact of OTT
  • Fully standards-based, and interoperable over multi-vendor, multi-domain infrastructure
  • Eliminates the need for taps, packet brokers or “SPAN” ports
  • Segments networks and services to allow rapid fault isolation, precise trending, and predictive forecasting
  • Proactive, in contrast to passive monitoring which is always “after the fact”
  • The ability to baseline and threshold using reliable and consistent data
  • Predictive mechanism to facilitate network improvements/adjustments based on subtle changes/symptoms vs. customer complaints
Passive monitoring still plays a role in managing and optimizing wireless networks, and always will. But, the complex nature of these networks today and tomorrow also demands the use of active monitoring for real-time, proactive, automated QoE optimization. Don’t leave home without it!

Tuesday, October 31, 2017

Accedian at MEF17: Technology Innovation Panel Speaker, Booth


Join us at MEF17 (going on November 13-16, 2017, in Orlando, Florida); our President and CEO, Patrick Ostiguy, is speaking on the November 14, 12:05pm Technology Innovation Panel. We're also exhibiting at booth 205. Here are the details!




Event: MEF17

Event Dates: November 13-16, 2017

Booth: 205

Panel Date: November 14, 2017

Location: Orlando, Florida

Panel: Technology Innovation Panel

Panel Time: 12:05pm

Panelist: Patrick Ostiguy, President & CEO




Monday, October 30, 2017

Real-time Network Monitoring: What Progress Have We Made Globally?



According to GSMA Intelligence, the number of mobile subscribers in the Asia Pacific region will rise from 2.7 billion at the end of 2016 to 3.1 billion by 2020. Countries driving this rapid growth are China and India.

This rapid increase in the number of mobile subscribers stresses the need for real-time network monitoring. As today’s mobile network operators (MNOs) increasingly rely on performance to differentiate themselves in an already crowded market, quality of service (QoS) and quality of experience (QoE) have never been so important.

Efficient LTE networks will become a major priority for mobile operators everywhere. Managing complex, high speed, and multipurpose networks is not the only challenge; operators must also prepare for new services such as connected cars and other Internet of Things (IoT) features. At the same time, operators also need to develop and put in place methods and practices that will support their imminent move to 4.5G, and ultimately 5G.

Faced with the task of navigating this complex landscape, it’s essential that mobile operators have a greater level of network visibility to help with management and troubleshooting. Accedian's solutions are helping mobile operators across the globe keep up with the growing demands of their networks. Here are a few examples of how we do it:


  • South Korea’s SK Telecom turned to our SkyLIGHT Performance Platform and Nano smart SFP Modules for a standards-based, network performance assurance solution capable of providing end-to-end QoS and QoE visibility across a sophisticated, multi-vendor mobile network. This enabled SK Telecom to drive its software-defined network (SDN) management system directly with the real-time metrics from our solution, automating network configuration and optimization to deliver the best possible experience to its customers.
  • India, one of the main drivers behind the rapid growth of mobile subscribers in Asia, is in position to capture the benefits of reliable, virtualized, real-time network assurance. Our SkyLIGHT Platform and Reliance's data analytics Jio Coverage Platform were used together to optimize network quality and user experience in real-time for approximately 100 million mobile subscribers in India. Reliance Jio used our SkyLIGHT solution to provide near complete visibility of its network, making it one of the best examples worldwide of total network visibility with zero hardware dependency.
  • Boasting a customer base of more than 341 million subscribers, Telefónica required a comprehensive performance assurance solution covering its global footprint, with ubiquitous coverage to localize issues, plan network upgrades, and optimize performance, among other needs. Real-time metrics covering network QoS, as well as voice and video QoE, for trending, alerts, and reporting had to be considered in order to address its highly asymmetrical network. Fast, easy instrumentation deployment with a virtualized, centralized control platform was also required. In the end, our solution, comprising the SkyLIGHT performance assurance platform, Nano smart SFP Modules, and 1GbE and 10GbE Network Performance Elements, delivered the best possible QoS and reliability for Telefónica’s customers.
In North America and Europe, where LTE is more established, operators are already moving toward virtualization generally--including virtualized instrumentation, which is less expensive to deploy and use than traditional, hardware-based methods.

Areas like Asia and Latin America are investing much more in performance monitoring and assurance for greenfield LTE deployments. However, in most of the developed and western world, 3G and earlier technologies continue to play a key role and so the need to solve significant and basic issues, like large-scale outages, is still present. The reality is that Asian and Latin American operators often have higher performance networks than their counterparts in Europe.

With the market becoming more digitized, active (synthetic) monitoring is starting to show its value. Active tests allow service providers to assess network readiness before deployment, avoiding the impact of new services on existing applications when they go live and begin to consume the network.

With this comprehensive view of end-to-end performance, service providers can use the detailed “network state” as real-time feedback to automate SDN control, and apply machine learning and artificial intelligence (AI) for root cause analysis, predictive trending, and business and customer journey analytics. Exceptionally granular, precise data with a wide diversity of statistical perspectives means analytics can converge and correlate multi-dimensional events an order of magnitude faster than coarse, passive monitoring data permits.





Tuesday, October 24, 2017

6 Truths About QoE Monitoring for Virtualized Networks


A third of mobile networks are not fully monitored end-to-end. Let that sink in for a minute.

It’s a surprising figure because most operators use three or more vendors for each major function in their radio, backhaul, and core networks. Being able to piece together a consistent monitoring layer end-to-end is necessary to manage quality of experience (QoE) over these discontinuous domains.


A lack of end-to-end visibility isn’t sustainable, especially since network performance is rapidly becoming the key differentiator. As operators seek out and implement ways to close their visibility gap, they are discovering the following six truths about managing QoE over next-generation networks.

1. Best-effort QoE assurance is no longer good enough


While best-effort QoE assurance has been the accepted standard for typical internet applications and services, that’s rapidly changing with the on-demand nature of usage today. Customers are no longer tolerant of services being “okay” rather than “excellent.” Even a bad experience with Netflix or YouTube can impact customers’ perception of network quality and lead to churn.

This truth translates into huge changes in terms of what’s required for real-time service instantiation, as well as service monitoring and assurance. Operators must now be able to perform per-application level quality assurance for a vast number of services and QoE requirements, as well as meet the need for automated processes in virtualized networks.

2. QoE management for virtualized networks is a whole new ballgame


The shift from traditional to virtualized networking significantly impacts how operators manage QoE. Traditional networking was, for the most part, static; thus, understanding the impact of Layer 2 and 3 issues on QoE was relatively straightforward. Virtualized networks, on the other hand, are very dynamic; thus, QoE optimization must be driven in real-time by emerging technologies like machine learning. Also, in such networks, the data plane is often split across various network slices (even for the same service), making it even more complex to understand service delivery.

Specifications for backhaul network metrics—such as throughput, latency, and availability—are becoming more stringent. As a result, traditional monitoring techniques may not be sufficient to see the types of performance issues that result in QoE problems.

In Accedian’s experience, we have seen cases like the one below where an operator was experiencing more than 100,000 call drops, with an obvious negative impact on QoE, even though classic quality of service (QoS) metrics (e.g. packet loss, delay) were well within spec. Only by analyzing millions of Layer 2 and 3 KPIs was a periodic relationship observed that led to discovery of the root cause. Without the ability to manage QoE in this way, the impact of such a situation would have been catastrophic to the operator. 

[Image: Accedian slide from a Heavy Reading webinar]

Given the dynamic nature of virtualized networks, and the inability of traditional tools to assure virtualized services, it’s hardly surprising that operators are less confident in their ability to monitor QoE or the service experience.

3. Understanding the relationship between QoS and QoE is critical

Operators are still more comfortable monitoring QoS than QoE, a holdover from traditional telephony performance monitoring. The problem is, customer satisfaction is largely driven by QoE and not QoS. Being unable to monitor QoE puts operators’ business models at risk. And, as customers come to rely even more heavily on mobile connectivity as a way of life, QoE will become the competitive differentiator.

This is not to say that QoS and QoE are unrelated. Our work with South Korean operator SK Telecom, helping them develop a performance assurance strategy for their LTE-A network, highlighted some interesting relationships between QoS and QoE metrics. For example, it was found that even a small packet loss event—say, 0.1%—could lead to a 5% decrease in service throughput and a 2% loss could lead to as much as an 80% decrease in throughput. 

This type of surprising relationship is not limited to obvious issues like packet loss, either. In another example, it was shown that a 15ms latency increase on a critical path in the network could cause a throughput decrease as much as 50%.

In both cases, the biggest impact is seen in the application-layer protocols managing service delivery, not in the network-layer QoS metrics themselves. Monitoring only the resource and network layers is not enough.
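For intuition about why small loss and latency changes hit throughput so hard, the classic Mathis et al. approximation for TCP throughput (throughput ≈ MSS × 1.22 / (RTT × √loss)) can serve as a back-of-envelope model. This is a textbook formula, not the measurement methodology from the SK Telecom work, and the MSS, RTT, and loss values below are assumed for illustration:

from math import sqrt

def tcp_throughput_mbps(mss_bytes, rtt_s, loss):
    """Mathis approximation of steady-state TCP throughput, in Mb/s."""
    return (mss_bytes * 8 / rtt_s) * (1.22 / sqrt(loss)) / 1e6

base   = tcp_throughput_mbps(1460, 0.030, 0.001)   # 0.1% loss, 30 ms RTT
lossy  = tcp_throughput_mbps(1460, 0.030, 0.02)    # 2% loss, same RTT
slower = tcp_throughput_mbps(1460, 0.045, 0.001)   # same loss, +15 ms RTT

print(f"0.1% loss:  {base:.1f} Mb/s")
print(f"2% loss:    {lossy:.1f} Mb/s ({(1 - lossy / base) * 100:.0f}% lower)")
print(f"+15 ms RTT: {slower:.1f} Mb/s ({(1 - slower / base) * 100:.0f}% lower)")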

Which leads to another inconvenient truth: with next-generation networks, there are simply too many services streaming through tens of thousands of network components to manage QoE manually; automation is required.

4. Next-gen performance assurance is beyond human control

Generating QoS metrics in a traditional network involves large, centralized test suites and network interface devices (NIDs) at the edge. It also requires a lot of engineering, planning, and provisioning work. Imagine that level of activity for a new network with 100,000+ eNodeB sites and 100,000+ aggregation and cell site routers. Clearly, the traditional networking approach won’t scale.

While operators are rolling out software-controlled, programmable networks—using intelligence to govern resource allocation as applications come online—they are still largely managing those networks in a way that’s unsustainably manual. The next step is to do all this in an automated way, using intelligent algorithms for a real-time view of network topology paths.

Automation enables operators to:
  • Relieve hotspots and avoid unnecessary upgrades, by balancing link utilization 
  • Ensure bandwidth is available when it’s needed, for each application
  • Satisfy customer bandwidth requests without negatively impacting other services
  • Determine the best links to place each new workload
5. Active, virtualized probes are the future of QoE assurance

Implementing software-defined networking (SDN) is a priority for mobile operators, and just about every operator now has a plan to virtualize at least part of their network—starting with the mobile core.

The reason for this trend is that existing network architecture is failing to enable the type of innovation operators need to make the foundation of their business strategies, especially as they roll out LTE-A networks and plan for 5G. They find it hard to create new revenue streams, because with proprietary hardware, management, and IT systems it’s difficult—and takes too long—to deploy new services.

But, this shift does require a new approach to QoE management.

In an effectively-managed virtualized network, the test suite becomes a virtual network function (VNF) that can be instantiated as needed, either to address a large geographic area or to scale compute power required for a large number of endpoints.

This setup does still require an endpoint to test toward, but that’s relatively straightforward given that most routers and base stations are now compliant with standardized test protocols such as Two-Way Active Measurement Protocol (TWAMP, RFC-5357). Sometimes there will be reasons to deploy purpose-built endpoint solutions (e.g. smart SFPs) when common test standards are not supported, but in most networks this represents less than 10% of sites. The increasing popularity of x86 toward the mobile edge means that standards-based VNFs can be used as well.
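To show the shape of such a test, here is a toy round-trip measurement toward a UDP echo endpoint, written in Python. It is a sketch of the sender side only, with a placeholder reflector address; it is not a compliant TWAMP (RFC 5357) implementation, and a real deployment would use proper TWAMP senders and reflectors.

import socket
import struct
import time

REFLECTOR = ("192.0.2.10", 5000)   # hypothetical echo endpoint (TEST-NET address)

def round_trip_ms(seq, timeout=1.0):
    """Send one timestamped test packet and return the round-trip time in ms,
    or None if the reply times out (counted against availability)."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.settimeout(timeout)
        t_send = time.monotonic()
        s.sendto(struct.pack("!Id", seq, t_send), REFLECTOR)
        try:
            s.recvfrom(1024)          # a reflector simply echoes the packet back
        except socket.timeout:
            return None
        return (time.monotonic() - t_send) * 1000.0

print(round_trip_ms(seq=1))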

What makes VNF-based performance monitoring even more compelling is that the instrumentation itself is a software solution that can be easily orchestrated in a virtualized network; a QoE solution for a network of 100,000 eNodeBs can be deployed and generating KPIs in weeks, rather than several months, with very little capital required.

For all the reasons discussed so far, and given the economics of virtualized networks, mobile operators are moving away from centralized probe appliances and toward active virtualized probes. 

[Image: Accedian slide from a Heavy Reading webinar]

6. Effective QoE management is possible, with analytics and automation

What is the best use for the potentially billions of KPIs generated daily by end-to-end network monitoring using active, virtualized probes? Big data analytics makes light work of this problem, allowing correlations between layers and events—through predictive trending and root cause determination—to be automated, and displayed in real-time.

Since the next generation of mobile networks requires automation to run optimally, having centralized network performance and QoE metrics in the operator’s big data infrastructure provides the necessary feedback for SDN control to make network configuration decisions that deliver the best experience to each customer, at any time.

That’s the ultimate goal for LTE-A and 5G: breathtaking performance over a highly dynamic, virtualized network, without humans getting in the way.



Tuesday, October 10, 2017

Virtualized Networks: What About Performance Assurance?


Around the globe, pretty much every telecom service provider is either running a network functions virtualization (NFV) proof-of-concept, or has already virtualized some areas of their network. In doing so, a major concern they have is how to effectively perform test and measurement (T&M) of these dynamic network environments which, if not properly assured, threaten carriers’ main competitive advantage: network reliability. Here, Accedian’s VP of International Sales, David Dial, explains what it takes to fully assure virtual networks, and how our solutions uniquely address this need.

What’s required to fully assure performance and user experience in a virtualized environment?


DD: Network performance assurance for virtualized networks goes far beyond traditional T&M. Essentially, what’s needed is an advanced operations support system (OSS) designed for an IP world, capable of providing real-time performance information at a very granular level to meet the challenges of quality of service (QoS) assurance in complex network environments.

That’s what Accedian provides in our software-based SkyLIGHT solution that covers Layer 2 and Layer 3 performance monitoring, service activation testing (SAT), bandwidth utilization monitoring, and distributed packet capture information—unified in a single platform for multi-service network environments.

How does the convergence of mobile and fixed networks affect performance assurance?


DD: As networks converge to serve wireless, business, and residential customers—and performance parameters for IP-based services become less tolerant of faults—QoS assurance is becoming increasingly challenging. It’s not viable to continue spending significant CapEx and OpEx on costly handheld test gear to try to identify network issues during scheduled maintenance windows. Nor is it cost-effective to continue placing hardware in a network that is increasingly distributed and multi-vendor.

The pressure on service providers to provide a high quality customer experience is growing, and network quality is becoming a differentiator in ‘quality wars’ heating up around the globe, as well as a requirement to realize objectives for costly LTE investments.

The use of NFV need not be synonymous with loss of quality management. Accedian’s virtualized performance monitoring and assurance solution is typically placed in a service provider’s data center environment to provide microsecond-accuracy for TWAMP sessions created for different classes of service as well as different packet sizes. Further enhancing this setup, our “smart SFP” modules can originate test sessions in a distributed fashion and support a full or partial mesh quality testing environment depending on the operator’s QoS requirements.
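As an illustration of what "sessions per class of service and packet size" can look like in practice, the sketch below enumerates a test-session matrix. The class names, DSCP markings, and frame sizes are assumptions for illustration, not the configuration SkyLIGHT uses.

from itertools import product

COS_DSCP = {"voice": 46, "video": 34, "best_effort": 0}   # illustrative DSCP values
PACKET_SIZES = [64, 512, 1400]                             # test frame sizes, bytes

sessions = [
    {"cos": cos, "dscp": dscp, "size": size}
    for (cos, dscp), size in product(COS_DSCP.items(), PACKET_SIZES)
]
print(len(sessions), "sessions")   # 3 classes x 3 sizes = 9 sessions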

The SkyLIGHT solution is deployed at scale by major Tier 1 operators globally, with several of them planning to move to a completely virtualized version of the platform.

What else makes Accedian’s approach to virtualized performance assurance unique?


DD: In head-to-head competition with traditional handheld test sets, as well as hardware-based solutions, SkyLIGHT has been proven by major Tier 1 operators to be superior in multi-service, NFV network environments—no degradation in performance metric accuracy!

In short, SkyLIGHT provides real-time network state visibility for NFV environments. It serves as the virtualized instrumentation feedback loop required for software-defined networking (SDN) control and NFV orchestration based on OSS policy.


Tuesday, October 3, 2017

Cable MSOs Leverage DOCSIS 3.1 and NFV for Performance-Assured Business Services


With the advent of DOCSIS 3.1 deployments starting to accelerate in the U.S. cable MSO market, the significance it brings to operators like Comcast and Cox—making it possible for them to deliver gigabit broadband using existing HFC infrastructures—is enormous. MSOs are leveraging the technology as a means of expanding up-market to deliver sophisticated business services to enterprises as well as enhancing their residential services.

DOCSIS 3.1 is “cheaper to deploy than all-fiber networks because it makes use of legacy infrastructure, and the technology vastly expands cable broadband capacity, making it easier to introduce new gigabit-speed services,” explained Light Reading Senior Editor Mari Silbey in an article outlining the effects of the technology over the past few years.

Further, a lot of what’s possible with DOCSIS 3.1 (more capacity and speed, expanded capabilities for managing bandwidth and delivering higher-bandwidth services) feeds into the virtualization trend that has come to the forefront of the MSOs’ technical arsenal. This ties directly into the need for sophisticated performance assurance required to meet commercial service level agreements (SLAs) and retain residential customers.

DOCSIS 3.1 and Service Assurance

Competing with telcos in the premium business services market requires MSOs to establish a reliable means of SLA-grade performance assurance. Further, DOCSIS 3.1-based GbE services need to be on par with fiber-based offerings, as most businesses are more concerned about reliability and performance than pure bandwidth. Connectivity to the cloud, for software as-a-service (SaaS) applications, is operationally crucial for enterprises.

Committing to guaranteed uptime, bandwidth availability, and rapid mean time to repair (MTTR) are therefore prerequisites for MSO success in the enterprise market.

The conundrum facing MSOs is that current DOCSIS modems don’t offer integrated performance monitoring, service turn-up testing, operations and maintenance (OAM) demarcation, and other key features needed for business services delivery. These capabilities must somehow be added, in a way that has minimal impact to the cost of deploying such services.

Accedian has been extremely successful in working with the MSO market to use network functions virtualization (NFV) for delivery of network interface device functionality in a small, programmable device—enabling end-to-end OAM visibility, hardware-based demarcation, performance monitoring, troubleshooting, and service turn-up. This type of virtualized instrumentation simplifies and assures the full lifecycle of business services delivered over DOCSIS.

How Accedian Can Help
Accedian’s ant module—a small form factor, FPGA-based device that works directly with network elements to provide advanced performance assurance capabilities, centrally managed using the Accedian SkyLIGHT VCX controller software—fills the gap for MSOs looking to expand their service assurance and delivery capabilities. In addition to standard cable modem/ant module deployments, MSOs are now embracing what can be done by installing a virtualized version (software only) of the ant module onto a universal customer premises equipment (uCPE) x86 box, or by partnering with manufacturers to include the ant module functionality in their cable modems.

The latter option is finally possible because of the potential inclusion of Accedian’s agent in cable modem DOCSIS 3.1 silicon, rather than requiring the addition of ancillary hardware daughterboard(s) or SFP port(s). With the “room” now available in the DOCSIS 3.1 silicon, cable MSOs are encouraging modem manufacturers to revisit the inclusion of an Accedian agent in their hardware.

This opens up a large set of new opportunities in the commercial—and conceivably also residential—cable market. Further, embedding the Accedian agent into the cable modem and/or uCPE dovetails directly into the MSO trend toward adopting SD-WAN for business services delivery.

Beyond performance monitoring metrics like delay, delay variation, and packet loss, the latest generation of ant modules (and the ant agent) introduces advanced assurance capabilities like granular bandwidth utilization metering, remote packet capture, and generation of standards-based testing (e.g. RFC2544 SAT, Y.1731 performance monitoring) between sites/branch locations. These capabilities deliver tremendous OPEX savings for MSOs, as well as the ability to differentiate their products and upsell additional services.



Monday, September 25, 2017

Accedian at INCOMPAS Show 2017: Booth 5


Join us at the Incompas Show, a networking and education event for communications technology professionals. It's happening October 15-17, 2017, at the Marriott Marquis in San Francisco, California. We're exhibiting our virtualized network performance assurance solution, SkyLIGHT, at booth 5. 




How to Become a Digital Service Provider in Four Easy Steps


Disruption in the mobile telecom industry is driving communication service providers (CSPs) to transform themselves into digital service providers (DSP). In embarking on this transformation, CSPs have three main goals in mind:
  1. Maintain and grow the subscriber base by extending lifetime value to customers
  2. Grow revenue through new business opportunities
  3. Drive out excess costs and inefficiencies, through operational excellence
Such a transformation requires agility in several key areas: network (infrastructure), services (including operational support systems/OSS), and customer experience (including business support systems/BSS). We will explore each of these in turn below, but there’s a prerequisite step to consider as well: learning from failure.

1. Learn from Failure

Thus far, digital transformation efforts on the part of CSPs do not have a stellar track record. TM Forum reported that 60-70% of such programs fail and that 54% of CSPs say previous attempts at digital transformation have not succeeded.

A key reason for these failures is lack of ubiquitous, real-time insight into performance and experience. Such insight is possible, however. Rather than be discouraged, CSPs should learn from past mistakes and move forward in a strong position to succeed using what they know now.

2. Transform Infrastructure for Network Agility

Creating a flexible and open infrastructure fosters competition and innovation from a broad vendor ecosystem, and enables the development and deployment of new revenue-generating services. This type of API-driven network, built in a way to enable flexible slicing and interconnect, benefits both the operator and its partners.

3. Transform OSS for Service Agility


Operations support systems (OSS) must evolve to enable rapid, dynamic creation, provisioning, activation, and retirement of services. A service-oriented, dynamic software-defined network (SDN) benefits the CSP through faster time to market for new services, and the ability to react more quickly to market and competitive pressures. It also accelerates time to revenue.

4. Transform BSS for Customer Agility

To ensure subscribers get what they want, when they want it, and even before they realize they want it, business support systems (BSS) must support an end-to-end, customer-centric approach using predictive analytics.


Bringing It All Together

Steps 2-4 above are inter-related aspects of network and IT transformation, and cannot really be addressed in isolation. They all require responsiveness to partners using the network through APIs; the operator’s need to rapidly scale out existing services and deploy new ones; and changes to user behavior, experience, or demand. All of this must be done without compromising the user experience.

Such a tall, complex order is possible—within a dynamically orchestrated system that has visibility into all functions, relationships, and behaviors of the network, its services, and its users. This vantage point provides the feedback needed for a closed-loop automation process between service creation and its management and optimization.


Agility is the key to all methods of understanding service, network, and user behaviors. Several requirements are worth noting here:

  • Lightweight, virtualized, monitoring instrumentation with immediate affinity to services as they are created. 
  • Fail-fast development methods--employed from the first beta--to provide critical feedback into the app experience as agile development adjusts the service until it’s ready for wide-scale deployment. 
  • OSS/SDN architecture that adapts to the network--in ongoing operation mode--to maintain optimal performance and circumvent problems, in context. 
  • At the BSS layer, use of data as input to analytics, as a means of improving customer engagement, experience, and service quality. Result: customer-centric insight drives network provisioning changes and new service availability.
When these requirements are met, DSPs can find new revenue through a variety of opportunities, such as:
  • Mobile edge computing (MEC) as-a-service
  • Infrastructure-as-a-service (IaaS)
  • Distributed edge cloud
  • Slicing
  • Mobile payments
  • API-enabled network and resource consumption
  • IoT across federated mobile/cloud networks
  • Telehealth 
By positioning their network for cloud-type consumption, mobile operators stand to gain—for the first time—from key over-the-top (OTT) applications traversing their network. In facilitating this interaction with ‘subscribers’ (IoT service providers, content and transactional applications, etc.), MNOs become DSPs, turning their unique assets into an enviable “mobile cloud.”



Wednesday, September 13, 2017

Accedian at Network Virtualization & SDN Asia: Panel Speaker, Booth


Join us at Network Virtualization and SDN Asia (going on October 3-4, 2017, in Singapore); our VP Business Development APAC, Jason Roberts, is speaking on the October 4, 9:40am panel, "How Are Operators Effectively Monetizing SDN & NFV Services?" We're also exhibiting at booth NFV26. Here are the details!






Event: Network Virtualization and SDN Asia

Event Dates: October 3-4, 2017

Booth: NFV26

Panel Date: October 4, 2017

Location: Marina Bay Sands, Singapore

Panel: How Are Operators Effectively Monetizing SDN & NFV Services?

Panel Time: 9:40am

Panelist: Jason Roberts, VP Business Development APAC


Tuesday, September 12, 2017

Cable MSOs: How to Succeed With SD-WAN Using Virtualized Service Assurance



Cable MSOs hoping to effectively target enterprise customers must find a way to establish a nationwide service area, which means augmenting their own DOCSIS and Carrier Ethernet infrastructure with third-party access to extend their footprint. Increasingly, MSOs are adopting software-defined WAN (SD-WAN) to reach on- and off-net sites, in a uniform way over any access media. But, there’s a pitfall: SD-WAN appliances don’t offer standards-based test, turn up, and monitoring functions required to offer service level agreement (SLA)-grade services.

Instead, SD-WAN solutions use proprietary monitoring and reporting methods, which don't interoperate with existing network equipment. The problem is that SD-WAN may only be required at certain enterprise customer locations, so any implementation has to interact seamlessly with traditional service delivery methods, which means using standards-based techniques.

How to address this issue?

Virtualized test probes and test reflectors can cost-efficiently replicate network interface device (NID) functionality, bringing the needed turn-up testing, monitoring and operations & maintenance (OAM) functions to SD-WAN endpoints. Virtualized instrumentation uplifts SD-WAN with carrier-grade functionality, making it interoperate with existing network infrastructure, operations procedures, and support systems.

Assuring the SD-WAN Service Lifecycle

The SD-WAN service lifecycle has three main phases, consistent with the MEF’s established model for Carrier Ethernet connectivity:

  1. Deployment: provisioning and service activation testing (SAT)
  2. Performance monitoring and SLA reporting: collecting and presenting key performance metrics
  3. Troubleshooting: techniques to identify, isolate, and troubleshoot service issues





Metro Ethernet Forum Service Lifecycle 


Approaches to Virtualized Performance Assurance

To effectively assure all of these phases, MSOs may choose to use one of two approaches:
  1. Centralized performance monitoring architecture using virtualized performance assurance controller (vPAC) virtual network functions (VNFs) as probe generators, with a lightweight, stand-alone software agent that instruments the entire network in a software-only implementation. 
  2. Network-embedded architecture that employs small footprint, programmable performance assurance hardware modules (vCPE modules) augmented by virtualized performance assurance functions hosted on a centralized vPAC. 

The first option, because it's software-only, is less precise and has a smaller feature set than vCPE modules offer. However, it is well suited for deployments where performance assurance using standards-based protocols is needed, but the added benefits of NFV-enabled modules are not required.

To facilitate integration with existing operational support systems (OSS), network management systems (NMS), and VNF orchestrators, either approach requires four key elements:
  1. A test session controller
  2. A test packet generator
  3. A test packet reflector or receiver
  4. Precision timestamping
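To make these four elements concrete, here is a minimal sender-side sketch, written purely for illustration rather than taken from any standard or product: the session loop acts as controller, generates sequenced UDP probes carrying a transmit timestamp, and records the reflected and received timestamps for later analysis. A real deployment would use a standards-based protocol such as TWAMP and, ideally, hardware-assisted timestamping; the packet layout here is invented for the example.

    # Minimal test sender sketch: generate sequenced, timestamped UDP probes
    # toward a reflector and record send/reflect/receive timestamps.
    # Illustrative only; not TWAMP and not any vendor's implementation.
    import socket
    import struct
    import time

    PROBE = struct.Struct("!Iqq")  # sequence, sender TX time (ns), reflector TX time (ns)

    def run_session(reflector_addr, count=100, interval_s=0.1, timeout_s=1.0):
        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        sock.settimeout(timeout_s)
        results = []
        for seq in range(count):
            t1 = time.time_ns()                          # sender transmit timestamp
            sock.sendto(PROBE.pack(seq, t1, 0), reflector_addr)
            try:
                data, _ = sock.recvfrom(PROBE.size)
                t4 = time.time_ns()                      # sender receive timestamp
                rseq, t1_echo, t3 = PROBE.unpack(data)   # t3: reflector transmit timestamp
                results.append((rseq, t1_echo, t3, t4))
            except socket.timeout:
                results.append((seq, t1, None, None))    # probe counted as lost
            time.sleep(interval_s)
        return results

The matching reflector role, and the metric calculations that follow from these timestamps, are sketched later in this post.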
Each approach is discussed in more detail below.

Software-Only Virtualized Performance Assurance

This option provides unprecedented deployment speed and agility, through its ability to remotely and centrally deploy, configure, and run everything needed to instrument an existing network, on-demand, with minimal expense. Standards-based monitoring methods integrate the network itself into a ubiquitous instrumentation layer. With this visibility centralized in data centers shared with SDN control and big data analytics, providers have an integrated foundation to deliver a new level of customer experience.

Here’s how it works:
  • The vPAC assumes all session setup, control, and sequencing functions, as well as results analysis and reporting to file servers. vPAC instances (manifested as VNFs) are deployed and orchestrated seamlessly with the network service descriptors, allowing fully automated setup and assurance of virtual service chains.
  • The lightweight software agent VNF has two functions:
    1. Offers reflection capabilities (sketched below), instrumenting the network with any orchestrator while running unprivileged on any Linux-based operating system.
    2. Enables bi-directional measurements, an unrivaled metric set, fine measurement granularity, and third-party interoperability—features unavailable with standard open-source tools (such as ICMP ping) or the proprietary measurement methods offered by SD-WAN vendors.
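As a minimal sketch of that reflection role, the snippet below (which matches the invented probe layout from the sender sketch earlier) shows an unprivileged UDP responder that stamps its own transmit time into each probe before echoing it back; that reflector timestamp is what makes one-way measurements in both directions possible, given synchronized clocks. It is illustrative only, not the actual agent.

    # Minimal reflector sketch: receive a probe, insert the reflector's transmit
    # timestamp, and echo the packet back to the sender. Runs unprivileged
    # (plain UDP socket). Port number and packet layout are illustrative.
    import socket
    import struct
    import time

    PROBE = struct.Struct("!Iqq")  # sequence, sender TX time (ns), reflector TX time (ns)

    def run_reflector(bind_addr=("0.0.0.0", 8620)):
        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        sock.bind(bind_addr)
        while True:
            data, peer = sock.recvfrom(PROBE.size)
            seq, t1, _ = PROBE.unpack(data)
            t3 = time.time_ns()                        # reflector transmit timestamp
            sock.sendto(PROBE.pack(seq, t1, t3), peer)

    if __name__ == "__main__":
        run_reflector()
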
Enhanced Performance Assurance with NFV-Powered vCPE Modules

This option pairs a centralized vPAC with network-embedded vCPE hardware modules, virtualizing as many customer-located networking functions as possible while retaining the minimum hardware needed for service delivery, consistent with performance, reliability, and quality of experience (QoE) expectations. As noted earlier, this offers more precision and a larger feature set than a software-only implementation. Yet, compared with traditional hardware-based approaches, instrumenting a network in this way is an affordable, fast-to-deploy option.

An example of this vCPE strategy is illustrated below in a side-by-side comparison with traditional CPE; here, local networking functionality (e.g. firewall, PBX, routing) is virtualized into software-based VNFs hosted on low-cost commercial off-the-shelf (COTS) servers or cloud infrastructure.


vCPE: Traditional vs. Virtualized Customer Premises Equipment Example


In the context of SD-WAN, this approach can be used to introduce customer premises-located performance monitoring, turn-up test, service OAM (SOAM), and troubleshooting functionality, which—in the case of fiber business services—is normally provided using a NID. Reducing the hardware appliances required at the branch site is a key benefit of SD-WAN, so installing a traditional NID alongside the SD-WAN appliance may not be a feasible CPE option.

NFV-powered hardware modules can offer the same level of performance monitoring precision, as well as loopback and full line-rate turn-up test capabilities at a fraction of the cost of a NID, making this approach an economically viable fit when deploying SLA-grade business services over SD-WAN.

Conclusion

Whether deployed software-only or with vCPE modules, a flexible, NFV-based performance monitoring solution benefits all SD-WAN lifecycle phases: it scales beyond the footprint of the SD-WAN cloud and can send performance flows from any starting location to any destination in the network infrastructure.

Such a solution can be used to:
  • Cover large-scale hub-spoke and full-mesh topologies with active, microsecond-accurate, standards-based performance monitoring toward thousands of endpoints, continuously.
  • Bring standards-based turn-up testing, monitoring, and OAM functions to all SD-WAN endpoints, by adding NFV-enabled vCPE modules or orchestratable lightweight software agents. Since the solution is standards-based, standard networking devices can also act as responders to performance monitoring flows.
  • Monitor micro-outages, one-way delay & variation, and SLA compliance by delivering precise and granular metrics (a minimal sketch of such a calculation follows this list).
  • Centralize test control and automation, integrated with existing OSS, by pairing vPACs with NMS solutions.
  • Deliver a new level of performance monitoring (PM) workflow automation with results centrally stored for comparison to predefined QoS templates or SLA levels. Tests—conducted one-way or bi-directionally, in an end-to-end or segmented manner—can be scheduled on demand or triggered by service endpoint installation.
  • Provide open access to turn-up data and results—including customer-ready reports reflecting their specific SLAs—using the API. 
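As a rough illustration of the metric derivation mentioned in the list above, the sketch below turns the (sequence, T1, T3, T4) tuples collected by the earlier sender sketch into loss, delay, and delay-variation figures. The one-way values assume the sender and reflector clocks are synchronized (e.g. via NTP or PTP), and the delay-variation figure is a simple spread rather than a standardized jitter estimator.

    # Derive loss, one-way delay, round-trip delay, and delay variation from
    # the (seq, t1, t3, t4) tuples collected by the sender sketch earlier.
    # One-way figures assume synchronized clocks. Illustrative only.
    def summarize(results):
        replied = [r for r in results if r[3] is not None]
        loss_ratio = 1 - len(replied) / len(results) if results else 0.0
        fwd_ms = [(t3 - t1) / 1e6 for _, t1, t3, _ in replied]   # sender -> reflector
        rev_ms = [(t4 - t3) / 1e6 for _, _, t3, t4 in replied]   # reflector -> sender
        rtt_ms = [(t4 - t1) / 1e6 for _, t1, _, t4 in replied]
        return {
            "loss_ratio": loss_ratio,
            "avg_one_way_fwd_ms": sum(fwd_ms) / len(fwd_ms) if fwd_ms else None,
            "avg_one_way_rev_ms": sum(rev_ms) / len(rev_ms) if rev_ms else None,
            "max_rtt_ms": max(rtt_ms) if rtt_ms else None,
            "delay_variation_ms": max(rtt_ms) - min(rtt_ms) if rtt_ms else None,
        }
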
All of these applications support MSOs’ goal of delivering SD-WAN managed services to enterprises, over large, diverse geographic areas with the same level of quality as they do with traditional WAN offerings.