Tuesday, January 23, 2018

How Will Cable MSOs Assure Their MVNO Wireless Services?


Cable multiple system operators (MSOs) in the U.S. are evolving their business models yet again by rolling out their own wireless services. Take Comcast NBCUniversal, for example: the company, which once was predominantly a cable TV provider but now does the bulk of its business through broadband internet access, last April launched Xfinity Mobile, a mobile virtual network operator (MVNO) subsidiary that delivers wireless access through a combination of its own Wi-Fi hotspots and Verizon’s network. Charter Communications has plans to launch a similar offering of its own in 2018, leveraging an MVNO deal it has with Verizon.

Even if their intention is not to go head-to-head with mobile network operators (MNOs), at least at first, MSOs in the wireless space nonetheless face significant competitive and customer satisfaction pressures. Making such ventures successful requires forging partnerships that once would have been unthinkable (like Comcast and Charter Communications teaming up to expand both their mobile coverage areas) and using a variety of access technologies, including 3.5 GHz unlicensed LTE spectrum, Wi-Fi (which Comcast used as a marketing point in its Xfinity Mobile launch announcement), and small cells.

Cable MSOs running MVNO business units must first and foremost ensure they’re meeting subscriber expectations. One way to do that is to apply artificial intelligence for automated service assurance, as they are now doing with broadband internet. Even more traditional methods of performance monitoring, though, require a uniform way of managing the customer experience across a diversity of access technologies. Acting on service disruptions and managing quality of experience (QoE) requires accurate, granular visibility that is agnostic to specific vendors, topologies, and access networks. Minute delays, microbursts, and micro-losses can have a profound impact on the customer experience.

As Accedian has demonstrated in helping Cox Communications assure its cable broadband and MVNO services (using customer premises equipment/CPE), the only truly reliable method of achieving end-to-end service assurance today is to add a consistent instrumentation layer that essentially resides above interoperability issues on multi-vendor networks. Such instrumentation, which can be mostly or totally virtualized depending on the application, works just as well for MVNO infrastructure as it does for DOCSIS-based cable offerings and traditional mobile networks.


Thursday, January 18, 2018

6 Predictions for the Future of Network Communications in 2018


The future can never be predicted with certainty, of course. But every year we like to try. So, here goes for 2018. 

1. CSPs will strive to own the entire end-to-end digital experience of their end-users

As the boundary between communications service provider (CSP) infrastructure and IT vanishes, operators wishing to stay competitive will strive to delight their customers by not only offering the best performing network, but also by owning and assuring the best digital experience of the end-user journey from initial subscription to end-to-end application performance—whether cloud, streaming, or simply web-based—on any device they use.

Related to this, the repeal of net neutrality in the U.S. may lead to more investment in traffic shaping infrastructure. If cable providers respond by requiring services like Netflix to adequately cover their network costs, choke-points will increase at the access network ingress, making service level agreement (SLA) compliance more important. If they can be sure of their ability to comply with SLAs, CSPs can differentiate by offering new packages and/or increasing prices for access to specific services.

2. CSPs will strive to help enterprises on their digital transformation journey

In 2018, CSPs striving to get away from the dumb-pipe commodity stigma will start offering “digital transformation assurance-as-a-service.” The purpose of such offerings is to help enterprises transition on-premises corporate apps to the cloud (whether private or public) by assuring the performance of those apps before, during, and after the transition. In other words, CSPs will strive to be not only the pipe to the cloud, but also the success-assuring gateway.

3. Revenue challenges will significantly change operator business models

The loss of revenue from voice services (thanks to availability of free Wi-Fi and VoIP) and the escalating demand for data streaming are forcing Tier 1 mobile operators to run their businesses differently. The Internet of Things (IoT) is playing a big role in this dynamic, too. Because the network provisioning costs to offer a low-revenue IoT data subscription are similar to the costs of a $100 per month subscription service, CSPs must fundamentally restructure operating costs to match the revenue opportunity presented by IoT—which essentially comes down to using scarce network resources in the most efficient way possible against any particular revenue opportunity.

During 2018, these already-in-progress, revenue-related changes will accelerate. CSPs will look for new sources of revenue from IoT and related services that can be delivered using a lower-cost (OpEx) but better-performing network. Accedian can help by ensuring they get the most out of their existing networks, and prepare for 5G, with cutting-edge but lightweight and affordable performance assurance technology.

4. Virtualized/hybrid SD-WAN will play a significant role in CSP strategies 


In 2018, several CSPs will deploy so-called “hybrid SD-WAN” or “virtualized SD-WAN”, where they will leverage a combination of SD-WAN with their existing Layer-2 access infrastructure to maximize footprint and reach, pool compute resources at mini-datacenters instead of at every customer premises equipment (CPE) location, and offer Layer-2 services out-of-franchise. This will yield lower CapEx and OpEx, improved reliability, and faster SD-WAN installation.

5. “Early 5G” will arrive… whatever that means

According to general doctrine, 5G requires a new standalone RAN specification and use of millimeter wave spectrum (30-300 GHz). But “early 5G”, expected to reach commercial viability over the next year or so, probably won’t meet both, or even either, of those requirements. So is “early 5G” really 5G? Is it just 4.5G dressed up as 5G? Maybe. Does it really matter? True 5G is a revolutionary step up to a new kind of mobile, but getting there inevitably is an incremental process.

What can new or upgraded networks and services do? That’s what really matters; the actual technology used to get there is a moving target. 5G isn’t really one “thing” anyway, but instead an increasingly complex root system that must be carefully and intelligently managed to keep it healthy. During 2018, getting as close to gigabit LTE as possible, and reducing latency, remain the big goals for the evolution of mobile. We’ll see significant progress on both fronts this year.

6. Edge computing will earn its rightful place as vital to 5G

As FierceMarkets pointed out, without edge computing 5G is merely a glorified version of 4G, using more spectrum to deliver more bandwidth. True 5G, you could argue, requires fundamental network architecture changes to achieve—among other things—significantly lower latency. 2018 will bring more mobile network and enterprise network edge deployments. Whether or not you define these as actually being part of 5G yet, they’re a step in the right direction.

As the number of endpoints in carrier networks multiplies, CSPs will need very cost-effective solutions to manage network quality. Those solutions need to capitalize on the push to edge computing. Beyond the access considerations, moving compute loads (network functions/slices) through the edge computing network will need to be targeted and dynamically location-optimized to ensure the required delay characteristics. Players with small-footprint, virtual offerings are likely to succeed in helping operators with this challenge.


Tuesday, January 16, 2018

Oh no! SD-WAN means the return of vendor lock?! Help!


In the traditional world of hardware routing, software-defined networking (SDN) and network functions virtualization (NFV) are attractive because they theoretically free operators and enterprises from the constraints of proprietary infrastructure (hardware and software). SDN opens up the control plane, making it possible to cast aside the handcuffs of vendor lock. 

Or does it?

On the access side at the customer premises, operators face the significant challenge of orchestrating all the many virtual network functions (VNFs) involved with virtualized networking. Amidst this confusing new landscape, lack of defined standards to which software vendors can write code, and the struggle to bring IT and network operations organizations together, software-defined WAN (SD-WAN)—which essentially refers to SDN technology applied to enterprise WAN networks—has become the first meaningful step toward software-based network automation. It provides a self-contained, orchestrated environment for these virtualized network functions.

Potential benefits of SD-WAN include the opportunity for operators and service providers to:
  • simplify WANs and positively refactor managed service business models
  • leverage favorable economics of commodity broadband in a hybrid WAN network
  • more easily use combos of private MPLS, broadband, or LTE
  • put integrated intelligence policy engines to work dynamically optimizing cloud app connectivity
But.

With SD-WAN, vendor lock returns on steroids. Each vendor offers its own proprietary SD-WAN controller and management plane as well as edge devices. Oh, the irony. Operators and enterprises are adopting SD-WAN out of need for speed and agility, to reduce OpEx, and to create new revenue opportunities. Hitting all three points requires SD-WAN solutions that integrate with existing systems, for easy management and fault detection. This has led back to vendor-specific solutions. In striving to get away from vendor lock, operators (and the enterprises they serve) find themselves shackled even more heavily than before.

For operators selling managed services, using SD-WAN may offer short-term operational advantages, but the lack of multi-vendor interoperability threatens them with higher costs in the longer term. Why? Once an installed base of a given SD-WAN vendor is deployed, there is no way to introduce a second SD-WAN vendor’s edge device without also deploying that vendor’s full centralized control and management planes. This is not the case today with traditional routing technology. If an operator starts with one vendor and later wants to change to another, or combine more than one, it means refactoring all the operational IT infrastructure. Management, monitoring, control, and other functions are different for each vendor’s technology.

Beyond centralized controllers, performance monitoring and assurance is another challenge in multi-vendor SD-WAN environments. It’s true that each vendor uses software algorithms to measure one link’s performance relative to others, steer applications based on priority, and offer network measurements from end to end. However, SD-WAN is fundamentally an overlay solution. The underlay network performance therefore cannot be measured with great accuracy or granularity, and no underlay network infrastructure segmentation is available to pinpoint network issues and reduce mean time to repair (MTTR); the overlay network is blind to these.

This is where Accedian comes in. Our SkyLIGHT solution provides a uniform monitoring and assurance platform across the entire SD-WAN infrastructure. It now includes application-aware monitoring, giving full visibility into multi-vendor SD-WAN environments.

So, at least when it comes to monitoring and assurance, there is light at the end of the SD-WAN vendor lock tunnel.

Thursday, January 11, 2018

Economist: Mobile Operators Feel the Pinch of Market, Tech, and Regulatory Changes


As of June 2017, there were 7 billion mobile subscribers, representing two-thirds of the world’s population, Economist said in the Telecoms section of its Industries in 2018 report, citing GSMA data.

Let that sink in for a minute. 7 billion. And that’s just people, never mind Internet of Things (IoT) devices.

You know what comes next… this is both an opportunity and challenge for telecom operators and service providers.

“Resulting financial strains … may prompt a rethink of strategy and investment priorities,” Economist said in its report.

Half Full or Half Empty?
In response to subscriber and data usage growth, Economist expects that during 2018 operators will focus on expanding their 4G coverage in developing markets (like India and Sub-Saharan Africa), and improving reliability in more developed regions.

Overall, telecom will continue to be a buyer’s market, with consumers generally enjoying “a range of cheap data-rich packages.” Operators, however, are feeling the pinch of tight margins, as the “insatiable appetite for mobile connectivity” forces them to make large capital expenditures even as competition forces prices down, resulting in lower average revenue per user (ARPU). Economist predicts that over the next year mobile operator ARPU will fall by 2.3%.

It doesn’t help that the boundary between telecom and IT is blurring, and telecoms are “vulnerable to takeovers from internet players such as Facebook and Google.” Market competition forces really are different now, and will continue to change.

“The days when operators could rely on revenue from a reliable voice and SMS service are long gone,” Economist said in its report. “Competition from over-the-top (OTT) providers such as WhatsApp, Skype and Netflix has backed the telecoms sector into a corner. Now it faces a new challenge from app developers, whose business interests are expanding rapidly.”

Overall, Economist said market pressures mean telecom companies will struggle with revenue challenges, even as subscriber numbers continue to grow.

During 2018, Economist expects “total telecoms revenue in the 60 biggest markets to fall by 2% … This will largely reflect a 3% rise in telecoms investment as operators spend money on connectivity, which they hope will pay off in the longer term.”

Adapting to Change
How can and should telecom operators adapt? Economist suggests three strategies:

  1. Offer new, differentiated OTT services
  2. Enable a wider range of mobile applications
  3. Build greater flexibility into backhaul infrastructure using software-defined networking (SDN) and network functions virtualization (NFV)

On the bright side, continued IoT growth will create new revenue streams for some telecom companies.

Regulating Change
When Economist’s report went to press, the U.S. Federal Communications Commission hadn’t yet repealed net neutrality, but the topic did get a mention.

“Were net neutrality to be overturned, it could allow companies such as Verizon and AT&T to reassert their dominance in a market that is already narrowing,” Economist noted.

Meanwhile in Europe, “the elimination of EU roaming charges will bite further into the margins of telecoms companies in 2018.” And, contrasting strongly with the U.S., it looks unlikely that the EU’s competition commissioner, Margrethe Vestager, will ease up on merger and acquisition scrutiny, or that the European Parliament will pursue deregulation, anytime soon, Economist said.

In October, the parliament acknowledged that operators need encouragement to invest in 5G, but “limited the regulatory benefits enjoyed by operators that team up to deliver next-generation connectivity,” and voted that regulators should be “given greater powers to tackle ‘joint dominance’ and oligopolistic behaviour.”

Developing markets are yet another landscape for regulatory forces. Rapid market growth creates intense competition, and sometimes that forces regulators to intervene, Economist noted.

Two examples cited in the report:

  • In Mexico, America Movil’s takeover of Telmex won it two-thirds of the mobile market
  • In India, Reliance Jio has forced competitors’ hands with free and low-cost packages
Other forces at work affecting the telecom market, mentioned in the Economist report, include continuing development of artificial intelligence (AI) technology, struggles around cyber-security, and the availability (or lack thereof) of investment capital for new and upgraded telecom networks.


Wednesday, January 10, 2018

SD-WAN Will Really Take Off When It Gets Its Act Together


What’s the appeal of SD-WAN?

Here’s how Metro Ethernet Forum (MEF) frames the market forces and problems that SD-WAN is touted as addressing, in its white paper Understanding SD-WAN Managed Services:

The internet has become a global fabric that connects people and machines—an on-demand, real-time set of programmable systems. It now must provide more value than just bandwidth. Particularly noteworthy is the way apps have moved to cloud, becoming “an IT utility for a globalized and mobile workforce,” MEF said in its paper. (Reproduced with permission of the MEF Forum.)

But, it takes too long to get new interconnect sites and clouds (public or private) up and running. To get all sites of a multinational business connected can take many months, for example. Even simple changes like bandwidth adjustments can take weeks.

SD-WAN has the potential to solve these and other problems—bringing value for telcos, ISPs, MSOs, cloud providers, managed service providers, and SD-WAN tech providers. SD-WAN adoption is driven by the need to speed up service provisioning, scale the network on-demand, adjust to market dynamics, dynamically tailor apps, and reduce network connectivity service costs.

What’s holding SD-WAN back?

However, cautioned MEF in its paper, SD-WAN suffers from a lack of standardization around deployment, architecture, and APIs. Even the terminology is inconsistent. Definitions abound, but for MEF, an SD-WAN is an IP-based, secure overlay network that can operate over any type of access network. Other fundamental tenets of SD-WAN, the organization said in its paper, include:
  • Ability to service-assure each ‘tunnel’ in real-time
  • Customer premises, application-driven packet forwarding up to OSI Layer 7
  • Packet forwarding over multiple WANs at each site
  • Policy-based packet forwarding
  • Automatic, centralized configuration of customer premises equipment
  • Use of WAN optimization
Other deployment challenges for SD-WAN include inadequate OSS/BSS systems, lack of integration with legacy infrastructure, insufficient or incomplete standards, and too-high deployment costs.

But, that’s not all. Communications service providers (CSPs) and managed service providers (MSPs) may not be fully committed to adopting SD-WAN, in some cases because they see it as a threat to existing services like MPLS-based VPNs. That’s becoming less of an issue, but is still a factor; 45% see it as an opportunity, 37% see it as an opportunity and a threat, and only 4% see it as solely a threat, according to a MEF survey cited in its paper.

For MEF, SD-WAN managed services are one use case for its Third Network vision of connectivity that’s agile, assured, and orchestrated using standard, open LSO APIs. As the reality of SD-WAN gets closer to that vision, it should prove even more valuable and lucrative for service providers and network operators.


Thursday, January 4, 2018

Two Ways DOCSIS 3.1 is Changing the Cable MSO Market


Now that DOCSIS 3.1 has been out for a few years, it’s becoming clearer how this latest-generation cable modem specification is changing the multiple-system operator (MSO) market. Mainly, of course, the technology allows cable MSOs to deliver gigabit broadband using existing hybrid fiber-coaxial (HFC) infrastructure with relatively minimal upgrade investment. Here are two ways that’s playing out in the marketplace.

1. New capabilities lead to new business opportunities
Oddly enough, enthusiasm around DOCSIS 3.1 is more muted than it was for 3.0, even though it’s a major upgrade involving changes that enable a tenfold increase in bandwidth. With or without fanfare, this capability is nonetheless allowing cable MSOs to expand up-market with sophisticated business and residential services. Support for up to 10 Gbit/s downstream and 1 Gbit/s upstream opens up some pretty interesting business opportunities, such as cloud-based services, enterprise software-as-a-service (SaaS), high-bandwidth branch access to data centers, and over-the-top (OTT) managed services to remote locations.

And that’s not the only benefit touted by founding developer CableLabs; others include the ability to transmit up to 50 percent more data over the same spectrum on existing HFC networks, and increased modem energy efficiency using advanced management protocols. Together, these and other features have the potential to help cable MSOs position themselves as providers of choice for high-speed internet connections and applications.

The effect of these capabilities will only deepen when a ‘full duplex’ version of the specification—announced last February and still in innovation-project mode—is eventually released. That upgrade will allow use of the full cable plant spectrum upstream and downstream simultaneously.

2. QoE features increase the need for service monitoring and assurance

Another notable aspect of DOCSIS 3.1 is how it pairs with software-defined WAN (SD-WAN), software-defined networking (SDN), and new quality of service (QoS) capabilities—most notably, its use of active queue management to reduce delay and improve responsiveness for bandwidth-intensive, latency-sensitive applications. These enable new cloud- and software-based managed service opportunities. For example, firewall-secured branch internet connectivity to a public cloud can reduce cost and accelerate performance to corporate data centers. Such services also position MSOs to poach entry-level fiber-based services and legacy MPLS offerings, making them a bigger competitor in the premium business services market.

The trick is, MSOs must be able to commit to guaranteed uptime, bandwidth availability, and rapid mean time to repair (MTTR) if they hope to succeed in the enterprise market. Sophisticated performance assurance visibility is necessary to meet business service level agreements (SLAs) and make the most of the QoS specification. Because cloud connectivity and software-as-a-service applications are operationally crucial, businesses are more concerned about performance and reliability than pure bandwidth. Therefore, GbE services delivered over DOCSIS must be on par with fiber-based offerings.

But, as great as DOCSIS 3.1 is, cable modems still don’t offer integrated performance monitoring, service turn-up testing, and operations and maintenance (OAM) demarcation. Today, services require a multi-box solution to deliver network services and service OAM (SOAM). Such features must somehow be added, without also piling on a burdensome level of CapEx and OpEx to the equation.

Accedian can help in this area; we’ve been very successful working with MSOs on the use of network functions virtualization (NFV) to deliver network interface device (NID) functionality in a small, programmable module that adds the missing assurance features mentioned above. It’s a lightweight, quickly deployed way to assure the full business services over DOCSIS (BSoD) lifecycle.

Tuesday, December 5, 2017

3 Telecom Use Cases for Virtualization


In many markets, virtualization—transforming separate hardware components into software functions—is being pushed for and seems highly desirable. In the telecom industry, a key driver for this transformation is that it promises to help operators reuse existing assets to be more flexible and agile in markets that are often very competitive. But what virtualization use cases actually make sense for telecom operators? And how can they succeed with these use cases? Here, we look briefly at three possibilities.

1. Prepare for 5G

The foundational concept of 5G is to make it possible to deliver any type of telecom service, anywhere. Virtualization makes that possible by enabling the instantiation of services and applications using software running on commercial off-the-shelf (COTS) compute power (e.g. x86) rather than specialized hardware.

The flexibility virtualization introduces is an important change because existing network infrastructures are very static, involving lots of (often proprietary, expensive) hardware. To evolve and make 5G possible, networks first must become virtualized: software-driven, so that one physical network can contain multiple ‘software’ networks (aka network slices) in support of a services-based architecture.

Network slicing—splitting a physical network into multiple virtualized networks—is crucial to 5G because it allows resources to be shared in an elastic way, and for services to be spun up ‘on the fly’ with resources aligned to their specific requirements. The overarching goal of network slicing is to create a global ecosystem in which network devices and user endpoints are able to communicate together—in support of providing highly reliable, ultra low-latency, high availability services.

With network slicing included as a key principle, virtualization is likely to be applied first in the core network to address the infrastructure resource layer. Slices created must be service-based to provide a partitioned network on demand. Separating the control and user planes will allow service function chaining.

5G is not only about pushing the envelope of performance capabilities and meeting high demand, but also about delivering a flexible, anything-as-a-service/on-demand service model. Such a model can only work if the system in use is flexible, enables fast time to market with new services, and has low total cost of ownership. In short, this is a whole new ecosystem with its own set of business requirements.

2. Reduce OpEx and CapEx

Traditionally, to prepare for future service demand in telecom networks, operators have defaulted to over-provisioning—installing much more hardware/capacity than they need at first, to ‘future proof’ their investment. But, this is costly and it often takes years for that capacity to actually be put into service. Not exactly ideal, but often the best an operator could do given the conditions they were faced with.

But this is a blunt approach, and is no longer sustainable given the complexity, scale, and rapidly evolving nature of next-generation networks and the services they deliver. Instead, virtualization provides operators the opportunity to efficiently leverage multi-purpose, flexible processing power in an elastic fashion. If the compute power is already installed, and has capacity available, addressing growing demand becomes an incremental change.

Using a software-based approach reduces costs both directly and indirectly, by delivering:

  • Service agility, accelerating time-to-market for new services
  • High scalability, for efficient resource dimensioning on an as-needed basis
  • Enhanced user experience, enabling new business opportunities
  • Automated deployment that increases operational efficiency
3. Increase Profitability

Using virtualization to reduce OpEx and CapEx has the advantageous result of increasing profitability—indirectly, through lower expenditures. Given the nature of the telecom market, this is quite important; operators (especially those in established markets) are unlikely to make much headway increasing revenue through acquiring more customers. Likewise, although upselling existing customers with value-added capabilities may have some positive impact on revenue, it is unlikely to be an earth-shattering breakthrough. Running more efficient operations, then, remains key to long-term profitability. Virtualization can get operators there.

For example, virtualization not only minimizes the footprint of hardware deployment needed to support services, but also reduces time-to-market for new services (features can be created and service provisioning can be completed much faster) compared with legacy systems.

With this increase in profitability in mind, many operators are considering some of the following virtualization technologies which they expect will simplify functional deployments and reduce costs through increased agility. (Note that not all of these are mature enough to be deployed as-is, and most require some form of customization.)
  • Virtualized core aggregation, which can be applied to use cases like switching components and regulatory reporting. 
  • Security between physical, service, and NFVI domains, which can be applied to use cases like machine-to-machine access control, firewall provisioning, and unified communication (VoIP/PBX).
  • Access network virtualization for the mobile edge; the first step toward 4.5G and 5G that enables strict, KPI-sensitive mobility services to run closer to the edge. 
  • Service network virtualization (allowing many multi-tenant services and components to run on a single physical network), which can be applied to use cases like data center micro segmentation, on-premises vCPE functions, and big data optimization. 
  • Dynamic interconnects between locations (e.g. data centers and enterprise branches), which can be applied to use cases like bandwidth on-demand, SD-WAN, and dynamic VPN services.
  • NFV to replace physical network functions (appliances, gateways, switches, routers), which can be applied to use cases like SD-WAN, gigabit LAN, and firewall services. 
Accedian’s Role in Virtualization

The development of next-generation telecom networks may seem a bit like the Wild West right now, given the number of organizations and groups involved in standards development, open source, and other aspects of wrangling virtualization into usability. But, ultimately, these networks will be based on standards-compliant technology, and will be built using solutions from multiple vendors. End-to-end visibility into network performance, enabling real-time quality of experience (QoE) optimization, will involve virtual probes used to cope with the changes brought about by ‘softwareization’ and ‘cloudification’ of telecom.

Accedian, in our role as an industry leader for virtualized network performance assurance, is heavily focused on standardization and interoperability. We recognize the importance of communication between orchestrators and other devices, and this is reflected in the solutions we offer.

Our SkyLIGHT performance management platform—a software-centric solution for continuous network performance monitoring—leverages Accedian’s patent portfolio to deliver highly accurate, granular, and ultra-scalable measurements and analysis.

Some of the components of the SkyLIGHT solution are:
  • SkyLIGHT management, control, and actuation software (SkyLIGHT Director, SkyLIGHT VCX) allows for centralized control of performance monitoring sessions and SkyLIGHT modules and can be installed on non-proprietary, commercial off-the-shelf (COTS) hardware.
  • SkyLIGHT modules (small form factor/SFP FPGA-based units that provide optional hardware assist for up-featuring customer equipment to the latest standards, ultra-accuracy, and advanced features) managed centrally by the VCX virtual machines located in a cloud data center. 
  • FlowMETER and FlowBROKER microservice agents within the SkyLIGHT platform, which provide existing analytics tools with statistics and sub-second user traffic details needed to debug links in real-time. 
Only SkyLIGHT performance monitoring offers:
  • One-way metrics without external synchronization.
  • A fully virtualized monitoring solution with NFV performance monitoring (NFV-PM) capabilities at full network scale.
  • 5G-grade, future-proofed precision.
  • Industry-leading granularity to capture short term impairments (5 sec vs. 15 min reporting).
  • More than 50 measurements and KPIs to provide the widest range of perspectives to detect and isolate elusive performance impairments.
  • Scalability to assure hundreds of millions of subscribers and report billions of metrics a day, deployed within weeks.

Thursday, November 30, 2017

What Makes an Environment Cloud-Native? Virtualization vs. Cloud Computing


The increasing attraction of virtualization and cloud computing technologies has pushed cloud-native environments further into the spotlight. Here, Michael Rezek, VP of Business Development and Strategic Partnerships at Accedian, explores the what, how and why of cloud-native.

What is a cloud-native environment?


MR: It starts with virtualization—the first step to cloud computing, which separates infrastructures through the use of software.

Once infrastructure is virtualized, it provides access to shared pools of configurable resources, such as servers, storage, or applications, which are provisioned with minimal management effort. Only after all component infrastructures in a system are virtualized does the environment truly become “cloud native.”

What characteristics must cloud-native environments possess?


MR: Simply having a virtualized infrastructure or virtualized software application does not equate to being cloud-native. According to the National Institute of Standards and Technology (NIST), a cloud-native environment should possess all of the following characteristics:

  • On-demand service
  • Broad network access
  • Resource pooling
  • Rapid elasticity
  • Measured service or pay-per-use model

Is ‘cloud-native’ merely a hyped-up concept, or a proven technology?

MR: There is a great deal of hype surrounding cloud computing and virtualization, and by extension cloud-native environments, with many wondering about their legitimacy as proven technologies.

In some ways, cloud-native is in danger of becoming seemingly unattainable since even the smallest dependence on hardware can disqualify an application from being designated as such. However, it’s important to remember that virtualizing functions is a gradual process and won’t happen in one fell swoop.

Cloud-native infrastructures and cloud-native applications are proven technologies, with the former successfully deployed in data centres globally, while the latter are foundational to software-as-a-service (SaaS) models.

Cloud-native applications—or cloud-native virtual network functions (VNFs)—are where it gets a little complicated. While cloud-native VNFs do exist, their successful transition to a virtual network is still being proven. The challenge lies in the apps (VNFs) not necessarily being centralised in a data centre but instead spread across the network’s distributed “web” with numerous endpoints. This compromises the resource-pooling element of a cloud-native environment due to the sometimes limited pools and clusters of resources at remote points of the network edge, or at aggregation points.

What are some benefits of cloud-native environments?

MR: Three benefits in particular stand out, and they’re all focused on networking and computing happening automatically:
  • Auto-provisioning – in a telecom use case, operators can leverage a cloud environment allowing customers to self-serve their applications without the need for additional resources to deploy new services.
  • Auto-scaling – operators do not have to pre-provision purpose-built networking equipment manually, but can instead use software orchestration to automatically spin up and tear down compute resources according to customer demand (see the sketch after this list).
  • Auto-redundancy – redundancy can be automated by leveraging pools and clusters of compute resources along with a redundancy policy.
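
As a rough illustration of the auto-scaling idea above, here is a minimal, threshold-based scaling sketch for a virtualized network function. The measurement and orchestration hooks (get_cpu_utilization, scale_to) and the thresholds are hypothetical placeholders for this example, not any specific orchestrator’s interface.

```python
# Minimal threshold-based auto-scaling sketch for a virtualized network
# function. The measurement and orchestration hooks are hypothetical.
def autoscale(get_cpu_utilization, scale_to, current_instances,
              min_instances=1, max_instances=10,
              scale_out_at=0.75, scale_in_at=0.30):
    """Make one scaling decision and return the new instance count."""
    utilization = get_cpu_utilization()      # average utilization, 0.0 .. 1.0
    if utilization > scale_out_at and current_instances < max_instances:
        current_instances += 1               # spin up another VNF instance
    elif utilization < scale_in_at and current_instances > min_instances:
        current_instances -= 1               # tear one down, freeing resources
    scale_to(current_instances)
    return current_instances

# Example run with stubbed hooks: demand is high, so one instance is added.
new_count = autoscale(lambda: 0.82,
                      lambda n: print(f"scaling VNF pool to {n} instances"),
                      current_instances=3)
```
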
What are some challenges with cloud-native environments?

MR: To reap the significant benefits of cloud-native environments, operators must overcome challenges in several key areas:
  • Organizational: Historically, IT designed and managed the compute infrastructure, while network organisations were responsible for designing and operating the network. However, in a cloud-native environment, these two distinct domains must morph into one cohesive unit—leveraging each other’s skills, and cross-training one another to design and operate a holistic virtualized network environment.
  • Financial: Telecom companies have invested billions of dollars in purpose-built networking infrastructure. Migrating to a cloud-based physical infrastructure with software licences will therefore be a costly undertaking. Indeed, the cost may create a barrier for smaller operators wanting to migrate their infrastructures.
  • Network management: successfully managing a software-based network consisting of numerous VNFs is no easy feat. Operators will need to oversee the integration of open APIs to each VNF for management and control. The complexity of working with different standards in an open environment is especially difficult.
How would you sum up the future for cloud-native environments?

MR:
Communication service providers (CSPs) are under mounting pressure to transform their systems and infrastructures to become more agile, able to deliver services at the push of a button. Failure to do this will not only see them struggle to keep up with the competition, but could potentially lead to their demise. For today’s telco, the answer lies in the cloud, and specifically in cloud-native environments—which, if implemented correctly, can boost network efficiency, reduce expenditure, and enhance quality of experience (QoE) and quality of service (QoS) for subscribers.



Tuesday, November 28, 2017

Report: Analytics Key to QoE for Complex Wireless Networks


Traditional wireless networks are not especially ‘smart’ or efficient, mostly serving to convey as much data as possible, without regard to importance of the service or app that data is tied to, noted Senza Fili in a recent report on analytics for big data and network complexity. But these networks and the traffic they carry are becoming more complex, so they must also become smarter and more efficient. Such a transformation is possible with analytics. 

“Network architectures continue to evolve, with the addition of Wi-Fi access, small cells and DAS, C-RAN, unlicensed access, carrier aggregation, VoLTE, virtualization, edge computing, network slicing, and eventually 5G. Managing networks that grow in size and complexity becomes difficult because there is a need to integrate new elements and technologies into the existing network in order to benefit from the technological advances,” explained Monica Paolini, founder and president of Senza Fili, and the report’s author in collaboration with RCR Wireless.

The solution is putting predictive analytics to work optimizing these networks, using automation paired with machine learning and artificial intelligence to extract and correlate valuable information from many data sources, generating insightful advice or predictions.

“The value analytics brings to optimization comes from expanding the range of data sources and taking a customer-centric, QoE-based approach to optimizing end-to-end network performance,” Paolini concluded. This gives operators the ability to decide “which aspects of the QoE they want to give priority to, and surgically manage resources to do so,” rather than limiting optimization to throughput and selected KPIs like latency or dropped calls.

Focus on QoE
That ability to fine-tune traffic management is very valuable to operators, who are necessarily shifting to a quality of experience (QoE)-based model even as demand exceeds capacity in an environment with limited resources.

While operators may not be able to realistically give all users everything they want, Paolini said, they can still greatly improve the user experience with the resources available, in a way that’s more fair and better aligned with what subscribers value most. For example, the quality of video calls might take higher priority than the ability to watch videos on YouTube and Netflix.

“Lowering latency across the board may be less effective in raising QoE than lowering it specifically for the applications that require low latency,” she explained. “The average latency may be the same in both cases, but the impact on QoE is different.”

Other advantages of this approach include the ability to:
  • Avoid over-provisioning parts of the network
  • Decide which KPIs carry the most weight for improving QoE
  • Determine the best way to allocate virtualized resources
  • Find root causes of network anomalies that result in QoE issues
  • Manage security threats
Toward Predictive Analytics
“The ultimate goal of analytics is to become able to predict the imminent emergence of an issue before it causes any disruption to the network,” Paolini stressed. Machine learning and artificial intelligence will make that possible, eventually.
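
As a simplified illustration of that direction, the sketch below flags a latency KPI sample that drifts well above its rolling baseline, the kind of early-warning signal a predictive analytics pipeline would act on. The window size and threshold factor are arbitrary choices for the example, not values from the report.

```python
# Toy anomaly detector: flag a KPI sample (e.g., latency in ms) when it rises
# well above its rolling baseline. Window and threshold are illustrative only.
from collections import deque

def detect_anomalies(samples, window=20, factor=1.5):
    baseline = deque(maxlen=window)
    alerts = []
    for i, value in enumerate(samples):
        if len(baseline) == window:
            avg = sum(baseline) / window
            if value > avg * factor:
                alerts.append((i, value, avg))   # (index, sample, baseline average)
        baseline.append(value)
    return alerts

# Example: steady ~20 ms latency with one sudden degradation.
latency_ms = [20.0] * 40 + [55.0] + [21.0] * 5
print(detect_anomalies(latency_ms))   # -> [(40, 55.0, 20.0)]
```
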

For now, fitting analytics to each operator’s specific requirements involves making tradeoffs, most notably involving time (for how long, and at what time increments, data is collected) and depth (how macro or micro the data is).

“As operators move toward real time and closer to the subscriber, the volume of data that analytics tools have to crunch grows quickly, increasing the processing requirements, and hence the effort and cost,” Paolini pointed out. “But the reward is a more effective optimization.”

It’s more effective because it’s more targeted.

“Congestion or performance/coverage issues are likely to emerge at different places and times, but only in a small portion of the network…” and therefore “optimization has to selectively target these locations and not the entire network. And the lower the time resolution and the more precise the geolocation information, the more powerful the optimization can be,” Paolini concluded.

Adopting Analytics - Drivers
Operators are driven by several factors to adopt customer experience-focused analytics:
  • Cost and Services - Subscribers are more demanding and less willing to spend more. 
  • Usage - Subscribers use wireless networks more, and in new ways, resulting in a richer set of requirements.
  • Technology - 4G now and 5G in future benefit from more extensive and intensive use of analytics. 
A Cultural Shift
For operators, expanding the use of analytics is appealing but not without its challenges. The greatest of those “is likely to come from the cultural shift that analytics requires within the organization,” Paolini said in the report. “The combination of real-time operations and automation within an expanded analytics framework causes a loss of direct control over the network – the type of control that operators still have by manually optimizing the network. Giving up that level of control is necessary because the complexity of networks makes automation unavoidable.”

Yet, still, operators are increasingly committing to analytics because the benefits outweigh the challenges, enabling them to:
  • Improve support for existing services
  • Create new services
  • Customize service offerings
  • Optimize QoE for specific services and applications
  • Understand better what subscribers do, individually and within market segments
  • Implement network utilization and service management strategies that set them apart from competitors
Put another way, end-to-end network efficiency and service provisioning enabled by analytics result in significant financial benefits for an operator, by delivering:
  • Increased utilization of network resources
  • Lower per-valuable-bit cost
  • Lower operational costs
  • Better planning
  • Network slicing and edge computing
  • Better customer service and product offerings
  • Third-party revenues

Tuesday, November 7, 2017

Active Synthetic Network Monitoring: What It is and Why It Matters



When it comes to tracking and optimizing the performance of wireless networks and the services they support, what’s more important: passive monitoring or active (synthetic) monitoring? The short answer is that both play a role. However, given the increasing complexity of modern broadband wireless networks, and the direction in which they are evolving, it’s fair to say that active monitoring plays a more and more important role. As such, it’s important to understand how it compares to and complements passive monitoring, and why it matters.

Active monitoring simulates the network behavior of end-users and applications, monitoring this activity at regular intervals — as fast as thousands of times a second, if required — to determine metrics like availability or response time. It is a precise, targeted tool for performing real-time troubleshooting and optimization.

By contrast, passive monitoring analyzes existing traffic over time and reports on the results. It is best for predictive analysis using large volumes of data, identifying bandwidth abusers, setting traffic and bandwidth usage baselines, and long-term traffic analysis to mitigate security threats.
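
To make the distinction concrete, here is a minimal, illustrative sketch of the active approach: a probe that sends timestamped packets toward a reflector at a fixed interval and derives latency and loss from the replies. It is a toy example, not Accedian’s implementation; the reflector address, port, and probe rate are hypothetical values, and a real deployment would rely on standards-based test protocols.

```python
# Toy active (synthetic) probe: send timestamped UDP packets to an echo
# reflector at a fixed interval and derive round-trip latency and loss.
# The reflector address, port, and probe rate are hypothetical values.
import socket
import struct
import time

REFLECTOR = ("198.51.100.10", 5005)   # assumed UDP echo responder
INTERVAL_S = 0.1                      # 10 synthetic probes per second
PROBES = 100

def run_probe():
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.settimeout(1.0)
    rtts_ms, lost = [], 0
    for seq in range(PROBES):
        payload = struct.pack("!Id", seq, time.time())   # sequence + send timestamp
        sock.sendto(payload, REFLECTOR)
        try:
            data, _ = sock.recvfrom(64)
            _, sent_ts = struct.unpack("!Id", data[:12])
            rtts_ms.append((time.time() - sent_ts) * 1000.0)
        except socket.timeout:
            lost += 1                                    # unanswered probe counts as loss
        time.sleep(INTERVAL_S)
    if rtts_ms:
        print(f"avg RTT {sum(rtts_ms)/len(rtts_ms):.2f} ms, "
              f"max RTT {max(rtts_ms):.2f} ms, lost {lost}/{PROBES}")

if __name__ == "__main__":
    run_probe()
```
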

Why Active?

Cloudification, encryption, decentralization and SD-WAN have fractured the value of traditional, passive monitoring. As a result, blind spots are consuming both networks and services, with service providers losing sight of the majority of traffic.

Where visibility still exists, it’s insufficient. Compute- and hardware-intensive passive solutions are slow to report, typically taking well over a minute to digest traffic and produce metrics. But today’s dominant traffic flows—software-as-a-service (SaaS), web, social media, streaming media, and real-time communications—depend on significant volumes of transient sessions between servers and clients, cloud and apps.

Consider, for example, that 90% of all TCP sessions last less than 3 seconds and consume less than 100 bytes each. It’s not surprising, then, that the majority of network downtime is caused by short-term degradation, not sustained outages. Passive monitoring of aggregate traffic, reported every few minutes, misses the vast majority of short-term events that impact the services subscribers use the most.
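
A back-of-the-envelope illustration (with made-up numbers) shows why coarse reporting intervals hide this kind of short-term degradation: a two-second microburst of heavy packet loss barely registers in a five-minute average, yet dominates a five-second window.

```python
# Illustrative numbers only: a 2-second microburst of 40% packet loss inside
# one 5-minute reporting interval.
SECONDS = 300                       # one 5-minute reporting interval
loss = [0.0] * SECONDS              # per-second packet loss ratio
loss[120] = loss[121] = 0.40        # the microburst

five_min_avg = sum(loss) / SECONDS
worst_5s = max(sum(loss[i:i + 5]) / 5 for i in range(SECONDS - 4))

print(f"5-minute average loss: {five_min_avg:.2%}")   # ~0.27% -> looks healthy
print(f"worst 5-second window: {worst_5s:.2%}")       # 16% -> clearly degraded
```
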

Active monitoring takes a proactive approach, overcoming these visibility gaps while delivering the enhanced precision and sub-second insight required to monitor and assure dynamic services. Active-synthetic monitoring is a lightweight, ubiquitous, standards-based approach that faithfully replicates application and network traffic with unrivaled precision and frequency. This creates a constant, controlled source of detailed metrics that accurately characterize quality of service and experience (QoS, QoE).

Active Monitoring Uses

Beyond its broad use as a controlled, targeted, QoE optimization tool, active monitoring is valuable for:


  • Introducing new services. VoLTE, Internet of Things (IoT), SaaS, over the top (OTT) and other digital services can be simulated and monitored, throughout their service lifecycle. Active test allows service providers to assess network readiness before deployment, and the impact new services have on other applications when they go live and begin to consume the network. 
  • Applying the benefits of virtualization to network and service assurance. Active monitoring is easily virtualized. When surveyed by Heavy Reading, service providers overwhelmingly pointed to active testing, and virtualized agents, as driving their quality assurance efforts and budgets. Passive probe appliances were the most likely to lose budget, after years of consuming significant capital expenditure. Most passive solutions are nearly 500% more expensive than the active solutions enabled by virtualization.
  • Enabling automated, software-defined networking (SDN) control. Active monitoring provides a complete, high definition view of end-to-end performance that service providers can use as real-time feedback for automated control, and with machine learning and artificial intelligence (AI) for root cause analysis, predictive trending, and business and customer journey analytics. Exceptionally granular, precise data with a wide diversity of statistical perspectives means analytics can converge and correlate multi-dimensional events an order of magnitude faster than coarse, passive monitoring data permits.
Breaking this down further, the advantages of active monitoring include:

  • Massive, multinational network monitoring scalability on lightweight, virtualized compute resources
  • Carrier-grade precision that enables undisputed service level agreement (SLA) reporting, independent of traffic load or location
  • Ability to resolve detailed, one-way measurements to microsecond precision
  • Ability to measure performance, QoS, and QoE at any location, physical or virtual, end-to-end
  • Tests can be targeted at known issues, locations or services on demand to accelerate troubleshooting
  • Streaming metrics tailored to machine learning, analytics, artificial intelligence (AI), automated SDN control and management and network orchestration (MANO)
  • Ability to proactively and predictively test services before they come online: VoLTE, IoT, business services, SaaS, and impact of OTT
  • Fully standards-based, and interoperable over multi-vendor, multi-domain infrastructure
  • Eliminates the need for taps, packet brokers or “SPAN” ports
  • Segments networks and services to allow rapid fault isolation, precise trending, and predictive forecasting
  • Proactive, in contrast to passive monitoring which is always “after the fact”
  • The ability to baseline and threshold using reliable and consistent data
  • Predictive mechanism to facilitate network improvements/adjustments based on subtle changes/symptoms vs. customer complaints
Passive monitoring still plays a role in managing and optimizing wireless networks, and always will. But, the complex nature of these networks today and tomorrow also demands the use of active monitoring for real-time, proactive, automated QoE optimization. Don’t leave home without it!

Tuesday, October 31, 2017

Accedian at MEF17: Technology Innovation Panel Speaker, Booth


Join us at MEF17 (going on November 13-16, 2017, in Orlando, Florida); our President and CEO, Patrick Ostiguy, is speaking on the November 14, 12:05pm Technology Innovation Panel. We're also exhibiting at booth 205. Here are the details!




Event: MEF17

Event Dates: November 13-16, 2017

Booth: 205

Panel Date: November 14, 2017

Location: Orlando, Florida

Panel: Technology Innovation Panel

Panel Time: 12:05pm

Panelist: Patrick Ostiguy, President & CEO




Monday, October 30, 2017

Real-time Network Monitoring: What Progress Have We Made Globally?



According to GSMA Intelligence, the number of mobile subscribers in the Asia Pacific region will rise from 2.7 billion at the end of 2016 to 3.1 billion by 2020. Countries driving this rapid growth are China and India.

This rapid increase in the number of mobile subscribers stresses the need for real-time network monitoring. As today’s mobile network operators (MNOs) increasingly rely on performance to differentiate themselves in an already crowded market, quality of service (QoS) and quality of experience (QoE) have never been so important.

Efficient LTE networks will become a major priority for mobile operators everywhere. Managing complex, high speed, and multipurpose networks is not the only challenge; operators must also prepare for new services such as connected cars and other Internet of Things (IoT) features. At the same time, operators also need to develop and put in place methods and practices that will support their imminent move to 4.5G, and ultimately 5G.

Faced with the task of navigating this complex landscape, it’s essential that mobile operators have a greater level of network visibility to help with management and troubleshooting. Accedian's solutions are helping mobile operators across the globe keep up with the growing demands of their networks. Here are a few examples of how we do it:


  • South Korea’s SK Telecom turned to our SkyLIGHT Performance Platform and Nano smart SFP Modules for a standards-based, network performance assurance solution capable of providing end-to-end QoS and QoE visibility across a sophisticated, multi-vendor mobile network. This enabled SK Telecom to drive its software-defined network (SDN) management system directly with the real-time metrics from our solution, automating network configuration and optimization to deliver the best possible experience to its customers.
  • India, one of the main drivers behind the rapid growth of mobile subscribers in Asia, is in position to capture the benefits of reliable, virtualized, real-time network assurance. Our SkyLIGHT Platform and Reliance's data analytics Jio Coverage Platform were used together to optimize network quality and user experience in real-time for approximately 100 million mobile subscribers in India. Reliance Jio used our SkyLIGHT solution to gain near-complete visibility of its network, making it one of the best examples anywhere of total network visibility with zero hardware dependency.
  • Boasting a customer base of more than 341 million subscribers, Telefónica required a comprehensive performance assurance solution covering its global footprint, with ubiquitous coverage to localize issues, plan network upgrades, and optimize performance, among many other needs. Real-time metrics covering network QoS, as well as voice and video QoE, for trending, alerts, and reporting had to be considered in order to address its highly asymmetrical network. Fast and easy instrumentation deployment with a virtualized, centralized control platform was also required. In the end, our solution, comprising the SkyLIGHT performance assurance platform, Nano smart SFP Modules, and 1GbE and 10GbE Network Performance Elements, delivered the best possible QoS and reliability for Telefónica’s customers.
In North America and Europe, where LTE is more established, operators are already moving toward virtualization generally, including virtualized instrumentation, which is less expensive to deploy and use than traditional, hardware-based methods.

Areas like Asia and Latin America are investing much more in performance monitoring and assurance for greenfield LTE deployments. However, in most of the developed and western world, 3G and earlier technologies continue to play a key role and so the need to solve significant and basic issues, like large-scale outages, is still present. The reality is that Asian and Latin American operators often have higher performance networks than their counterparts in Europe.

With the market becoming more digitized, active (synthetic) monitoring is starting to show its value. Active tests allow service providers to assess network readiness before deployment, avoiding the impact of new services on existing applications when they go live and begin to consume the network.

Therefore, by providing a comprehensive view of end-to-end performance, service providers can use this detailed “network state” as real-time feedback to automate SDN control, and use machine learning and artificial intelligence (AI) for root cause analysis, predictive trending, and business and customer journey analytics. Exceptionally granular, precise data with a wide diversity of statistical perspectives means analytics can converge and correlate multi-dimensional events an order of magnitude faster than coarse, passive monitoring data permits.





Tuesday, October 24, 2017

6 Truths About QoE Monitoring for Virtualized Networks


A third of mobile networks are not fully monitored end-to-end. Let that sink in for a minute.

It’s a surprising figure, though less so when you consider that most operators use three or more vendors for each major function in their radio, backhaul, and core networks. Piecing together a consistent, end-to-end monitoring layer across these discontinuous domains is difficult, yet it is necessary to manage quality of experience (QoE).


A lack of end-to-end visibility isn’t sustainable, especially since network performance is rapidly becoming the key differentiator. As operators seek out and implement ways to close their visibility gap, they are discovering the following six truths about managing QoE over next-generation networks.

1. Best-effort QoE assurance is no longer good enough


While best-effort QoE assurance has been the accepted standard for typical internet applications and services, that’s rapidly changing with the on-demand nature of usage today. Customers are no longer tolerant of services being “okay” rather than “excellent.” Even a bad experience with Netflix or YouTube can impact customers’ perception of network quality and lead to churn.

This truth translates into huge changes in terms of what’s required for real-time service instantiation, as well as service monitoring and assurance. Operators must now be able to perform per-application level quality assurance for a vast number of services and QoE requirements, as well as meet the need for automated processes in virtualized networks.
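
A simple way to picture per-application assurance at this scale is a table of per-service targets evaluated automatically against measured KPIs, as in the sketch below; the applications, metric names, and limits are illustrative assumptions only.

```python
# Illustrative per-application QoE targets (values are assumptions for this sketch)
QOE_TARGETS = {
    "voice": {"latency_ms": 150, "loss_pct": 1.0, "jitter_ms": 30},
    "video": {"latency_ms": 300, "loss_pct": 0.5, "jitter_ms": 50},
    "web":   {"latency_ms": 500, "loss_pct": 2.0, "jitter_ms": 100},
}

def check_application(app, measured):
    """Return the list of KPIs that violate the target for a given application."""
    targets = QOE_TARGETS[app]
    return [kpi for kpi, limit in targets.items() if measured.get(kpi, 0) > limit]

# Example: measured KPIs for a voice flow
violations = check_application("voice", {"latency_ms": 180, "loss_pct": 0.2, "jitter_ms": 12})
print(violations)   # -> ['latency_ms']
```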

2. QoE management for virtualized networks is a whole new ballgame


The shift from traditional to virtualized networking significantly impacts how operators manage QoE. Traditional networking was, for the most part, static; thus, understanding the impact of Layer 2 and 3 issues on QoE was relatively straightforward. Virtualized networks, on the other hand, are very dynamic; thus, QoE optimization must be driven in real time by emerging technologies like machine learning. Also, in such networks, the data plane is often split across various network slices (even for the same service), making it even more complex to understand service delivery.

Specifications for backhaul network metrics—such as throughput, latency, and availability—are becoming more stringent. As a result, traditional monitoring techniques may not be sufficient to see the types of performance issues that result in QoE problems.

In Accedian’s experience, there are cases like the one below, where an operator was experiencing more than 100,000 call drops, with an obvious negative impact on QoE, even though classic quality of service (QoS) metrics (e.g. packet loss, delay) were well within spec. Only by analyzing millions of Layer 2 and 3 KPIs was a periodic relationship observed that led to discovery of the root cause. Without the ability to manage QoE in this way, the impact of such a situation would have been catastrophic for the operator.

Source: Accedian slide from Heavy Reading webinar
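
The analysis behind that case is not published in detail here, but as a hedged sketch of how a periodic relationship might be surfaced from a large KPI series, the snippet below computes a simple autocorrelation over a synthetic delay series and reports the lag at which the pattern repeats most strongly; the data and lag range are invented for the example.

```python
import numpy as np

def dominant_period(series, max_lag=600):
    """Return the lag (in samples) where the KPI series best correlates with itself."""
    x = np.asarray(series, dtype=float)
    x = x - x.mean()
    best_lag, best_corr = 0, 0.0
    denom = np.dot(x, x) or 1e-9
    for lag in range(1, min(max_lag, len(x) - 1)):
        corr = np.dot(x[:-lag], x[lag:]) / denom
        if corr > best_corr:
            best_lag, best_corr = lag, corr
    return best_lag, best_corr

# Example: a delay KPI with a spike every 300 samples (e.g. every 5 minutes at 1 s resolution)
rng = np.random.default_rng(0)
kpi = rng.normal(5.0, 0.2, 3000)
kpi[::300] += 25.0
lag, corr = dominant_period(kpi)
print(f"strongest repeat every {lag} samples (correlation {corr:.2f})")
```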

Given the dynamic nature of virtualized networks, and the inability of traditional tools to assure virtualized services, it’s hardly surprising that operators are less confident in their ability to monitor QoE or the service experience.

3. Understanding the relationship between QoS and QoE is critical

Operators are still more comfortable monitoring QoS than QoE, a holdover from traditional telephony performance monitoring. The problem is, customer satisfaction is largely driven by QoE and not QoS. Being unable to monitor QoE puts operators’ business models at risk. And, as customers come to rely even more heavily on mobile connectivity as a way of life, QoE will become the competitive differentiator.

This is not to say that QoS and QoE are unrelated. Our work with South Korean operator SK Telecom, helping them develop a performance assurance strategy for their LTE-A network, highlighted some interesting relationships between QoS and QoE metrics. For example, it was found that even a small packet loss event—say, 0.1%—could lead to a 5% decrease in service throughput and a 2% loss could lead to as much as an 80% decrease in throughput. 



This type of surprising relationship is not limited to obvious issues like packet loss, either. In another example, it was shown that a 15ms latency increase on a critical path in the network could cause a throughput decrease of as much as 50%.

In both cases, the biggest impact is seen in the application-layer protocols managing service delivery, not in the QoS metrics themselves. Monitoring only the resource and network layers is not enough.
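
The figures above come from SK Telecom's network, but the direction of the effect can be illustrated with the well-known Mathis approximation for TCP throughput, roughly MSS divided by RTT times the square root of the loss rate; the sketch below uses assumed MSS and RTT values and is not an attempt to reproduce SK Telecom's measurements.

```python
from math import sqrt

def mathis_throughput_mbps(mss_bytes, rtt_ms, loss_rate):
    """Approximate TCP throughput (Mathis et al.): MSS / (RTT * sqrt(p))."""
    rtt_s = rtt_ms / 1000.0
    return (mss_bytes * 8) / (rtt_s * sqrt(loss_rate)) / 1e6

# Illustrative values (assumptions): 1460-byte MSS, 30 ms RTT, 0.01% baseline loss
base = mathis_throughput_mbps(1460, 30, 0.0001)
print(f"0.1% loss:  {mathis_throughput_mbps(1460, 30, 0.001) / base:.0%} of baseline throughput")
print(f"2% loss:    {mathis_throughput_mbps(1460, 30, 0.02) / base:.0%} of baseline throughput")
print(f"+15 ms RTT: {mathis_throughput_mbps(1460, 45, 0.0001) / base:.0%} of baseline throughput")
```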

Which leads to another inconvenient truth: with next-generation networks, there are simply too many services streaming through tens of thousands of network components to manage QoE manually; automation is required.

4. Next-gen performance assurance is beyond human control

Generating QoS metrics in a traditional network involves large, centralized test suites and network interface devices (NIDs) at the edge. It also requires a lot of engineering, planning, and provisioning work. Imagine that level of activity for a new network with 100,000+ eNodeB sites and 100,000+ aggregation and cell site routers. Clearly, the traditional networking approach won’t scale.

While operators are rolling out software-controlled, programmable networks—using intelligence to govern resource allocation as applications come online—they are still largely managing those networks in a way that’s unsustainably manual. The next step is to do all this in an automated way, using intelligent algorithms for a real-time view of network topology paths.

Automation enables operators to:
  • Relieve hotspots and avoid unnecessary upgrades, by balancing link utilization 
  • Ensure bandwidth is available when it’s needed, for each application
  • Satisfy customer bandwidth requests without negatively impacting other services
  • Determine the best links to place each new workload (a simple placement sketch follows this list)
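
As promised above, here is a minimal placement sketch for the last point; the link inventory, capacities, and least-utilized-after-placement rule are assumptions for illustration, not a description of any operator's actual algorithm.

```python
# Illustrative link inventory: current load and capacity in Mbps (assumed values)
LINKS = {
    "agg-1": {"load": 620, "capacity": 1000},
    "agg-2": {"load": 810, "capacity": 1000},
    "agg-3": {"load": 300, "capacity": 1000},
}

def place_workload(demand_mbps):
    """Pick the link with the most headroom after placing the workload, if any fits."""
    candidates = {
        name: (link["load"] + demand_mbps) / link["capacity"]
        for name, link in LINKS.items()
        if link["load"] + demand_mbps <= link["capacity"]
    }
    if not candidates:
        return None                      # no link can absorb the demand; trigger upgrade planning
    best = min(candidates, key=candidates.get)
    LINKS[best]["load"] += demand_mbps   # commit the placement
    return best

print(place_workload(250))   # -> 'agg-3' (lowest utilization after placement)
```

Balancing on post-placement utilization is only one possible policy; the point is that the decision is made from live measurements rather than manual provisioning.
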
5. Active, virtualized probes are the future of QoE assurance

Implementing software-defined networking (SDN) is a priority for mobile operators, and just about every operator now has a plan to virtualize at least part of their network—starting with the mobile core.

The reason for this trend is that existing network architecture is failing to enable the type of innovation operators need as the foundation of their business strategies, especially as they roll out LTE-A networks and plan for 5G. They find it hard to create new revenue streams, because proprietary hardware, management, and IT systems make it difficult, and slow, to deploy new services.

But, this shift does require a new approach to QoE management.

In an effectively-managed virtualized network, the test suite becomes a virtual network function (VNF) that can be instantiated as needed, either to address a large geographic area or to scale compute power required for a large number of endpoints.

This setup does still require an endpoint to test toward, but that’s relatively straightforward given that most routers and base stations are now compliant with standardized test protocols such as Two-Way Active Measurement Protocol (TWAMP, RFC-5357). Sometimes there are reasons to deploy purpose-built endpoint solutions (e.g. smart SFPs) where common test standards are not supported, but in most networks this represents less than 10% of sites. The increasing presence of x86 compute toward the mobile edge means that standards-based VNFs can be used as well.
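
To make the endpoint side concrete, the sketch below shows a minimal UDP reflector of the kind a test sender can target; it simply echoes probe payloads back so the sender can compute two-way delay and loss, and it is a simplified stand-in rather than an implementation of the TWAMP packet formats defined in RFC-5357.

```python
import socket

def run_reflector(bind_addr="0.0.0.0", port=5000):
    """Echo every probe back to its sender so the sender can compute two-way delay and loss."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind((bind_addr, port))
    while True:
        payload, sender = sock.recvfrom(2048)
        sock.sendto(payload, sender)   # a real TWAMP reflector would also insert receive/transmit timestamps

if __name__ == "__main__":
    run_reflector()
```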

What makes VNF-based performance monitoring even more compelling is that either end of the test setup is a software component that can be easily orchestrated in a virtualized network; a QoE solution for a network of 100,000 eNodeBs can be deployed and generating KPIs in weeks rather than several months, with very little capital required.

For all the reasons discussed so far, and given the economics of virtualized networks, mobile operators are moving away from centralized probe appliances and toward active virtualized probes. 

Source: Accedian slide from Heavy Reading webinar

6. Effective QoE management is possible, with analytics and automation

What is the best use for the potentially billions of KPIs generated daily by end-to-end network monitoring using active, virtualized probes? Big data analytics makes light work of this problem, automating correlations between layers and events, through predictive trending and root cause determination, and displaying them in real time.

Since the next generation of mobile networks requires automation to run optimally, having centralized network performance and QoE metrics in the operator’s big data infrastructure provides the necessary feedback for SDN control to make network configuration decisions that deliver the best experience to each customer, at any time.
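
A hedged sketch of that closed loop is shown below; the sdn_reroute call is a hypothetical placeholder for a real controller API, and the single latency target stands in for the much richer policy a production system would apply.

```python
# Hypothetical closed-loop skeleton; sdn_reroute stands in for a real SDN controller call.
QOE_TARGET_MS = 25.0

def sdn_reroute(path_id):
    # Placeholder: in practice this would call the operator's SDN controller API.
    print(f"requesting new path for {path_id}")

def control_loop(kpi_stream):
    """Consume (path_id, latency_ms) KPI records and trigger reroutes when targets are breached."""
    for path_id, latency_ms in kpi_stream:
        if latency_ms > QOE_TARGET_MS:
            sdn_reroute(path_id)

# Example feed: per-path latency measurements from the instrumentation layer
control_loop([("core-to-edge-7", 12.4), ("core-to-edge-9", 41.8)])
```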

That’s the ultimate goal for LTE-A and 5G: breathtaking performance over a highly dynamic, virtualized network, without humans getting in the way.