
Wednesday, November 25, 2009

Active, invisible

Enterprises and mobile operators are gaining Ethernet experience quickly as they speed deployment of the latest generation of business services and wireless backhaul networks. Nothing like jumping right in to learn the oddities and pain points of a technology hitting prime time. We’ve seen operators shift their focus from the nuts and bolts to QoS management and monitoring, with an emerging need to start reporting performance online to their customers (read post).

It’s at this point that so many are asking the tough question: “With hundreds or thousands of Ethernet circuits and services, how can I find the needle in the haystack when things go wrong?” Monitoring and testing aren’t new to Ethernet, but large-scale service footprints arguably are. The truck-roll and bootstrap methods that keep things running in small metro deployments start to fray at the edges in 20-40% growth markets in the mass-adoption phase.
These operators are starting to discover the Ethernet Operations, Administration and Maintenance (OAM) standards for Connectivity Fault Management (CFM) and Performance Monitoring (PM) – quietly integrated into network elements over the last few years under IEEE 802.1ag and ITU-T Y.1731 – as they look for ways to keep tabs on latency, jitter, packet loss and availability to support SLA reporting and Quality of Service (QoS) management. These standards, along with other active testing techniques like the Performance Assurance Agent (PAA), bring full visibility into the performance of layer 2 and layer 3 services.
Active techniques conduct their measurements by transmitting a sparse but regular stream of precisely time-stamped “tracer” packets within the service under test (in-band). The test packets’ headers mimic those of the application or SLA of interest (e.g. by specifying VLAN, CoS / DSCP, protocol, drop eligibility, etc.), to ensure they follow the same path, experience the same delay, and are given the same priority as the monitored service. The test packets’ role is to provide a recurring, known reference from which SLA / QoS metrics can be measured, without having a noticeable effect on the service itself (i.e. non-service-affecting, non-intrusive).
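To make the header-mimicking idea concrete, here’s a minimal sketch of a hand-rolled probe built with the scapy library. The VLAN, priority and DSCP values are placeholders you’d match to the service under test; real Y.1731 or PAA probes are standard OAM PDUs generated by the network elements themselves, so treat this purely as an illustration of the concept.

```python
import struct
import time

from scapy.all import Ether, Dot1Q, IP, UDP, Raw, sendp  # pip install scapy

# Illustrative only: shows the idea of a time-stamped in-band probe whose
# headers (VLAN, 802.1p priority, DSCP) match the monitored service. It is
# NOT how a Y.1731/PAA-capable element generates OAM frames.

SERVICE_VLAN = 100      # assumed values -- set to match the service under test
SERVICE_PCP = 5         # 802.1p priority of the monitored class
SERVICE_DSCP = 46       # Expedited Forwarding

def build_probe(dst_mac: str, dst_ip: str):
    timestamp = struct.pack("!Q", time.time_ns())   # known reference carried in-band
    return (Ether(dst=dst_mac)
            / Dot1Q(vlan=SERVICE_VLAN, prio=SERVICE_PCP)
            / IP(dst=dst_ip, tos=SERVICE_DSCP << 2)  # DSCP sits in the upper 6 bits of ToS
            / UDP(sport=50000, dport=50001)
            / Raw(load=timestamp))

# To transmit (requires privileges and a real interface):
# sendp(build_probe("00:11:22:33:44:55", "192.0.2.10"), iface="eth0", verbose=False)
```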
Rest assured, active testing is safe, friendly and virtually invisible to the end-user: the bandwidth it consumes is a tiny fraction of the bandwidth of the service itself. But how negligible is it? Just noting it’s very small doesn’t satisfy anyone in engineering, so to quantify the impact I decided to run through the calculations myself.
Taking Y.1731 delay measurement and continuity tests as a standard case (from which latency, jitter, frame loss ratio and availability can all be calculated), the math works something like this:
  • 3 frames are required for each measurement instance: one Continuity Check Message (CCM) frame, a Delay Measurement Message (DMM) and its reply (DMR);
  • Each frame is roughly 200 bytes in size;
  • Worst case, you’d run these tests once per second, for a number of unique flows or SLAs concurrently – let’s take 3 as a typical number for a cell site or Enterprise running 3 unique service classes (real-time, “important”, and best-effort traffic categories).
This would lead to 3 frames x 200 bytes x 3 instances every second – 1,800 bytes per second, or roughly 14.4 kbps. Assuming a GbE link, this amounts to less than 2/1,000ths of a percent of the link capacity (!) Even in a case where you’re running 100 concurrent OAM sessions from, say, a multicast host or mobile switching center (MSC), you’re talking about roughly 0.05% of total bandwidth. Pretty small, enough to consider it negligible, especially in light of all the information it’s giving you in return. If you asked a customer if you could use 0.0015% of their link to ensure they get the best possible performance, I don’t think they’d think about it too long! We’re all used to overhead, whether it’s in the form of taxes or highway tolls – but in this case you can pretty much assume it just isn’t there, which is a relief to operators with Ethernet to deliver. Learn more about active testing’s impact with complete calculations for both OAM and PAA here.
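For anyone who wants to rerun the numbers, here’s a minimal sketch of the same calculation, assuming 200-byte frames, one CCM plus one DMM/DMR exchange per flow per second, and a 1 Gbps link:

```python
# Quick sanity check of the OAM overhead math above.
# Assumptions: 3 frames (CCM + DMM + DMR) per flow per second, ~200 bytes each.

def oam_overhead_bps(flows: int, frames_per_test: int = 3,
                     frame_bytes: int = 200, tests_per_second: float = 1.0) -> float:
    """Return the OAM probe traffic in bits per second."""
    return flows * frames_per_test * frame_bytes * 8 * tests_per_second

LINK_BPS = 1_000_000_000  # GbE

for flows in (3, 100):
    bps = oam_overhead_bps(flows)
    print(f"{flows:>3} flows: {bps / 1000:5.1f} kbps = {100 * bps / LINK_BPS:.4f}% of a GbE link")

# Output:
#   3 flows:  14.4 kbps = 0.0014% of a GbE link
# 100 flows: 480.0 kbps = 0.0480% of a GbE link
```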

Wednesday, September 30, 2009

Who is Mzima?

The future of telecom comes from stories like these: Mzima (pronounced M-zeema), an operator few of us know by name, now serving over 40 points of presence throughout North America, Western Europe and Asia with an all-packet transport network. In the six years since their inception in 2002 they grew under the radar, with almost no sales or marketing to speak of. Despite this, they’ve signed on an impressive roster of customers – how about eHarmony and Facebook for web 2.0 heavyweights?

What makes Mzima a preferred choice over today’s wholesale giants? It begins with the network. While the company was still virtually unknown, its engineering team was building out with the latest technology. Today they operate a pure Ethernet network based on PBB-TE that provides tangible benefits over MPLS: absolute transport transparency, low overhead, and cost-efficient equipment. With a pure layer 2 network, customers’ VLANs, MPLS tags, service classes and any IP applications run as if on a LAN – and when you’re dealing with content providers as key customers, simple interconnectivity that fits in with their model of distributed data centers, web-server farms and transactional processing can win you the business.
Mzima is a Swahili word that means stream, or alive, which was their first intention: to become a content delivery network for streaming video and all things media. But once they got started, they quickly realized there was a far greater need: simple, high-performance Ethernet wholesale connectivity. Over time Mzima has established settlement-free peering with major MSOs, ILECs and CLECs, and has created a reputation for highly reliable, ultra-low latency networking services just as likely to be used by financial traders as online retailers. If settlement-free peering doesn’t make traditional providers think about profitability, they might need to think again. Single customers on Mzima’s network pay upwards of $1.5M USD a month to interconnect their sites.
Being all-packet from day one has had its headaches, as any early adopter will attest, but the end result is a network that showcases all the benefits of Carrier Ethernet. Where some providers struggle with simple services like burstable bandwidth – since difficulty collecting usage stats makes billing impossible – Mzima’s uniform network is easy to maintain, and allows them not only to offer burst services, but even time-of-day bandwidth profiles for committed and excess information rates that fit the needs of content and location-based services (sketched below). Imagine doing this with a TDM-based core? Of course it can be done, but at the price of bandwidth efficiency and operational complexity.
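To make the time-of-day idea concrete, here’s a purely hypothetical sketch of how such a schedule could be expressed. The rates and hours are invented, and this is not Mzima’s actual provisioning model – only the CIR/EIR terminology follows MEF convention.

```python
from dataclasses import dataclass

# Hypothetical time-of-day bandwidth profile: generous burst headroom during
# the evening peak, a leaner committed rate overnight. Numbers are invented.

@dataclass
class BandwidthProfile:
    cir_mbps: int   # committed information rate
    eir_mbps: int   # excess information rate (burstable headroom)

TOD_SCHEDULE = {
    range(0, 8):   BandwidthProfile(cir_mbps=200, eir_mbps=100),   # overnight
    range(8, 18):  BandwidthProfile(cir_mbps=500, eir_mbps=300),   # business hours
    range(18, 24): BandwidthProfile(cir_mbps=800, eir_mbps=700),   # evening peak
}

def profile_for_hour(hour: int) -> BandwidthProfile:
    for hours, profile in TOD_SCHEDULE.items():
        if hour in hours:
            return profile
    raise ValueError("hour must be 0-23")

print(profile_for_hour(20))   # BandwidthProfile(cir_mbps=800, eir_mbps=700)
```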
Going all-packet has its advantages, but that doesn’t mean Mzima doesn’t face challenges. Ironically, the same Ethernet-loving customers that helped them expand would love to see TDM connectivity as well. Serving off-net customers with less capable peering partners can limit flexibility. And many customers need to be taught the value a pure-packet network affords. When you’re comparing apples to oranges, education is key, especially when your focus is more performance than price.
It’s certain that Mzima’s new evangelization phase will help to build awareness of the benefits of Ethernet transport. It’s also certain that they’ll attract competition, as today’s legacy operators build out native Ethernet to capture business from the largest emerging data users of our time. But I have to think that Mzima will hold its own. In a way they speak the language of a new breed of customer, a language traditional carriers have yet to learn.

Monday, August 24, 2009

NOC needs plug & go Ethernet

Everybody’s doing it: Ethernet is getting deployed on a large scale everywhere. I’ve had the chance to meet with NOC staff at several service providers recently, ranging from regional operators, to utilities, MSOs and multinational carriers. Whether for business services, wholesale Ethernet or wireless backhaul there’s a common focus: move from regional and one-off offerings to large-scale, full-footprint Ethernet deployments. We’re talking hundreds of endpoints instead of just a few, and it’s starting to take its toll on operations.

Invariably, the pain is the same for operators large and small – having moved far beyond testing and trusting the technology itself, the ability to rapidly scale Ethernet service offerings without excessive manual effort is front and center. Caution: what I’ve heard might make you choke on your coffee. We’re talking 40% success rates in service commissioning, mis-configured switches that merge management traffic with customer data, and full-fledged security breaches caused by mismatched VLANs. Oh, and the time Ethernet OAM went wild on an aggregation node and took down hundreds of cell sites. And the New York city-wide outage at a major operator, simply because standard operating procedures were overlooked.
I was sensing a trend (or maybe it was really hard to miss), so to get a bigger sampling I set up a survey on the EtherNEWS blog, and operators were quick to speak up.
Nearly 90% of respondents say Ethernet deployment automation is important or very important. Service providers are scrambling for a way to simplify the mechanics of getting E-Line and E-LAN services up and running in a reliable, repeatable way. Over half say ensuring error-free deployment is their biggest concern, followed closely by the need to configure QoS and validate that service performance is up to SLA specs. Interestingly, the cost and time required, and finding and training staff, rank as background issues. How can that be? I imagine it’s because if you get automation working, you can do much more with less staff, and training, cost and time drop out of the equation.
So quality and consistency is driving the need for a Plug & Play equivalent for Ethernet services – more accurately Plug & Go, or Plug & Run, since everyone’s tired of playing around with their Ethernet gear late into the overtime hours.
Are there any efforts emerging to standardize a quick, easy way to get Ethernet up? The closest parallel is probably the CableLabs DOCSIS cable modem self-registration standard, a key reason why cable operators were able to deploy home phone service and high-speed internet largely with staff that had little experience with either. So is the MEF, the IETF or the IEEE up to something? Haven’t heard a whisper – but you can be sure that if the NOC folk have their say, they’ll be making a lot of noise very soon – just as soon as the fires are out and they see the light of day again.
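For illustration, here’s a hypothetical outline of what a DOCSIS-style self-registration flow might look like for an Ethernet demarcation device. Every step is an assumption on my part – there is no standardized Ethernet equivalent today, and this is not Accedian’s actual Plug & Go mechanism.

```python
# Hypothetical "plug & go" bring-up sequence for an Ethernet demarcation
# device, loosely modeled on DOCSIS self-registration. Illustrative only:
# not a standardized Ethernet procedure, and not a vendor implementation.

STEPS = [
    "link up on the network port, join the default management VLAN",
    "DHCP request: learn a management IP and the configuration-server address",
    "fetch the service template keyed on the device serial number",
    "apply EVCs, VLAN maps, CoS settings and bandwidth profiles",
    "run turn-up validation (loopback / throughput checks) against the far end",
    "report pass/fail to the NOC before the customer ever plugs in",
]

def plug_and_go() -> None:
    for number, step in enumerate(STEPS, start=1):
        print(f"step {number}: {step}")

if __name__ == "__main__":
    plug_and_go()
```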
Accedian Networks’ Plug & Go instant provisioning feature was inspired by these needs in the NOC. Learn all about this amazing technology by watching this short video.

Monday, August 17, 2009

The 3D, 4G mesh

Just in time to join the big summer sci-fi blockbusters is a bigger-than-life techno-drama for mobile operators: the 3D, 4G Mesh. Unfortunately it’s not entertainment, not even mildly entertaining: tackling sticky QoS issues is a serious dilemma for providers rolling out WiMAX and LTE backhaul. In a previous post I outlined how the move to intelligent, self-organizing networks (SONs) has created unprecedented performance challenges for 4G mobile backhaul. Towers communicating directly with each other to coordinate roaming hand-offs and to deliver and optimize user traffic have created an adaptive, mesh-based network where the intelligence has been delegated to “empowered towers”.


However operators choose to connect their cell sites together, whether through a direct mesh or traditional hub-and-spoke design, it’s the tower-to-tower latency, jitter, packet-loss and prioritization that counts as users roam between cells while watching District 9. From the user-experience perspective, the network is a mesh regardless of how the data gets moved around. And this is where the mind-bending fun begins.
Enter the 3rd Dimension
The word exponential is not common in backhaul networking. We’re much more comfortable thinking about tidy point-to-point circuits, or even 2D “clouds” with data in, data out. But packet-based applications have gone beyond this to the third dimension: quality of service tiers (service classes) stack up on the network. Priority traffic associated with real-time applications like VoIP and video are latency and jitter sensitive, and need special handling so calls don’t go robotic. And control-plane traffic is just as critical as we roam on the highway and our conversations jump tower-to-tower within milliseconds. Stack up to 8 classes of service on the mesh interconnectivity of 4G backhaul and you’ve got a really interesting mess – in fact an exponential mesh mess.
To illustrate, take a simple example: just 4 towers and a Mobile Switching Center, connected through an Evolved Packet Core (EPC) to PSTN and Internet gateways. The most basic configuration would be 3 classes of service between each site (control plane, real-time applications and best effort). The result? 54 unique service flows to maintain (27 flows in each direction). Now take a more realistic scenario: 100 towers talking to each other while homing to an MSC, and 5 classes of service. The damage? 49,510 unique flows (I’ll let you verify the math – there’s a rough sketch below)!
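If you want to sanity-check numbers like these yourself, here’s a back-of-the-envelope model that assumes a full mesh of towers that also home to the MSC, with one flow per class per direction. The exact totals depend on which site-to-site links you count, so it lands in the same ballpark as the figures above rather than reproducing them exactly.

```python
from math import comb

# Rough flow-count model. Assumption: every tower exchanges traffic with
# every other tower and with the MSC; each site pair carries one flow per
# class of service, counted separately in each direction.

def unique_flows(towers: int, classes: int) -> int:
    site_pairs = comb(towers, 2) + towers   # tower-to-tower pairs plus tower-to-MSC
    return site_pairs * classes * 2          # per class, per direction

print(unique_flows(4, 3))     # small example: 4 towers, 3 classes -> 60
print(unique_flows(100, 5))   # 100 towers, 5 classes -> 50,500 (same order as 49,510)
```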
In these 49,510 flows, at least 40% (19,804) will be high-priority streams that are particularly QoS sensitive. They’ll need to be monitored for latency and jitter, packet loss, throughput and availability in real-time. Not monitoring is not an option: if something went wrong, how would you even know where to start troubleshooting when you’ve got almost 20,000 flows to sift through? And the other 30,000 or so? They also need to be monitored, at the very least for packet loss and continuity – because you want to know if the whole pipe went down or just one service.
So you’re the operations guy (who definitely is watching a different kind of widescreen content in the NOC). Where do you start? The approach most operators are using clones the mesh itself with a service assurance overlay. Network Interface Devices (NIDs) capable of monitoring up to 100 flows each in a full-mesh setup are installed at each cell site and the MSC. Automation gets them all talking and watching each flow, and a centralized monitoring system crunches mountains of per-second data, boiling it off into a dashboard view that makes sense of this 3D, 4G world.
Sometimes it’s interesting to know what’s happening behind the scenes: the making of one of the most amazing networking stories of our time.

Tuesday, August 4, 2009

LTE backhaul: Think twice

When you start digging into LTE, you find it’s a pretty amazing technology – not just the speeds and feeds, but the way it was thought out from the bottom up. With technology migration always a painful problem for operators, LTE was designed to simplify deployment and maintenance and to reduce operating costs with the concept of Self-Organizing Networks (SONs) running over a flat IP infrastructure. Base stations are much more sophisticated than in 3G and other wireless models: they are responsible for managing their radios, optimizing service quality, discovering neighboring cells, and connecting themselves to the backhaul network.

But perhaps the most important change in LTE base stations (or “evolved Node Bs” as they are known) is their responsibility for managing the service itself. Where 2G and 3G networks rely on centralized radio network controllers and base station controllers (RNC/BSC), LTE goes without: each tower communicates with its nearest peers to hand off users as they roam from cell to cell. Both control plane (roaming and call control) and user data traffic pass directly between towers, connected in a mesh-style backhaul network. This distributed networking and intelligence can reduce latency and free core capacity by sending data directly to its destination without passing through a centralized aggregation point.
Sounds wonderful – a clear advance in mobile mechanics that takes full advantage of the advanced routing capabilities of today’s MPLS infrastructure – but like so many things, great ideas quickly run into roadblocks where the rubber hits the road. Here’s the tricky part: with LTE’s rates planned to ramp to 150 Mbps per user (!), the backhaul network has to be future-proofed from day one. This means a lot of fiber to towers that are mainly fed today by a bundle of T1s over copper. In 3G this problem opened a whole new market: alternative access vendors (AAVs) such as cable MSOs, fiber-rich CLECs, pure-play backhaul providers and even utilities stepped up to fill the gap. Wholesaling backhaul is the name of the game in 3G, where the fastest deployments are ramping on the networks of others.
But this scheme doesn’t fit so well into LTE’s full-mesh architecture. Backhaul is traditionally provided over point-to-point links: AAVs deliver Ethernet in, Ethernet out, logically connecting each tower to a centralized switching center. The concept of tower-to-tower communication is beyond their domain, and their control. I’ve never heard of wholesale MPLS backhaul. Imagine the complexity of getting everything talking? If it sounds like a major headache, it is, and no amount of “self-organization” will help.
So operators rolling out LTE have a difficult choice: go it alone with their own MPLS network (if they have one), or lease backhaul service based on point-to-point Ethernet. Towers can still talk to one another, but all the traffic that would just hop to the next cell now has to loop through the switching center, just like in the good old days.
Twice the Trouble
Problem is, this has a serious impact on latency as the data path stretches out over a much longer distance. This added delay, combined with decentralized roaming control managed by the base stations, spells dropped and choppy calls… unless the AAVs deliver super-low latency. It’s hard enough delivering Ethernet backhaul with the tight performance demands of 3G, where tower-to-switching-center latency needs to be in the single digits of milliseconds. Bad news for AAVs: SLAs for LTE will cut this spec in half. Since control plane traffic has to pass from one cell to another, it effectively doubles the path length of what used to be centralized commands sent directly to the towers. So packets have to get there twice as fast. This isn’t a bandwidth issue – increasing capacity won’t do much for latency. This is more a speed-of-light, switching performance and network optimization issue.
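To put the path-length argument in rough numbers, here’s a sketch that assumes roughly 5 µs of propagation delay per kilometre of fibre and invented distances. It ignores switching and queuing delay entirely, so it’s the floor, not the whole story.

```python
# Rough illustration of why hair-pinning tower-to-tower traffic through a
# switching center hurts latency. Assumes ~5 microseconds of fibre
# propagation delay per km; distances are invented for the example and
# switching/queuing delay is ignored.

US_PER_KM = 5.0

def one_way_delay_us(km: float) -> float:
    return km * US_PER_KM

direct_km = 10            # neighbouring towers a few cells apart
via_hub_km = 120 + 115    # tower -> switching center -> neighbouring tower

print(f"direct:  {one_way_delay_us(direct_km) / 1000:.2f} ms")   # 0.05 ms
print(f"via hub: {one_way_delay_us(via_hub_km) / 1000:.2f} ms")  # 1.18 ms
```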
So what’s a cellco to do? Early LTE deployments precisely echo the backhaul dilemma; only the largest operators with significant MPLS footprint are in the game, and outsourced backhaul will only come into play where their own network can’t reach… and when it does, it’s a sure bet it’ll come with some of the tightest SLAs telecom has ever seen. The AAVs that can rise to the challenge are sure to win big, because there won’t be too many stepping up to the plate.

Wednesday, July 8, 2009

Least-cost Ethernet?

The CBX event in New York last week harked back to the heyday of telecom, while at the same time pointing to the future – there was a buzz in the air as the busy show floor turned into a dynamic meeting place between service providers large and small, working out new wholesaling arrangements to gain market access and deliver new services. That part wasn’t new; the Telx-sponsored event has always been about making deals. But two new trends rang loud in conversations and at the conference: the arrival of Ethernet wholesale, and the need for ultra-low latency circuits. Of course they’re related: as critical applications demand ever more bandwidth, they are driving the need for cost-efficient Ethernet connectivity, but not at the expense of performance.

In a way, this situation underpins the need for Quality of Service (QoS) service level agreements (SLAs). Justifiably, SLAs are even more important with a non-deterministic technology like Ethernet than in TDM days past. But conversations with leading operators hinted that SLAs alone might not be enough to secure Ethernet wholesale contracts. Carriers are used to having access to a variety of routes with a range of quality guarantees, switching between them dynamically using least-cost routing schemes. There was certainly speculation that similar systems may soon be needed in wholesale Ethernet.
It’s not without precedent. Least cost routing followed a similar progression in the voice world. Circuit-switched voice was the original application driving the need for routing platforms, sophisticated systems that compare the current tariffs offered by dozens of carriers in real-time, dynamically sending traffic over the lowest-cost routes as capacity permits. When VoIP arrived, a new wrinkle was introduced. Cost and capacity were no longer the only variables: toll-quality QoS was no longer a given. Different encoding and compression schemes impacted calls, so did changes in network traffic resulting in congestion-based packet loss. So least-cost routing systems evolved to factor in QoS as a new variable – test heads now perform thousands of test calls every hour over available routes, allowing carriers to match the lowest-cost route meeting a pre-defined quality to particular customers or applications (e.g. business vs. residential or cellular calls).
There’s a similar interest building in quality-aware least-cost Ethernet routing. Imagine feeding bandwidth, jitter, latency, packet-loss and availability criteria into a routing system that would automatically select the lowest-cost transport alternative. Instead of SLAs simply written on paper and occasionally verified, dynamic performance criteria would allow a variety of technologies and routes to serve specific applications. Imagine routing your financial, transactional, VoIP and video traffic over different, premium routes, while lower-priority email, Internet, remote storage and other applications shared a lower-cost pipe. Imagine this all being dynamically routed in real-time based on various carriers’ performance and capacity. The cost savings (and financial rewards) would certainly be non-negligible to those who could pull it off.
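Here’s a minimal sketch of what that selection logic could look like: pick the cheapest route whose measured performance meets the application’s criteria. The route names, costs and metrics are invented for the example.

```python
from dataclasses import dataclass
from typing import Optional

# Toy quality-aware least-cost selection: filter routes on measured QoS,
# then take the cheapest survivor. All values below are invented.

@dataclass
class Route:
    name: str
    cost_per_mbps: float     # illustrative monthly $ per Mbps
    latency_ms: float        # measured one-way latency
    jitter_ms: float
    loss_pct: float
    availability_pct: float

@dataclass
class SlaCriteria:
    max_latency_ms: float
    max_jitter_ms: float
    max_loss_pct: float
    min_availability_pct: float

def meets(route: Route, sla: SlaCriteria) -> bool:
    return (route.latency_ms <= sla.max_latency_ms
            and route.jitter_ms <= sla.max_jitter_ms
            and route.loss_pct <= sla.max_loss_pct
            and route.availability_pct >= sla.min_availability_pct)

def least_cost_route(routes: list[Route], sla: SlaCriteria) -> Optional[Route]:
    candidates = [r for r in routes if meets(r, sla)]
    return min(candidates, key=lambda r: r.cost_per_mbps, default=None)

routes = [
    Route("carrier-a", 8.0, 4.5, 0.6, 0.01, 99.99),
    Route("carrier-b", 5.0, 12.0, 2.0, 0.05, 99.90),
    Route("carrier-c", 6.5, 6.0, 1.0, 0.02, 99.95),
]
voip_sla = SlaCriteria(max_latency_ms=8, max_jitter_ms=1.5,
                       max_loss_pct=0.05, min_availability_pct=99.95)
best = least_cost_route(routes, voip_sla)
print(best.name if best else "no route meets the SLA")   # carrier-c
```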
So how would you go about testing and verifying Ethernet link QoS on a real-time basis? Luckily the answer is already written in the standards: the Ethernet Operations, Administration and Maintenance (OAM) specs to be precise. The ITU-T Y.1731 performance monitoring standards specify one-way and round-trip delay and jitter measurements, packet-loss ratio and the ability to perform loopback-based throughput tests. Complementary connectivity fault management (CFM) OAM specs in Y.1731 and the IEEE 802.1ag provide continuity and availability data. So within OAM, all the QoS and SLA monitoring measurements are already available – just waiting to be fed into the next generation of packet-based, quality-aware least-cost routing systems.
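For reference, the two-way delay figure in a DMM/DMR exchange is derived from four timestamps, so the responder’s processing time drops out of the result and the two clocks don’t need to be synchronized. A minimal sketch, with invented timestamp values:

```python
# Sketch of the two-way delay calculation behind a Y.1731 DMM/DMR exchange.
# The responder's two timestamps only need to be consistent with each other,
# so no clock synchronization is required. Values are invented.

def two_way_frame_delay_ns(tx_dmm: int, rx_dmm: int, tx_dmr: int, rx_dmr: int) -> int:
    """(round-trip time of the exchange) minus (time the responder held the frame)."""
    return (rx_dmr - tx_dmm) - (tx_dmr - rx_dmm)

# initiator sends DMM at t=1_000_000 ns, responder receives it at 1_450_000,
# responder sends the DMR at 1_460_000, initiator receives it at 1_905_000
print(two_way_frame_delay_ns(1_000_000, 1_450_000, 1_460_000, 1_905_000))  # 895000 ns
```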
These OAM features are gradually being integrated into Carrier Ethernet network elements, and should be predominant in the gear serving Ethernet wholesale within a few years. Today, high-performance network interface devices (NIDs) already provide full OAM functionality using dedicated-silicon, hardware-based processing – ensuring that no delay or jitter is added to the traffic being monitored. These devices allow operators to introduce OAM capabilities over “legacy equipment” that may only offer a partial feature set of OAM using low-performance, software-based solutions.
When you see the technology and a need arriving at the same time in the high-volume, cost-sensitive business that’s telecom wholesale, you know it’s only a matter of time before performance and price get squeezed to the max.
It’s just good business.

Thursday, April 2, 2009

The micro-burst that killed my network

The IT department at the stock trading firm was under serious pressure. Trading times, carefully tracked by sophisticated monitoring systems, were becoming increasingly irregular – latency was creeping into the tens of milliseconds, twice as long as in previous weeks, and certainly slower than competitors’ systems. Even without a glance at trading times the brokers knew something was amiss – revenue was dropping, and they suspected their trades were being executed just behind others beating them to the markets.

But things didn’t add up – outside their company walls they were running at 50% of their available access link bandwidth, and network latency measurements showed far less than what the trade monitoring system reported. The same was true in the LAN and in the data center – each appeared to be working at peak performance. There was something they didn’t see, and it was costing them millions.
Today almost half of all trades executed globally are initiated and completed by computers, not humans. These algorithmic trading platforms, as they are known, constantly scan international markets for price discrepancies that offer a nearly instantaneous, guaranteed return to financial institutions that can move near the speed of light to buy and sell across global markets. It’s a proven and increasingly important strategy, but you have to be fast.
Although stock trading is an industry that is severely affected by network performance issues, many other verticals are similarly affected – anything transactional that involves time or money feels the impact of delays, packet loss and capacity issues. Today we’re not just running networks that have to keep up with the speed of business, they define the speed limit.
So what was happening back at the broker? What were they missing? Knowing nothing had changed inside their company walls, they turned to their service provider for help.
The Microburst Phenomenon
Their operator, specialized in serving financial markets, had seen this scenario before, and luckily their networks were well instrumented. They had visibility on per-flow, one-way latency to microsecond resolution, but their measurements also turned up good results. That’s when they knew it was time to look deeper. By increasing the granularity of their utilization monitoring to a per-second basis, they discovered that even though bandwidth utilization appeared normal over their standard five-minute monitoring intervals, there were microbursts of data that went well beyond the commissioned bandwidth – up to 140% – even if only momentarily.
They were measuring these traffic stats just before the trader’s data entered their network – before their network interface devices’ (NIDs’) regulators – so they could see the micro-bursts that were resulting in very short-term packet loss: just a few frames dropped as the peaks hit. This small, almost negligible loss meant trading time would nearly double – the missing packets needed to be retransmitted to complete the buy or sell request, adding precious milliseconds to the transaction.
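Here’s a minimal sketch of that detection step – turning per-second byte counters into utilization and flagging any second above the commissioned rate. The counter values and committed rate are invented for the example, but they show why a five-minute average hides the burst completely.

```python
# Detect per-second microbursts from byte counters. Counter values and the
# commissioned rate below are invented for illustration.

COMMITTED_BPS = 100_000_000   # e.g. a 100 Mbps commissioned rate

def find_microbursts(bytes_per_second: list[int], committed_bps: int = COMMITTED_BPS):
    """Yield (second, utilization %) for every interval above the committed rate."""
    for second, byte_count in enumerate(bytes_per_second):
        bps = byte_count * 8
        if bps > committed_bps:
            yield second, 100.0 * bps / committed_bps

# 300 one-second samples near 50% utilization, with a single burst at t=120
samples = [6_250_000] * 300    # 6.25 MB/s == 50 Mbps == 50% of the committed rate
samples[120] = 17_500_000      # 140 Mbps for one second -- the microburst

for second, pct in find_microbursts(samples):
    print(f"t={second}s: {pct:.0f}% of committed rate")   # t=120s: 140% of committed rate

# Averaged over the whole five-minute window the same traffic looks like ~50.3%,
# which is why the burst never shows up on coarse-grained utilization graphs.
print(f"5-min average: {100 * sum(samples) * 8 / len(samples) / COMMITTED_BPS:.1f}%")
```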
How did they fix the problem? There are a number of ways to react when you know micro-bursting is affecting application performance. A simple solution would be to increase committed or excess bandwidth or burst size limits; another would be to shape and smooth out any traffic sharing the same link that isn’t sensitive to latency or jitter. With advanced traffic monitoring, classification and per-flow conditioning at the ingress to the network, either the service provider or the end-user can optimize their service flows for bandwidth efficiency, performance, or a combination of both, depending on each application’s requirements.
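As a toy illustration of one of those options – raising the burst size – here’s a very simplified token-bucket policer run against an invented traffic pattern with a 50 ms burst in the middle. With a small committed burst size it drops a couple of hundred kilobytes; with a larger one the same burst sails through.

```python
# Simplified token-bucket policer over per-interval byte counts. Rates, burst
# sizes and the traffic pattern are invented for the example.

def police_bytes(arrivals, rate_bps, burst_bytes, interval_s=0.0001):
    """Return the total bytes dropped by a single-rate policer."""
    tokens = float(burst_bytes)
    refill = rate_bps / 8 * interval_s
    dropped = 0
    for arrived in arrivals:
        tokens = min(burst_bytes, tokens + refill)   # refill each interval, capped at CBS
        served = min(arrived, tokens)
        dropped += arrived - served
        tokens -= served
    return dropped

# 100-microsecond intervals: ~50 Mbps of steady traffic with a 50 ms burst at 140 Mbps
steady, burst = 625, 1_750    # bytes per 100 microseconds
arrivals = [steady] * 2_000 + [burst] * 500 + [steady] * 2_000

CIR = 100_000_000             # 100 Mbps committed rate
print(police_bytes(arrivals, CIR, burst_bytes=2_000))     # small burst size -> ~249 KB dropped
print(police_bytes(arrivals, CIR, burst_bytes=300_000))   # larger burst size -> 0 dropped
```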
Micro-bursts are not a new thing – they happen all the time, over all kinds of networks, and affect a wide range of applications. We just don’t notice them because of their highly transient nature. The evolution of Ethernet and IP/MPLS performance monitoring has reached a point where measurements are not only highly precise but also highly granular – we now have the tools we need to detect these short-term events and take action to ensure applications are running at peak performance. With these capabilities now available in cost-efficient NIDs and monitoring platforms as a standard feature, they are easily within reach of both service providers and their customers. When performance puts your business ahead, you now have the ability to set your own speed limit and avoid those million-dollar tickets.

Wednesday, March 25, 2009

Ethernet survival: Food, clothing, shelter... OAM?

Remember Maslow’s “Hierarchy of Needs” theory? Usually mapped out as a pyramid, the basic idea is that before we satisfy our more abstract desires we first need to secure the basics – survival necessities, and safety.

Ethernet, apparently, is somewhere at these survival and safety stages – really the bottom of the pyramid – at least when it comes to Ethernet business services and wireless backhaul applications. For anyone deploying Ethernet today, there’s a good chance they’re hunting and gathering information about the key technologies they need to get their services up and running.
We all know Ethernet’s hit prime time – a survey released this month by Vertical Systems Group says business Ethernet services will grow from a $10B to a $39B market over the next 4 years – that’s crazy growth any way you slice it. They summarize their findings with, “Revenue from each of the regional market segments is expanding at a rate that’s more than double those of competing technologies.”
But in engineering and operations departments, it’s not a question of need, it’s more a question of survival – how do you cost effectively deploy reliable, resilient Ethernet services? It’s still a jungle out there if you’re doing the engineering.
What tools are they looking for? Let’s take a look at the survival kit they’re building.
Some insight comes from a survey we conduct on Accedian.com to customize monthly webinars that introduce key technologies for high availability, low latency, QoS-assured Ethernet services. Attendees complete the survey to tell us what they are most interested in learning about, and we adapt the content to their needs. The results are consistent from month to month, for telcos, MSOs and carriers across North America, Europe, Africa, Latin America and Asia. Over the last 4 months, the pyramid of needs checks in as:
  • Ethernet OAM – 100% of respondents
  • QoS monitoring – 89%
  • MEF service mapping – 65%
  • Edge aggregation, traffic shaping & rate limiting – 55%
  • Automated provisioning – 47%
  • Turn-up, loopback & in-service RFC 2544 throughput testing – 43%
The needs have changed as operators move closer to deployment – and from talking to the audience we can tell they are in the thick of it: starting to roll-out large-scale services or in the final planning stages.
A year ago you’d have seen loopback and turn-up as the most important topics – reactive troubleshooting and provisioning basics to get customers up and running [“survival-level” needs]. Today, Ethernet Operations, Administration & Maintenance (OAM) and continuous monitoring top the list [“safety-level” needs].
Service providers have shifted their focus from testing at turn-up to maintaining performance and reliability over the long haul. They’re also looking at better ways to create, aggregate and optimize services – the mechanics of provisioning and bandwidth optimization. And deployment hasn’t been forgotten, it’s just taken a new spin – instead of just turning up a circuit, they’re now more interested in automating it. The question we keep hearing is “how can we make deploying Ethernet as simple as possible?” There are big operational issues with wide-scale deployment, and urgency in this area often reflects how close these operators are to wide-scale roll-out.
Just as most of us no longer have food and shelter foremost on our minds, service providers are focusing forward in the evolution of Ethernet: ongoing service performance, management and deployment automation are the needs of the moment. It’ll be interesting to see how fast these needs get satisfied – with this level of demand the answer I hear the most is: “Not fast enough.”