
Mobile gambling wagers to surpass $48bn by 2015, led by Chinese lottery deployments, says research


A new report published today by Juniper Research has found that a combination of mobile casino, lottery and betting service launches in major emerging markets, led by China, together with the liberalisation of remote gambling legislation across the US and Europe, will see annual wagers on mobile gambling exceed $48bn by 2015.

The report studies gambling services on a country-by-country basis. It finds that, in recent years, the Japan Racing Association’s iPAT service had been responsible for the bulk of global mobile gambling transactions, with casino/betting services in the UK accounting for much of the remainder. However, the sharp surge in adoption of the mobile lottery service launched by VODone will help propel China into third place in terms of mobile gambling transactions.

Meanwhile, the US market is also poised to see the introduction of its first mobile lottery services. According to report author Dr Windsor Holden, “State lottery providers are anxious to explore new distribution channels, with US lottery sales from traditional outlets in decline. The upshot is that several lotteries are in the latter stages of discussion with mobile technology providers with a view to launching mobile lottery services in 2011.” In addition, the report observes that impending legislative changes in the US may herald an opportunity for mobile casino operators in the medium term.

ZTE handset shipments increase by 40% to 28 million in 1H 2010


ZTE, a global provider of telecommunications equipment and network solutions, today announced that it shipped 28 million handsets globally in the first half of 2010, a 40% increase year-on-year. The company attributes this to increasing demand from international markets, as well as a spike in 3G usage in China and growing smartphone popularity worldwide.

In the first half of 2010, ZTE shipped 17 million handsets to international markets (i.e. markets outside China), making up 60% of total shipments and representing an increase of 30% year-on-year. In the European market, ZTE recorded a year-on-year increase of 150%.

In the UK, sales of ZTE handsets in Q2 2010 exceeded 1 million units, and sales in France also exceeded 1 million in the first half of the year. ZTE has partnered with leading operators in Europe, supplying custom-made Android smartphones and custom-made WCDMA units.

In the first half of 2010, ZTE launched nearly 10 Android handset models, including the Racer, Link and X850, which have been adopted by H3G in the UK, Bouygues Telecom in France, VIVO in Brazil and operators in other markets. In China, the ZTE X850 became one of the major Android devices offered by China Unicom.

As a partner of Microsoft, ZTE is working closely on Windows Phone 7.0 product development after the global launch of the WM6.5 handset last October in Portugal and other countries.

A Mobile Marriage Made in Heaven: Capacity Meets Intelligence


Ronen Guri, director of business development, mobile backhaul, RAD, tells Keith Dyer that operators need to add sophisticated traffic management, performance monitoring and timing over packet technologies to their packet switched backhaul networks to enhance their end-to-end mobile SLAs.

Keith Dyer:
Ronen, what changes have you seen recently in the way your operator customers are addressing their backhaul requirements?

Ronen Guri:
Well if you were to go back a bit, clearly we would have seen mobile backhaul on a TDM/SDH infrastructure using T1/E1 interfaces. But then, as we all know, came 3G and the move to HSPA, which brought with it a change in the volumes and types of data traffic.

So then we started to see more and more mobile operators move to Ethernet to handle packet-switched backhaul.
Some started operating a hybrid model, switching voice over those legacy connections and data over the packet-switched network. But we are now seeing more and more mobile operators take all their traffic, voice, data and signaling, over the packet-switched network. However, this move to packet-switched backhaul has tended to emphasise providing a fat pipe rather than a smart pipe with enhanced class of service (CoS) and traffic management support. In some cases you will see basic CoS support in the backhaul network, but not end-to-end control.
Allied to this, we have observed that in all cases mobile operators will require SLAs for their backhaul networks. SLAs were an inherent part of TDM technology, second nature to it. For historical reasons they aren’t in packet technology and need to be recreated. This is done by deploying equipment engineered to support performance standards, high availability and throughput thresholds, and to monitor backhaul network behavior at all times, so that both the mobile operator and the mobile backhaul provider, whether it is a wholesale carrier or the transport division of a fixed-mobile operator, can assure the end-to-end SLAs they are committed to before problems affect subscriber services.
The big question facing everyone is how the backhaul transport provider can guarantee an end-to-end SLA for the mobile operator’s network.

Keith Dyer:
So if the move is from fat but dumb connectivity to smart backhaul, how does the industry equip itself to do that?

Ronen Guri:
Our main proposal, which works both for a wholesale transport provider and for operators with self-owned backhaul, is to equip transport networks with Mobile Demarcation Devices (MDDs). These are located at each and every point of interface between the mobile operator and the transport operator – depending on the network topology they can be at a cell site, a hub site or an aggregation node. These MDDs are devices that can do all that is needed to guarantee the SLA of the network: hierarchical traffic management, sophisticated performance monitoring, threshold testing and turn-up, timing over packet and service validation tools.

Keith Dyer:
Hold on a moment. Most regional mobile operators have thousands, if not tens of thousands, of cell sites. Can transport providers afford the CapEx for MDDs?

Ronen Guri:
I’d say they can’t afford not to. The alternatives don’t have the technology to support iron-clad SLAs. Of course, the MDD has to have all the technologies required at a price point that enables transport providers to deploy in the tens of thousands of cell sites that you mentioned. That’s not trivial.

Keith Dyer:
When you talk of monitoring SLAs in the backhaul network, what is required to do that?

Ronen Guri:
You need to build parameters to provide SLAs for three, four or eight different classes of service, so that each and every CoS has guaranteed availability, packet loss, delay and delay variation. In order to do that you need to provide traffic management and scheduling for every CoS across each and every EVC in the network.
If you can guarantee the SLAs in this manner then you can allow sophisticated traffic handling in the network, and that allows wholesale providers with several mobile operator customers to provide different SLAs according to what each of them requires. Another requirement is the ability to provide end-to-end monitoring and fault detection of the network itself. That requires an end-to-end view so that if any device has a problem, you can use an alternative path in the network. The requirement here is for fast detection times to boost the resiliency of the network itself, as we are talking about measuring delay, delay variation, packet loss and the availability of the network, amongst other parameters, across every CoS within every connection.
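As a rough illustration of what that per-CoS, per-EVC checking amounts to, the sketch below compares measured KPIs against SLA thresholds; the class names, thresholds and data structures are invented for illustration, not RAD's implementation.

```python
# A rough sketch, not RAD's implementation: comparing measured per-CoS KPIs
# against illustrative SLA thresholds for each EVC. All names and numbers
# below are invented for illustration.

SLA_THRESHOLDS = {
    # cos: (max frame loss ratio, max one-way delay in ms, max delay variation in ms)
    "voice":      (0.001, 10.0, 2.0),
    "signaling":  (0.001, 20.0, 5.0),
    "data":       (0.010, 50.0, 20.0),
}

def check_evc(evc_id, measurements):
    """measurements: {cos: {"loss": ratio, "delay_ms": x, "dv_ms": y}}"""
    violations = []
    for cos, m in measurements.items():
        max_loss, max_delay, max_dv = SLA_THRESHOLDS[cos]
        if m["loss"] > max_loss:
            violations.append((evc_id, cos, "frame loss", m["loss"]))
        if m["delay_ms"] > max_delay:
            violations.append((evc_id, cos, "delay", m["delay_ms"]))
        if m["dv_ms"] > max_dv:
            violations.append((evc_id, cos, "delay variation", m["dv_ms"]))
    return violations

# Example: one EVC carrying three classes of service; only the data class breaches its SLA.
print(check_evc("EVC-17", {
    "voice":     {"loss": 0.0005, "delay_ms": 8.2, "dv_ms": 1.1},
    "signaling": {"loss": 0.0002, "delay_ms": 9.0, "dv_ms": 2.0},
    "data":      {"loss": 0.0200, "delay_ms": 35.0, "dv_ms": 12.0},
}))
```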

Keith Dyer:
When you talk of assuring services across all packet-switched architectures, that always raises the question of timing and synchronisation. Does the MDD provide a solution to that issue as well?

Ronen Guri:
Yes. Since packet-switched networks do not include inherent synchronisation mechanisms, they require complementary clock transfer solutions, and nowhere is this issue of greater importance than in mobile backhaul. To ensure service quality for mobile traffic and avoid dropped calls, mobile networks require exceptionally stringent phase and frequency accuracy, which calls for equally exceptional high-performance clocking capabilities. We have announced that our MDDs build on RAD’s position as a leader in timing over packet. RAD’s MDDs are equipped with the SyncToP suite of synchronisation and timing over packet technologies, which includes Synchronous Ethernet, the IEEE 1588v2 (1588-2008) Precision Time Protocol and NTR, and which is designed to provide clocking accuracy at SDH/SONET levels over packet transport. One reason why this versatility is crucial in an MDD is that, for various reasons, the mobile operator and the transport provider may be using different synchronisation technologies in their respective networks.
Therefore the MDD must be equipped for every contingency. Another possibility is that the transport provider might want to sell timing capabilities, and for that it needs timing support and interfaces in the MDD. On the other hand, if the timing is provided by the mobile operator, the transport provider needs 1588 transparent clock (1588-TC) support in order to provide a better timing transport network.
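For readers unfamiliar with how a clock is recovered over packet, the snippet below shows the basic IEEE 1588v2 offset and path-delay calculation in simplified form; real deployments add filtering, clock servo loops and the transparent-clock corrections mentioned above, and the timestamps here are invented.

```python
# A simplified illustration of the IEEE 1588v2 (PTP) offset and path-delay
# calculation referred to above; real deployments add filtering, clock servo
# loops and transparent-clock corrections. Timestamps below are invented, in
# nanoseconds.

def ptp_offset_and_delay(t1, t2, t3, t4):
    """
    t1: Sync sent by master, t2: Sync received by slave,
    t3: Delay_Req sent by slave, t4: Delay_Req received by master.
    Assumes a symmetric path, which is what well-engineered backhaul
    timing distribution tries to approximate.
    """
    offset = ((t2 - t1) - (t4 - t3)) / 2            # slave clock error vs. master
    mean_path_delay = ((t2 - t1) + (t4 - t3)) / 2   # one-way delay estimate
    return offset, mean_path_delay

# Example: an offset of +300 ns and a mean path delay of 50 microseconds.
print(ptp_offset_and_delay(1_000_000, 1_050_300, 1_200_000, 1_249_700))
```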
Keith Dyer:
So you have enhanced SLA assurance, timing over packet functionality and traffic management features across the backhaul network, and that translates into the ability for operators to be able to manage their networks more efficiently and profitably.

Ronen Guri:

Absolutely. The important thing to realise is that when it comes to validation of services, it is the backhaul that has been the tricky part. The role of the MDD is to make available in the backhaul all the capabilities operators are used to having within the RAN and the core.

Keith Dyer:
And finally, what changes to the backhaul do you think we will see with the introduction of LTE?

Ronen Guri:

It’s still too early to tell, but with LTE deployment it’s important to note that the topology can change completely to a mesh architecture. More cell sites will be deployed and they will be talking to one another as well as to the network core. Data traffic patterns will change in the backhaul, and transport networks will have to be optimized to respond to these shifting patterns.
This is definitely not a problem that can be solved by capacity alone. With our focus on smart pipes and on end-to-end SLA assurance, it’s clear that LTE and MDDs make enormous sense together.

Assuring Ethernet services in mobile backhaul


The introduction of IP/Ethernet solutions into mobile backhaul networks gives operators not only new opportunities but also new challenges as they need to validate and assess the performance and the quality of their services. Vikas Arora, CTO of EXFO, tells Keith Dyer how using the right test and service assurance solutions will be crucial to the success of operators’ backhaul migration strategies.

Mobile Europe: Vikas, what are the overall priorities of your customers, the network operators and backhaul network providers, as they consider migrating their legacy mobile backhaul infrastructure to IP/Ethernet?

Vikas Arora: The key thing is cost reduction. Currently backhaul is predominantly carried over T1/E1 lines and that represents the single largest area of OPEX spending operators have within their networks. We tend to hear a fairly constant number from operators — that somewhere around 25-30% of their overall OPEX goes into the network operations, with backhaul forming a significant part of that.

So it’s not a secret that operators are looking to reduce the cost of mobile backhaul, and as a part of that they are looking to move to a more scalable and cost-efficient technology like Ethernet to support future bandwidth requirements and to protect future investments.
Operators also know that most outages originate from the mobile backhaul network, with backhaul accounting for 40-60% of problems, compared to 15% due to the base station and just 5% in the RF. So alongside the drive to lower OPEX, operators also need to increase the reliability and availability of their backhaul networks.

Mobile Europe: Yet as they do that, they are faced with renewed challenges in terms of assuring and validating services, which are encapsulated within Ethernet in a very different manner from TDM networks.

Vikas Arora: Yes, IP/Ethernet presents operators with a totally different environment from TDM, meaning that operators need to move from just testing network performance, which was acceptable within TDM, to testing network and service performance within Ethernet.
Additionally, despite the fact that operators are introducing this new networking technology, they know they can’t afford to take too much time to stabilise those networks, and they also cannot have the technology take too high a financial toll on their methods and procedures. That means they need to look at whether they have the ability and the tools to manage the user experience across the network in a centralised and scalable manner, whichever architecture they have chosen.
So in terms of operators’ requirements, as they move to packet-based IP/Ethernet backhaul networks, they need to make sure of three things. First, that they have access to the right tools to help them manage this new environment. Second, that they train field and operations staff so that their expertise and thinking evolves from TDM to IP/Ethernet. Third, that they update their service turn-up test and troubleshooting procedures to align them with IP/Ethernet service activation requirements.

Mobile Europe: You have mentioned that operators will need new capabilities to be able to manage the introduction of this new service environment. What are the tools they will require to ensure SLAs across their IP/Ethernet networks?

Vikas Arora:
Well, I think it pays to step back a bit and look at the full network life cycle, from network construction and service turn-up, through the “burn-in” period, and then to operational service monitoring and troubleshooting.
Some predictions are that backhaul bandwidth requirements per tower will grow from 10Mbps in 2009 to 100Mbps by 2014. To deal with this exponential growth in data demand, we are seeing the deployment of more fibre to towers. As operators drive this fibre to the towers, they then need the right tools to be able to characterise the fibre and validate the link performance, to ensure that IP/Ethernet services can be delivered properly.
The next phase is the service turn-up and burn-in. In this phase, with the transition from T1/E1 based backhaul to Ethernet, operators need to test and troubleshoot a lot more parameters than they test when turning up T1/E1 circuits. This is the “totally different environment” I referred to earlier.
Let’s look at an example. For T1/E1 connections, service turn-up involved testing the Bit Error Rates (BER) and round trip delay measurements. Yet for Ethernet-based backhaul services the tests required are much more detailed. The parameters operators need to consider in this environment are, first of all, measuring throughput, frame delay (latency), and burstability. These elements are all critical for interactive voice, video and many data applications. The time sensitive nature of non-buffered video, and other data applications such as VoIP, also means that the measurement of frame delay variation, or jitter, is vital.
Then there are frame loss measurements. Frame loss ratio is crucial for voice and video quality and the capability to measure it over time will provide an idea of the long-term integrity of the service. You cannot do this with a simple “ping” test of the network.
Finally, Ethernet brings with it the need for VLAN and Class of Service (CoS) validation. Each Ethernet virtual circuit (EVC) will support more than one class of service to handle different types of IP services, so the testing and validation needs to be done for each VLAN and class of service simultaneously.
This means that to assure quality of service within an Ethernet environment, operators need the tools to be able to, first, validate the network configuration for each defined service (traffic shaping, rate limiting, etc) and, secondly, to validate Quality of Service KPIs and prove SLA conformance. The output of this phase is a service “birth-certificate” with all the service parameters tested and the SLA signed off.
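As a minimal sketch, and not EXFO's methodology, the snippet below shows how per-flow test measurements might be rolled up into such a "birth certificate"; the field names and the one-second test window are assumptions added for illustration.

```python
# A minimal sketch, not EXFO's methodology: rolling per-flow test measurements
# up into a simple "birth certificate" with the KPIs listed above. Field names
# and the one-second test window are assumptions for illustration.

def birth_certificate(flow_id, sent_frames, received):
    """
    sent_frames: number of test frames transmitted on this VLAN/CoS flow.
    received: list of (frame_bytes, one_way_delay_ms) for frames that arrived,
              assumed here to have been captured over a one-second window.
    """
    delays = [d for _, d in received]
    lost = sent_frames - len(received)
    dv = [abs(b - a) for a, b in zip(delays, delays[1:])]   # frame delay variation
    return {
        "flow": flow_id,
        "frame_loss_ratio": lost / sent_frames,
        "frame_delay_ms_avg": sum(delays) / len(delays),
        "frame_delay_variation_ms_avg": sum(dv) / len(dv) if dv else 0.0,
        "throughput_mbit_s": sum(b for b, _ in received) * 8 / 1_000_000,
    }

# Example: 1,000 frames sent, 999 received -> frame loss ratio of 0.001.
print(birth_certificate("EVC-5 / VLAN 100 / CoS voice",
                        sent_frames=1000,
                        received=[(512, 4.9), (512, 5.1), (512, 5.0)] * 333))
```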
The next phase is to move beyond that initial birth certificate into long-term, post turn-up operational performance and SLA monitoring and troubleshooting. This phase deals with the need to provide 24×7 measurement of KPIs, alerts and alarms on service degradation, and aggregation and analysis of KPIs.

Mobile Europe:
I understand there is some movement in Ethernet testing methodologies, to be able to provide service activation and ongoing service assurance.

Vikas Arora:
That is a very good point. RFC 2544 is the current Ethernet service turn-up testing methodology yet RFC 2544 was designed for network device testing in the lab and not for service testing in the field. It does not include all the required measurements for today’s service SLA parameters, like jitter, out-of-sequence, QoS measurements for multiple concurrent service levels, etc. Moreover it provides a snapshot view, and does not adapt to long term measurements. As it is a sequential test methodology, it is also very time consuming.
To address these limitations and to reduce operators’ OPEX, we, working with a number of other industry players, have proposed a new Ethernet testing recommendation called ITU-T Y.156sam (service activation methodology).
For ongoing service assurance, OAM standards play a critical role. The OAM standards, 802.3ah and 802.1ag/Y.1731, are just being adopted by the equipment vendors. 802.3ah deals with link fault management, whereas 802.1ag/Y.1731 deals with connectivity fault management and performance monitoring. It provides us with continuity check messages to monitor the heartbeat of the network, loopback messages to determine where a fault is, and link trace messages to find the affected path. All these checks need to be done for each VLAN/CoS within an EVC, so it’s clear that operators need a scalable solution which leverages these standards and provides an ongoing service assurance system, with the right technology support, to monitor SLAs and rapidly reduce the time-to-fix in troubleshooting.
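To make the "heartbeat" idea concrete, here is a minimal sketch of the loss-of-continuity check a maintenance endpoint performs on incoming continuity check messages, using the roughly 3.5-interval defect condition from the standards; it is an illustration, not a full 802.1ag state machine.

```python
# A minimal sketch of the 802.1ag/Y.1731 "heartbeat" idea described above:
# a maintenance endpoint (MEP) expects continuity check messages (CCMs) from
# its peer and declares loss of continuity if none arrive within roughly 3.5
# CCM intervals. This is an illustration, not a full MEP state machine.

CCM_INTERVAL_S = 1.0      # e.g. one CCM per second on this maintenance association
LOC_MULTIPLIER = 3.5      # defect declared after ~3.5 missed intervals

def peer_mep_down(last_ccm_rx_time, now):
    """Return True if the peer MEP should be flagged as unreachable."""
    return (now - last_ccm_rx_time) > LOC_MULTIPLIER * CCM_INTERVAL_S

# Example: last CCM seen 4 s ago -> defect; 2 s ago -> still healthy.
print(peer_mep_down(last_ccm_rx_time=100.0, now=104.0))   # True
print(peer_mep_down(last_ccm_rx_time=100.0, now=102.0))   # False
```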

Mobile Europe:
Where is the industry in terms of ITU-T Y.156sam adoption?

Vikas Arora:
It’s still early days, but because Y.156sam addresses operators’ needs to speed up service turn-up and reduce OPEX, we are seeing widespread support. There have been some very good contributions from the industry to make sure the Y.156sam recommendations address all the operators’ key needs and that this standardised approach becomes part of operators’ service turn-up methods and procedures. This standard is currently in draft form and is expected to be ratified by early 2011.

Mobile Europe:
So you’ve built up a picture of why operators must consider a new approach to their rollout of Ethernet backhaul, allowing them to have optimised turn-up procedures and ongoing performance monitoring for multiple classes of service in a cost-efficient manner. How would you sum up EXFO’s role in meeting those goals?

Vikas Arora: There are two clear messages we want to deliver. First, at EXFO we look at the full network life-cycle and are delivering the right tools and solutions with the right level of automation and standards support. We are also playing a leading role in the standard bodies to help evolve the methods and procedures which address Ethernet mobile backhaul needs as operators move from one phase to another.
Secondly, to improve operational efficiency and productivity, we are working with operators to provide solutions which help capture and store field test, troubleshooting and service turn-up birth-certificate data right through the life-cycle so that this can be correlated with the service performance data collected during the operational performance monitoring of the Ethernet service.
Our end goal is to deliver a future-proof, standards-based, solution to the operators to help reduce OPEX, improve productivity and deliver operational efficiency and actionable intelligence.

 

Assurance in a service-orientated world


Operators are moving to LTE and Ethernet backhaul in an effort to reduce their cost per bit, and overall opex costs, but they face challenges to be able to monitor and assure services in the new environment. Jay Stewart, Director Ethernet Service Assurance, JDSU, tells Keith Dyer how they can meet those challenges.

Keith Dyer:
Can you outline what the issues are with service assurance in Ethernet, and how that applies to mobile backhaul?

Jay Stewart:
The pressure to succeed at backhaul has never been greater for service providers – the surge of mobile data traffic and associated cost of its transport has created a number of significant business challenges.

The important, inevitable step of updating a backhaul network from TDM to Ethernet is not about “if” but about “when.” A big problem is that Ethernet was not originally designed for carrier-grade use. Backhaul brings a new dimension to the testing of Ethernet. The introduction of Ethernet to the backhaul adds new complexity, in terms of the physical speeds required and in terms of the new provisioning and set up for the service attributes of that space.

Keith Dyer:
What are the challenges that operators face and how can they overcome them?

Jay Stewart:
Service providers are facing difficult operational challenges with the deployment of Ethernet services for mobile backhaul. The turn-up of 3G and 4G is seeing the offloading of data on to Ethernet, with providers having multiple Ethernet Virtual Circuits (EVCs) – each one with different Class of Service (CoS) attributes. Those providers are being tasked to monitor that traffic and apply different shaping and management end-to-end across different network elements. That has been the big change that backhaul has brought to Ethernet testing.
In the past, performance monitoring operated at the network level and was relatively simple, but now, with next-generation elements, performance monitoring is a much bigger issue as operators deal with multiple QoS mechanisms. The concept has moved from being network-based to being service-based, which does not equate to what people were used to in the TDM world.
LTE, for instance, very clearly calls out nine QoS class identifiers (QCIs), and how to manage them is also outlined in 3GPP specifications. So as well as an increase in the amount of data, the network has become more complex, with more QoS mechanisms and with everything being IP-based.
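Purely as an illustration of what "managing nine QCIs" can mean at the backhaul layer, the sketch below maps QCIs onto a smaller set of Ethernet CoS queues; the grouping and p-bit values are invented examples, not the 3GPP table or any operator's actual policy.

```python
# Illustration only: one possible way a backhaul provider might map the nine
# LTE QCIs mentioned above onto a smaller set of Ethernet CoS queues (p-bits).
# The grouping and p-bit values are invented examples, not the 3GPP table or
# any operator's actual policy.

QCI_TO_COS = {
    1: "voice",        # GBR bearers (QCI 1-4) get the stricter queues
    2: "real_time",
    3: "real_time",
    4: "streaming",
    5: "signaling",    # e.g. IMS signaling
    6: "best_effort",  # non-GBR bearers (QCI 6-9)
    7: "interactive",
    8: "best_effort",
    9: "best_effort",  # typical default bearer
}

COS_TO_PBIT = {"voice": 5, "real_time": 4, "streaming": 3,
               "signaling": 6, "interactive": 2, "best_effort": 0}

def pbit_for_qci(qci):
    return COS_TO_PBIT[QCI_TO_COS[qci]]

print([(qci, pbit_for_qci(qci)) for qci in range(1, 10)])
```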

Keith Dyer:
So that puts operators in a very different place from where they were before. Can they lean at all on their prior methods of assurance?

Jay Stewart:
I think at a high level they can lean on things they have been used to doing, but in going to Ethernet they will necessarily have to move to a services-based approach to performance management, managing each EVC as a separate link and managing the multiple QoS mechanisms within each EVC. In each case they must manage each one as if it were a separate SLA.
Operators have found that they have been able to get the architecture to work, but being able to carry out their test and performance monitoring has not been completely thought through. One customer told me that they had taken 30 years to build their network and are now being asked to convert it to Ethernet in three to four years with fewer resources. So they are facing more services over more circuits with fewer resources, meaning a focus on reducing opex and lowering the overall cost per bit.

Keith Dyer:
So operators need solutions that are specifically designed for this Ethernet service environment?

Jay Stewart:

That is right. Our NetComplete Ethernet portfolio, for instance, which covers the entire Ethernet services deployment lifecycle, is designed to provide the functions needed to ensure reliability during the mobile service provider’s Ethernet backhaul transition. These functions enable not only the transition to HSPA, HSPA+ and LTE but also the accurate identification and troubleshooting of service-affecting issues, combining Ethernet service turn-up testing and validation with service performance and SLA monitoring. By monitoring service performance and SLA conformance, a mobile service provider can proactively manage the service and avoid costly service outages.

Keith Dyer:
Do you have any customer examples of an operator making this transition from a network to service based approach within the Ethernet backhaul network?

Jay Stewart:
I’m afraid client confidentiality means I can’t name names, but JDSU does collaborate closely with its customers for innovative solutions to backhaul. We did recently publicly announce that we were selected by a major mobile communications service provider to support its mobile backhaul transition to Ethernet. So that is one major customer where our NetComplete Ethernet service assurance solution is enabling the quality deployment of bandwidth-intensive mobile services and applications, including mobile digital video and streaming video.
The NetComplete solution also gives the mobile service provider end-to-end automated service turn-up testing and performance monitoring that is compliant with the RFC 2544, IEEE 802.1ag and ITU-T Y.1731 standards, eliminating issues related to network element interoperability and provisioning.

Keith Dyer:
Which takes us into the thorny discussions around OAM standards development…

Jay Stewart:
There has been confusion around the standards. Right now there are standards being developed through the ITU, but they are still not being put into network elements, and I think that is still 12-18 months out. We’re in that grey area of working out how to manage things while the standards gain approval and acceptance within the vendor community.
One problem has been that a lack of standardised OAM tools and procedures, coupled with these more complex IP networks, has led to increasing numbers of technician dispatches for service turn-up, fault finding, troubleshooting and resolution of service degradations.

Keith Dyer:
Do you feel that there is a danger that mobile backhaul decisions are being made to meet the imminent needs of operators, and that long term planning is taking a back seat?

Jay Stewart:
I think people are living in the now. They know they have got to get bandwidth out to the towers, and they cannot afford to really think about what happens when they move to 4G. LTE brings a big architecture shift, with new interfaces added. But I don’t think it will take away any of the efficiencies we have identified in evolving the performance management environment. We see a roadmap that still needs a lot of work in terms of getting the architecture and services to work – but the main driver is still to reduce the total cost of ownership.
A lot of our work is consultation aimed at that: by bringing in an abstracted layer of management that doesn’t have to change as our customers move forward, we don’t have to change the way we take data from the network elements. It’s an automated and flexible approach that will withstand the transitions that operators need to make.

Supporting the Ethernet transition


Operators have changed the way they are approaching the transition to IP/Ethernet in their backhaul networks, and that has meant vendors have had to respond accordingly, Sten Nordell, CTO of Transmode, tells Keith Dyer.

Keith Dyer:
Hello Sten. A year ago you told us of Transmode’s policy of supporting TDM and Ethernet protocols natively over fibre to allow operators to have a flexible method of backhauling traffic through their network. How has that product policy worked out and what developments in the market have you seen since then?

Sten Nordell:
Hello Keith. It has been a very interesting and exciting year. We’ve just launched our new products in June and we already have mobile backhaul customers in both Europe and North America using them.

I think the biggest change in the market has been that two years ago it looked as though there was going to be a lot of implementation of all-packet networks, but there is now a sense that those purely packet-orientated developments have been somewhat held back.
In line with our proposition, we have indeed seen a push from operators to move away from solely TDM-based backhaul, and 3G and LTE developments are still driving that. What has changed is that two to three years ago people thought a lot of operators would develop a packet-only strategy. However, what caught everyone out is that they have simply not had the time to make that full transition. The bandwidth growth was simply too great in the short term to wait for complex, long-term migration projects.

Keith Dyer:
So although last year we talked about the fact that operators were running, or leasing, more fibre to base stations, what has actually happened is that the need for bandwidth has outstripped their ability to do that.

Sten Nordell:
To an extent, yes. We’re still seeing people talk about putting more fibre in the ground – but that takes time. So we’re also seeing providers use anything they can to offload that TDM traffic until they can roll out fibre, whether that is Microwave or whatever else is available. Estimates vary from about 30% to 50% for fibre deployments to base stations and if anything this is increasing due to the surge in bandwidth demand.
What we see is that customers are using the assets they have where they have them, rather than redesigning their whole networks up front. They are using their legacy capacity just to get TDM traffic across the network, and offloading much of their data using Ethernet. So the transition to the full IP environment is taking longer than expected.

Keith Dyer:
Do you sense that there has been a technical reason for that delay, or has it simply been about time to market of the technology that is available?

Sten Nordell:
The reasons have been technical and time to market, as you put it. TDM runs all the signaling and sync for voice traffic, and by putting in a hybrid Ethernet architecture it eases the complexity of the implementation of an all IP network. Operators do not have to deal with the complexity of emulating that sync and signaling over Ethernet. They can focus on a targeted architecture for their packet switched traffic, and that is much easier to co-ordinate. So the sync discussion goes away by keeping the legacy mechanism, and you can still run Ethernet at the same time. It’s faster to build your data-focused network this way, meaning operators can meet their immediate needs. Operators know that this is a temporary solution – but it is better to do this than to wait.

Keith Dyer:
So has that meant that your previous product strategy, of natively transporting TDM and IP traffic over fibre, has had to be modified?

Sten Nordell:
Not really. Our model still fits because that flexible model is what people are doing. Our understanding is that it is too complex to deal with complicated Pseudowire and other approaches. So the basic idea of our product solution was, and still is, right. It’s too complex to make that network change with the growth rates operators have. Our approach of carrying traffic natively over fibre gives operators the ability to put only their Ethernet traffic over fibre in the first instance. It also gives them the ability to remove their legacy access when they are ready, but also keep the legacy transport protocol. That allows a migration when they are ready, versus a straight overhaul to an all-Ethernet approach.

Keith Dyer:
So has this approach led to any changes in the solutions you offer?

Sten Nordell:
Yes, we have taken the decision to also offer enhanced Ethernet support for those customers that are initially deploying Ethernet only. Once you have made up your mind to keep the legacy network, the way you design the packet network is subtly different. These enhancements can be found in the second-generation version of our Layer 2 Ethernet Muxponder (EMXP), the EMXP II, which includes increased processing power and support for a wider range of Layer 2 features. The enhanced EMXP II provides Layer 2 Ethernet service provisioning and aggregation for transport applications with extremely low, sub-two-microsecond latency and virtually zero jitter. This makes the unit ideally suited to Ethernet applications where latency and jitter are important, such as services for financial institutions, video distribution and, of course, mobile backhaul.
To add to our Ethernet focus, we’ve also added a range of Ethernet Demarcation Units (EDU) to our TM-Series. These EDUs are also designed to help operators gain the lowest possible latency and jitter, making the units ideal for rich data applications. The range includes units that provide either demarcation of a single Gigabit Ethernet client signal or aggregation of up to 4 client signals onto a Gigabit Ethernet line within the compact CPE device.
When I talk about enhanced Ethernet functionality, the EDU is a good example. It includes service mapping, which allows providers to quickly create Ethernet Virtual Circuits (EVCs), and services can be mapped with advanced bandwidth policies that establish committed and excess throughput and burst rates as well as defined Ethernet and IP service priorities. Also, as hardware-based units, they can maintain up to 100 Y.1731 and 802.1ag OAM sessions per unit, enabling highly accurate and precise OAM performance monitoring for large-scale wireless backhaul and business service applications.
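For readers unfamiliar with such bandwidth profiles, the sketch below shows a simplified, generic MEF-style two-rate colour marker of the kind described (committed and excess rates plus burst sizes); it is not Transmode's implementation, and it omits coupling flags and per-CoS profile instances.

```python
# A simplified, generic MEF-style two-rate "colour marking" sketch of the kind
# of bandwidth profile described above (committed and excess rates plus burst
# sizes). This is an illustration, not Transmode's implementation, and it
# ignores coupling flags and per-CoS profile instances for brevity.

class BandwidthProfile:
    def __init__(self, cir_bps, cbs_bytes, eir_bps, ebs_bytes):
        self.cir, self.cbs = cir_bps / 8.0, cbs_bytes   # committed rate (bytes/s) and burst
        self.eir, self.ebs = eir_bps / 8.0, ebs_bytes   # excess rate (bytes/s) and burst
        self.c_tokens, self.e_tokens = cbs_bytes, ebs_bytes
        self.last = 0.0

    def colour(self, frame_bytes, now):
        # Refill both token buckets since the previous frame, capped at the burst sizes.
        dt = now - self.last
        self.last = now
        self.c_tokens = min(self.cbs, self.c_tokens + self.cir * dt)
        self.e_tokens = min(self.ebs, self.e_tokens + self.eir * dt)
        if frame_bytes <= self.c_tokens:
            self.c_tokens -= frame_bytes
            return "green"        # within the committed rate, SLA applies
        if frame_bytes <= self.e_tokens:
            self.e_tokens -= frame_bytes
            return "yellow"       # excess traffic, forwarded without guarantees
        return "red"              # out of profile, dropped at the demarcation point

bp = BandwidthProfile(cir_bps=50_000_000, cbs_bytes=64_000,
                      eir_bps=20_000_000, ebs_bytes=32_000)
print([bp.colour(1500, t * 0.0001) for t in range(5)])   # a short burst of 1,500-byte frames
```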

Keith Dyer:
You mention performance monitoring there. It seems to be a high-value topic around Ethernet services at the moment. Is that an additional cost that operators perhaps didn’t envision in their backhaul plans?

Sten Nordell:
It’s crucial that operators can perform SLA verification and assurance, which means operators must have OAM capabilities deployed at the edge, at the base station itself, to allow fast forwarding and fast packet reading, for instance. The EDU is where all of this happens. But it’s not a major cost to them, and pulling the fibre to the base station is still likely to be 98% of the cost. Perhaps operators hadn’t envisioned having one device for terminating Ethernet and one for terminating TDM, but on the other hand it’s a lower cost than an all-IP approach and the complexity that brings, as we discussed earlier.

Keith Dyer:

So when do you see operators moving to this enhanced Ethernet solution, and what do you think Transmode’s advantage is in this market?

Sten Nordell:
We’ve just made both the Native Ethernet and TDM Multi-Service Mobile Backhaul product that we discussed last year and the enhanced Layer 2 Ethernet product commercially available this summer. So it’s a bit early for us to talk about shipping in volume, but we already have mobile backhaul customers in both Europe and North America, as I mentioned earlier.
In terms of the advantages that Transmode brings to the market, we can help both those that have integrated their networks via the Multi-Service approach and those that haven’t. Those mobile operators that have not integrated their networks are out shopping for solutions, and most of them are not after Pseudowire but after pure Ethernet.
They are also not after Enterprise-optimised products that have been tweaked to the carrier market. Our focus is on carrier class, enhanced Ethernet solutions, and I think that’s the difference we provide.

LTE economics dependent on RF and antenna design in the handset, says study


RTT, a specialist independent RF consultancy, has today released the findings of its latest research into the impact of user equipment RF and baseband performance on the economics of mobile broadband. The research is said to show that a relatively small investment in user equipment optimisation can have an impressive knock-on effect on network efficiency and value, which RTT believes is a prerequisite for industry profitability.

The study also forecasts that data traffic will increase thirty-fold in volume over the next five years, from three to 90 exabytes (an exabyte is a million terabytes). However, over the same timescale, and based on present tariff trends, income will have increased by at most a factor of three, it says.
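Some quick back-of-envelope arithmetic on the figures quoted, using nothing beyond what the study states, shows how stark that gap is:

```python
# Back-of-envelope arithmetic on the figures quoted above, using nothing
# beyond what the study states: roughly 30x traffic growth against ~3x
# revenue growth over five years.

traffic_multiple, revenue_multiple, years = 90 / 3, 3, 5

traffic_cagr = traffic_multiple ** (1 / years) - 1        # ~97% per year
revenue_cagr = revenue_multiple ** (1 / years) - 1        # ~25% per year
revenue_per_unit_traffic = revenue_multiple / traffic_multiple   # falls to ~10% of today's

print(f"traffic CAGR ~{traffic_cagr:.0%}, revenue CAGR ~{revenue_cagr:.0%}")
print(f"revenue per unit of traffic falls to ~{revenue_per_unit_traffic:.0%} of today's level")
```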

“LTE operators need to listen carefully to the findings of this study,” said Geoff Varrall, Executive Director at RTT and lead author of the study. “The move to LTE, if coupled with investment in band flexibility and improved year on year user equipment efficiency, can provide a positive and improved return on present and future network investment, or put another way, allows operators and carriers to make money out of data volume growth.”

While some operators are looking simply to increase network density, the problem with this is that capital and operational costs increase through a composite of site acquisition, site rental, site energy cost, hardware and software investment and backhaul costs, says RTT. Using subscriber ADSL lines as backhaul via femtocells partly solves the backhaul cost issue and can increase local area network density in a cost-effective way at the subscriber’s expense. However, femtocells address local area access economics, not wide area access economics. Similarly, MIMO achieves high peak data rates in small cells, but if poorly implemented in user equipment it compromises SISO performance in larger-diameter micro and macro cells, where average data throughput is more important and has a more direct impact on the user experience, it says.

Instead, the combination of user equipment efficiency and extended multi-band support will have a more positive impact on mobile broadband radio access economics than is commonly assumed or presently modeled. Implementing additional bands should, however, only be considered if performance in existing bands is maintained.

The study also concluded that it is important to address the growing disparity between bench-top measurements and real-life user equipment performance, which is resulting in best-to-worst differences of 7dB in user equipment performance. A network cannot be dimensioned to meet defined user experience expectations with this degree of performance uncertainty. RF and baseband innovation help to reduce this difference. Changes to conformance and performance measurement methodologies are also required to reduce the gap between what is measured on the bench and real-world performance.
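To put the 7dB figure in context, the rough calculation below converts it into a link-budget power ratio and, under an assumed path-loss exponent (our modelling assumption, not a figure from the study), into an approximate cell range and coverage-area factor.

```python
# A rough, illustrative conversion of the 7dB spread into a link-budget power
# ratio and, under an assumed path-loss exponent (our modelling assumption,
# not a figure from the study), into an approximate cell range and coverage
# area factor.

spread_db = 7.0
power_ratio = 10 ** (spread_db / 10)                 # ~5x difference in link budget

path_loss_exponent = 3.5                             # assumed urban macro value
range_factor = 10 ** (spread_db / (10 * path_loss_exponent))
area_factor = range_factor ** 2                      # coverage area scales roughly with range^2

print(f"power ratio ~{power_ratio:.1f}x, range factor ~{range_factor:.2f}x, "
      f"coverage area factor ~{area_factor:.1f}x")
```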

There is clearly a demand side opportunity if delivery cost, investment, innovation and test issues can be addressed, it says.

The collaborative industry, technical and commercial study quantifies the relationship between user equipment performance, quality of service, quality of experience and operator and industry EBITDA (earnings before interest, taxes, depreciation and amortisation) returns. It is sponsored by Peregrine Semiconductor and Ethertronics, and it uses as a starting point a study RTT undertook for Vodafone UK in 2009.

Evolving wireless technology drives growth in world communication test services market, says report


The increasing complexity of wireless networks, bolstered by the constant introduction of technologies demanding obscure skills, has greatly enhanced the need for communication test services, according to new analysis from Frost & Sullivan.

Because standards in the communications market are constantly evolving, triggering the improvement of test equipment and corresponding services, suppliers should improve their service portfolio to address volatile market conditions and the subsequent needs of end users, it says.

The report – World Communication Test Services Market – finds that the market earned revenues of $472.2 million in 2009 and estimates this to reach $677.9 million in 2015. The end users covered in this research service include network equipment manufacturers (NEMs), service providers (SPs) and enterprises.

“SPs opine that the demand for test services in wireless technologies is greater than that of wireline technologies, due to their complexity and susceptibility to problems,” says Frost & Sullivan Research Analyst Prathima Bommakanti. “Likewise, the need for test services is expected to increase as wireless networks continue to become more complicated.”

Due to the introduction of long-term evolution (LTE) and worldwide interoperability for microwave access (WiMAX) technologies in the network, the overall complexity of networks is escalating, it says. This is particularly evident in the operation stage and during the transition of technology in the network. Moreover, the deployment of LTE and WiMAX is likely to increase the service requirements of operators as they are expected to incorporate mobile broadband, video-on-demand and other high-speed services that will boost the revenues for network operators.

One of the most consistent trends has been to offer voice over Internet protocol (VoIP), data and Internet protocol television (IPTV) as separate services rather than over an integrated triple-play network. However, with SPs transitioning from separate voice, video or data services to integrated triple-play services, there is a need for equipment and services to test and monitor networks that supply voice, video and data simultaneously, says F&S.

Despite promising potential, the economic downturn significantly hindered the growth of the total world communication test services market. Capital budgets remain tight, as SPs, NEMs and small and large enterprises attempt to generate more revenues out of their existing networks, creating a delay in their purchasing decisions. Likewise, vendors should assure the return on investment (ROI) for services offered, as end users have reduced capital expenditure (CAPEX)/operating expenses (OPEX) budgets.

“Providing superior services that match customer requirements at competitive prices is one of the key challenges faced by suppliers in the total world communication test services market,” explains Bommakanti.

Making the case for fairness


We hear about tiered data pricing all the time. It is, we are told, the only escape route for operators facing the data revenue gap. Operators have to start charging realistically, we are told. But we are also told that operators could limit the negative effect this might have on users if they can convey these new tiered pricing structures in a way that is relevant and personal to the user.

One theory that has been proposed is around fairness. What if caps, limits and tiered pricing based on quality of service were explained in terms of fairness? Would users react better to that than to just a blunt cap?

So we thought it would be interesting to test these thoughts with some real consumers. To do that, we enlisted the help, for which we are very grateful, of Ryan Garner, Research Director at GfK NOP Technology. Garner asked similar questions of a GfK sample of 1,000 representative mobile users.

What he found was perhaps slightly contradictory, and perhaps not all that good news for those advocating the tiered, class of service-based data pricing approach.

When GfK said, “Mobile data usage should be charged on a sliding scale so you are charged depending on how much you use”, only 37% agreed or strongly agreed. Similarly, only 27% thought it a good idea that network operators would offer guaranteed service levels for mobile data usage at peak times for customers who are willing to pay an additional monthly fee. And only 30% agreed with the counter to that – that operators would offer discounted rates on mobile data tariffs to those who are happy with a limited service at peak times of the day.

On the other hand, there was broad agreement that network operators should work out a fairer way of charging for mobile data usage so that the majority of users do not end up subsidising the minority of heavy mobile data users. 66% of all users, and 80% of iPhone users incidentally, agreed with this.

So what is fair? Only 34% of our sample thought caps were fair. But as we’ve seen, they’re not keen on QoS based tiered pricing either.

In other words, people think fairness is a good idea – but they’re not sure how to get there yet. Pretty much a reflection of the industry itself, then.

Keith Dyer
Editor
Mobile Europe

 


The fairness principle


Mobile Europe recently asked Ryan Garner, Research Manager of GfK NOP Technology, to undertake some research on our behalf. What, we wondered, would consumers make of data caps and limits if they were explained to them in terms of fairness?

Garner thought that was an interesting proposal, and drew up some questions to put to 1,000 UK consumers. His findings are below, written up by him. Our thanks go to Garner for undertaking this work and sharing the results with us.

The importance of fairness and clarity in mobile data charges

The decision by UK network operators to cap mobile data allowances hasn’t been the most popular move with UK mobile phone users. Recent research undertaken by GfK on behalf of Mobile Europe shows that only a third (34%) of UK mobile phone users agree that capping mobile data usage is the fairest approach.

Despite these low levels of agreement, very few people actually disagree with capping data usage, as nearly half of mobile users (49%) have no opinion either way. This indecisiveness from UK consumers suggests that whilst they don’t want constraints on their usage, they realise such constraints have become necessary.

What consumers are clearer on is how network operators should communicate mobile data charges and the need for a fairer way of charging for this usage. Two-thirds (66%) of UK mobile phone owners believe network operators should design a fairer way of charging for mobile data usage. Consumers want a system that ensures the majority of average users do not end up paying for the minority of heavy users. Interestingly, this level of agreement is more pronounced among smartphone owners (71%) and particularly among iPhone owners (80%) and Blackberry owners (80%). These types of smartphone users are now reliant on their phones, which have become a fundamental part of their day-to-day lives. The apps and services available on smartphones help people get things done and have more fun, which means consumers simply don’t want to regress and give these new-found benefits up. Whilst people accept that usage of an ‘always on’ smartphone adds extra strain to the performance of an operator’s network, they just want a fairer way to pay for their usage without compromising on the quality of service.

Heavily linked to this is the finding that almost three-quarters (71%) of all mobile phone users want network operators to be clearer about how much they’ll be charged if they go over their data allowance. Again, this view is more pronounced among smartphone users (79%), particularly iPhone owners (89%) and Blackberry owners (82%). This is not surprising, as historically iPhone owners have been used to unlimited data tariffs and have not had to monitor their mobile data usage. Now that caps have been set, they simply don’t know how much their smartphone usage habits will cost. Essentially, for the majority of consumers their love affair with unlimited tariffs was more about controlling their monthly costs. As unlimited data plans have been scrapped, consumers who rely on mobile services and apps that require mobile data are unsure what their monthly costs will amount to.

So what is the solution? How do network operators charge for mobile data in a way that is fair for the average user, helps them control their monthly outgoings and at the same time deters excessive usage?

There’s no silver bullet that will solve this problem, as different consumers have different needs and have been used to certain ways of paying for their mobile data usage. We tested a few possible solutions; the most popular, with 37% appeal (48% among smartphone owners), was charging for mobile data on a sliding scale in line with usage. This would work by signing up to a plan where the costs of mobile data usage fall within bands, e.g. up to 250MB = £2.50 per month, up to 500MB = £5, etc. The consumer would know upon signing up how much they’ll be charged within each band. The key to this working (and possibly the reason why appeal isn’t higher) is that operators need to notify their customers (preferably via SMS) of how much mobile data they’ve used and when they enter a higher rate band.

This way consumers can keep an eye on how much they’re spending and manage their monthly bills more effectively.
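As a minimal sketch of how such banded charging and band-crossing alerts might work, using the illustrative bands from the article, the snippet below works through one usage update; the higher bands and the alert wording are assumptions added for illustration.

```python
# A minimal sketch of the banded charging idea described above, using the
# illustrative bands from the article (up to 250MB = 2.50 GBP, up to 500MB =
# 5 GBP); the higher bands and the alert wording are assumptions added for
# illustration.

BANDS = [                 # (upper limit in MB, monthly charge in GBP)
    (250, 2.50),
    (500, 5.00),
    (1024, 10.00),        # hypothetical further bands
    (float("inf"), 15.00),
]

def band_for(usage_mb):
    for limit, charge in BANDS:
        if usage_mb <= limit:
            return limit, charge

def on_usage_update(previous_mb, current_mb, notify):
    """Return the current monthly charge, notifying (e.g. by SMS) on a band change."""
    if band_for(current_mb) != band_for(previous_mb):
        _, charge = band_for(current_mb)
        notify(f"You have entered a higher data band: {current_mb:.0f}MB used, "
               f"this month's charge is now GBP {charge:.2f}")
    return band_for(current_mb)[1]

# Example: usage crosses the 250MB boundary, triggering an alert and a 5 GBP charge.
print(on_usage_update(240, 260, notify=print))
```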

The other two options we tested were focused on trading off cost against performance, and these were less appealing. The first option offered discounts to people who would accept a limited service at peak times; this received an appeal rating of only 27% among UK mobile phone owners (39% among smartphone owners). The second proposition, which received an appeal rating of 30% (40% among smartphone owners), offered ‘guaranteed’ service levels at peak times for a premium price.

These low levels of appeal suggest that ‘fairness’ for users of mobile data is more about operators offering a good ‘value’ service and less about financial incentives. People who rely on mobile data are seemingly less price sensitive and would rather keep the level of service consistent with what they’re used to. This is most evident among iPhone users, who have been used to unlimited data plans since the iPhone was launched back in 2007.

iPhone users find all alternative methods of charging for mobile data less appealing than the average smartphone user does, and are less compromising when it comes to anything but unlimited data.

* The percentages quoted above are levels of top-two-box agreement or appeal on a five-point scale.
