
    Pulling it all together


    Jan Lindeberg explains why testing procedures are vital for a successful multimedia network and, just as importantly, for a good user experience of the services that run on them.

    There are many indications that data services are moving into the mainstream, with a value proposition that consumers are prepared to pay for. But how do you secure end-to-end service quality, and so avoid a repetition of the earlier failures with WAP and GPRS, when the underlying technology is vastly more complex than ever before?

    Radio network planners know it, as do core network operations staff and content service and partner managers: the complexity of current and future 3G networks and value chains is a tremendous challenge for most network operators. It may not be rocket science, but making the wireless triple play work seamlessly for millions of users is one of the best earthly candidates to rival the complexity of space ventures.

    The fact is that today’s 3G service offerings rely on a highly complex value chain. The multimedia service chain consists of the user equipment, the radio access network, the packet-switched core network, service application and aggregation platforms and, last but not least, a wide range of content partners. Together with continuous terminal advances, upcoming technology enablers (such as HSDPA, SIP-based IMS service capability platforms and DVB-H, and even emerging technologies such as MBMS) will fuel the complexity even further.

    But why does the complexity seem to grow almost exponentially with each wireless network generation, and what can be done to manage it? It is of course natural that the sheer number of technologies applied will in itself increase the complexity, as these technologies must be made to work together through various mapping, correlation and gateway functions. It is also true that the wider the service portfolio, the more dependent operators become on content and service partners. But let us put these general considerations aside for a moment to look at some of the fundamental challenges.

    Significant changes

    One of the most significant changes in moving from a predominantly 2G/2.5G voice and SMS service offering to a full-fledged 3G multimedia offering lies in the user equipment; not in form factor or radio access technology, but in its elementary architecture and capabilities. User equipment is well on the way to becoming a pocket computer that can host a wide range of applications. Today we see a rich set of more or less “service provider approved” applications, such as web browsers, media players, e-mail, instant messaging and push-to-talk. This application set is expected to grow further with increased bandwidth over the radio interface, improvements in the general capabilities of terminals and the introduction of IMS in mobile networks.

    Even if this is in line with handset vendors’ strategies, it could be an operational nightmare for service providers wanting to ensure end-to-end service quality. As the end-to-end performance of any of these services depends directly on a) the user terminal application, b) the corresponding network service, and c) the application-specific data exchanged between the two, simply validating how different handsets comply with standards is no longer sufficient. Different hardware platforms, with different operating systems and client applications, are unlikely to perform equally well.
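    To make the point concrete, the sketch below (in Python, with entirely hypothetical handset names, fields and figures) shows how per-handset records of the same service can be rolled up to compare success rates and latencies across terminal types.

```python
# Minimal sketch: comparing end-to-end performance of one service across
# handset models. Records and field layout are hypothetical; in practice
# they would come from a test platform or network probe.
from collections import defaultdict
from statistics import median

# (handset model, success flag, download time in seconds)
records = [
    ("ModelA", True, 4.2), ("ModelA", False, 30.0),
    ("ModelB", True, 3.1), ("ModelB", True, 2.8),
]

by_model = defaultdict(list)
for model, ok, seconds in records:
    by_model[model].append((ok, seconds))

for model, results in by_model.items():
    good = [t for ok, t in results if ok]
    rate = len(good) / len(results)
    print(f"{model}: success rate {rate:.0%}, median time {median(good):.1f}s")
```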

    Active testing and user simulation have proved to be important tools for end-to-end customer experience validation and handset-service compatibility testing in conjunction with new service launches. However, given the basic limitations of automated testing, there will inevitably be problems that go undetected until the service is used in a live environment. A failed game or music download not only gives the customer a poor service experience but also hits revenue, as incomplete services cannot be charged for.
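    In essence, an active test routine is a scripted user. A minimal sketch, assuming a hypothetical fetch_content() stand-in for the handset- or software-based test agent:

```python
# Hedged sketch of an active test routine: a simulated user repeatedly
# attempts a content download, and the script records pass/fail and the
# elapsed time for each attempt.
import random
import time

def fetch_content(url: str) -> bool:
    """Hypothetical test-agent call; returns True on a complete download."""
    time.sleep(random.uniform(0.1, 0.3))   # stand-in for network time
    return random.random() > 0.05          # assume ~5% of downloads fail

def run_test_cycle(url: str, attempts: int = 10) -> None:
    failures = 0
    for i in range(attempts):
        start = time.monotonic()
        ok = fetch_content(url)
        elapsed = time.monotonic() - start
        print(f"attempt {i + 1}: {'PASS' if ok else 'FAIL'} ({elapsed:.2f}s)")
        failures += not ok
    print(f"{failures}/{attempts} attempts failed")

run_test_cycle("http://content.example/game.jar")
```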

    It is therefore essential for service providers to have insight into service performance vis-à-vis different types of handset. By deploying probe-based service assurance solutions at the Gn, Gp and Gi interfaces in the packet-switched core network, service providers can build a very detailed picture of service usage and service performance. This is done by decoding specific TCP/IP and WAP protocol layers, such as TCP, WTP (Wireless Transaction Protocol) and WSP (Wireless Session Protocol). With access to the details in the signalling protocols, it is possible to analyse the performance of different services and to perform root-cause analysis of failures. For example, it is possible to determine whether the handset supports the service request, has the appropriate software release, or whether the SIM/USIM is properly enabled. It is also possible to verify whether the handset fully complies with the WAP and TCP/IP standards and whether the upper application-layer data is correctly interpreted.
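    The sketch below illustrates the idea of root-cause classification over decoded transactions. The record fields and classification rules are assumptions about what such a probe might expose; the WSP status values follow the WSP status mapping, where 0x20 corresponds to HTTP 200 OK and 0x40 to HTTP 400 Bad Request.

```python
# Illustrative sketch (not a real probe API): classifying failure causes
# from decoded WSP transaction records captured at the Gn/Gi interfaces.
transactions = [
    {"handset": "ModelA", "wsp_status": 0x20, "sim_enabled": True},
    {"handset": "ModelB", "wsp_status": 0x40, "sim_enabled": True},
    {"handset": "ModelC", "wsp_status": 0x20, "sim_enabled": False},
]

def root_cause(tx: dict) -> str:
    if not tx["sim_enabled"]:
        return "SIM/USIM not enabled for the service"
    if tx["wsp_status"] != 0x20:           # 0x20 = WSP 'OK' (HTTP 200)
        return f"WSP error 0x{tx['wsp_status']:02X} (possible client fault)"
    return "success"

for tx in transactions:
    print(tx["handset"], "->", root_cause(tx))
```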

    Important information

    Apart from providing vital operational performance metrics, the data collected from the Gn, Gp and Gi interfaces provides important business information about end-users’ service consumption patterns: which are the top ten requested services? Which handset types generate the most MMS traffic? What impact do marketing campaigns have on user behaviour? With insight into which handsets cause the most service problems at one extreme, and which generate the most service usage at the other, service providers can improve the overall end-user experience while reducing operational expenses. Service providers thus have a major opportunity to improve overall revenue by steering users towards particular handsets, through a mix of vendor selection, subsidies and marketing promotions.
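    Deriving such business metrics is essentially an aggregation exercise over the captured usage records. A toy example, with made-up data:

```python
# Sketch: turning per-request usage records from the Gn/Gi probes into
# simple business metrics. Handset names and services are hypothetical.
from collections import Counter

usage = [  # (handset model, requested service)
    ("ModelA", "mms"), ("ModelA", "wap_portal"), ("ModelB", "mms"),
    ("ModelB", "mms"), ("ModelC", "music_download"), ("ModelA", "mms"),
]

top_services = Counter(service for _, service in usage)
mms_by_handset = Counter(model for model, service in usage if service == "mms")

print("Top services:", top_services.most_common(3))
print("MMS traffic by handset:", mms_by_handset.most_common())
```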

    As the demarcation points of the operator’s own network, the Gp and Gi interfaces are also important because they are the IP service interfaces towards roaming and content partners. Basic but powerful performance indicators such as Service Access Delay Time, Round Trip Time and Service Access Failure Rate are important components in monitoring the performance of a particular content provider against agreed service levels.
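    As an illustration, the fragment below checks hypothetical per-partner measurements against equally hypothetical SLA thresholds:

```python
# Sketch: checking a content partner's Gi-interface KPIs against agreed
# service levels. All samples and thresholds are illustrative only.
samples = {   # per-partner measurements as a probe might report them
    "partner_a": {"access_delay_ms": [180, 220, 1900],
                  "failures": 2, "requests": 400},
}
SLA = {"max_avg_delay_ms": 500, "max_failure_rate": 0.01}

for partner, m in samples.items():
    avg_delay = sum(m["access_delay_ms"]) / len(m["access_delay_ms"])
    failure_rate = m["failures"] / m["requests"]
    breach = (avg_delay > SLA["max_avg_delay_ms"]
              or failure_rate > SLA["max_failure_rate"])
    print(f"{partner}: avg delay {avg_delay:.0f} ms, failure rate "
          f"{failure_rate:.2%} -> {'SLA BREACH' if breach else 'OK'}")
```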

    The single most complex link in the UMTS value chain is the UTRAN radio access network. The dynamic allocation of radio resources (i.e. bandwidth) to different users, according to their particular needs, not only places high performance demands on the user equipment, Radio Network Controllers and base stations; it also makes radio network and capacity planning vastly more complicated. As the capacity and size of UTRAN radio cells are dynamic, and depend on the number of users within a cell and the type of services they are running, radio network planners need to make assumptions about user behaviour, QoS targets (e.g. delay time, loss rate, etc.) and rules for resource allocation.
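    The flavour of those planning assumptions can be shown with a deliberately simplified load estimate; every figure below (bit rates, user mix, activity factors, usable cell throughput) is an illustrative assumption, not a planning value:

```python
# Back-of-envelope sketch of assumption-driven cell dimensioning: the
# offered busy-hour load per service is compared with an assumed usable
# downlink throughput per cell.
service_rate_kbps = {"voice": 12.2, "streaming": 128, "web": 64}
busy_hour_users  = {"voice": 40,   "streaming": 5,   "web": 15}
activity_factor  = {"voice": 0.5,  "streaming": 0.9, "web": 0.2}

offered_kbps = sum(
    busy_hour_users[s] * service_rate_kbps[s] * activity_factor[s]
    for s in service_rate_kbps
)
usable_cell_kbps = 1200   # assumed usable downlink throughput per cell

print(f"Offered load: {offered_kbps:.0f} kbps "
      f"({offered_kbps / usable_cell_kbps:.0%} of assumed cell capacity)")
```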

    As the UTRAN is the bottleneck in the UMTS value chain, the service quality perceived by the end user is directly influenced by its performance and by the resources allocated to a specific user. Hence, if the planning was based on the wrong assumptions, or if (or rather when) user behaviour and service patterns change, the radio planning needs to be reassessed. To enable this unremitting evaluation and optimisation of the radio access network, planners must have access to reliable UTRAN mobility and accessibility performance information from the Abis and Iub interfaces.

    Further, as the UTRAN is highly dynamic, with continuous resource reallocation as users move in and out of base station coverage, the network must reconfigure itself within short time intervals to provide roaming and handover between cells. By deploying passive monitoring on the Iu CS, Iu PS, A and Gb interfaces, it is possible to get a precise picture of how many users cannot access the network, why users are denied access to a service, and where inbound roamers are gained or lost in the network. Detecting any incorrect configuration or malfunctioning network element quickly is vital.
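    A sketch of the kind of aggregation a passive monitoring system performs on decoded control-plane events (all cause strings and records here are hypothetical stand-ins for decoded signalling):

```python
# Sketch: aggregating access-failure causes seen on the Iu/Gb control
# plane, by cause and by cell, from hypothetical decoded events.
from collections import Counter

events = [
    {"cell": "C1", "outcome": "reject", "cause": "congestion"},
    {"cell": "C1", "outcome": "reject", "cause": "auth_failure"},
    {"cell": "C2", "outcome": "accept", "cause": None},
    {"cell": "C1", "outcome": "reject", "cause": "congestion"},
]

rejects = [e for e in events if e["outcome"] == "reject"]
print(f"{len(rejects)}/{len(events)} access attempts rejected")
print("by cause:", Counter(e["cause"] for e in rejects).most_common())
print("by cell:",  Counter(e["cell"] for e in rejects).most_common())
```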

    Providing high service quality is often not a problem in itself; doing it cost-efficiently is the real challenge. Service providers that manage to strike the right balance will meet pricing expectations while still delivering decent margins. But which tools are available to wireless network operators wanting to meet these operational challenges? There are a number of established solutions on the market. The best known are the Fault and Performance Management systems that take data directly from the network elements forming the network infrastructure. In most cases, the feed consists of counter-based statistical information and alarm indications (SNMP traps).
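    The core logic of counter-based performance management is simple thresholding on counter deltas between polls, as the sketch below shows (the counter name and threshold are invented; a real system would receive the values via SNMP or an element manager feed):

```python
# Minimal sketch of counter-based performance management: raise an alarm
# when a counter's delta between two polls crosses a threshold.
previous = {"ggsn1.pdp_create_fail": 10}   # value at the last poll
current  = {"ggsn1.pdp_create_fail": 95}   # value at this poll
THRESHOLD_PER_INTERVAL = 50

for name, value in current.items():
    delta = value - previous.get(name, 0)
    if delta > THRESHOLD_PER_INTERVAL:
        print(f"ALARM: {name} rose by {delta} in one polling interval")
```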

    Besides these Fault and Performance Management tools, there are various types of Active Test solutions, which can be either probe-based, i.e. implemented using actual handsets, or software-based, using handset profiles to simulate different user devices. Active Test solutions rely on automated simulation of users and user traffic according to predefined test routines. An emerging category of tools is QoS agents embedded within handsets. These Embedded QoS Agents record the performance metrics of live service usage and send the results to a central system for correlation and post-processing. Finally, there is the group of Passive Probe Monitoring systems, which, thanks to their technology-neutral architecture, can be used to capture signalling and performance data across the different technology domains making up a network operator’s complete infrastructure.
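    For the embedded-agent category, the essential artefact is the per-session report the handset uploads. A minimal sketch, with assumed field names:

```python
# Illustrative sketch of the record an embedded QoS agent might build
# after a live session and upload to the central system; all field names
# are assumptions, not a vendor format.
import json
import time

def build_report(service: str, success: bool,
                 setup_ms: int, throughput_kbps: int) -> str:
    return json.dumps({
        "timestamp": int(time.time()),
        "service": service,
        "success": success,
        "setup_ms": setup_ms,
        "throughput_kbps": throughput_kbps,
    })

print(build_report("streaming", True, 850, 110))  # would be sent centrally
```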

    Combined solution

    As no single solution meets all operational needs, network operators often combine two or more of them, seeking synergies between the merits of the different approaches in a combined Service Quality Management (SQM) solution. This article has discussed some of the specific challenges that service providers face today, and will face to an even larger extent as they continue to explore new and sophisticated IP services.

    These challenges are of course just a small subset of all the operational challenges service providers are confronted with in transforming a voice-only business into a multi-service business. When planning their OSS systems, service providers need to consider both the complete value chain of their operation and the life-cycle of the individual service offerings: from early deployment testing and in-service performance monitoring through to troubleshooting and optimisation.

    Deploying an SQM solution with an integrated network, service, customer and partner management view provides a holistic view of the entire operation: a way of ensuring high service quality while making the most of the available physical and spectrum resources.