
    Samsung the hot tip


    More than twice as many camera phones will be sold in 2003 as in 2002, according to a report released today by ARC Group.

    The wireless analyst claimed that by the end of 2003 more than 55 million consumers worldwide will own camera-phone handsets, more than double the 25 million mobile units sold in 2002. 
    “This year we have seen a massive growth in camera-enabled phones, with 15% of handsets worldwide featuring built-in cameras or designated camera accessories,” David McQueen, ARC Group’s Senior Consultant and author of the Future Mobile Handsets 2003-2008 report, said.
    For mature markets, the growth has come from existing mobile phone users as they are encouraged by handset manufacturers and network operators to replace their handsets with more feature-rich models, a turnaround from a few years ago when the emphasis was on the first-time buyer, McQueen found.
    “Tempted by innovative design features such as rotational cameras and swivel screens, along with the advent of multimedia messaging, colour displays and polyphonic ring tones, we’ll see many consumers upgrading their mobile phones this Christmas,” he said.
    The study also predicts that by 2005 130 million handsets with camera capability will be shipped globally, and with the additional boost of 3G roll out, this figure is expected to increase to 210 million by 2008. 
    “Globally, the Asia Pacific region will continue to lead the way, but Europe is expected to improve its market share through the continued take-up of mobile messaging services and with operators promoting attractive services such as Vodafone’s Live! service,” McQueen explained.
    ARC Group predicts that the entire mobile handset market will grow by 10.3%, with consumers buying 444 million mobiles by the end of 2003, up from 402 million in 2002. This trend is set to continue for the next five years, with handset sales forecast to reach 689 million by 2008.
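    A quick sanity check of the arithmetic in these forecasts (the reader's own illustration, not ARC's) shows the quoted figures are self-consistent:

        # Sanity check of the ARC Group forecasts quoted above (illustrative only).
        units_2002 = 402e6                # handsets sold in 2002
        units_2003 = units_2002 * 1.103   # forecast growth of 10.3%
        print(f"2003 forecast: {units_2003 / 1e6:.0f} million")  # ~443m, matching the quoted 444m

        # Implied average annual growth from 444m (2003) to 689m (2008)
        cagr = (689 / 444) ** (1 / 5) - 1
        print(f"implied 2003-2008 growth: {cagr:.1%} a year")    # ~9.2% a year
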
    The report saw a noticeable change in the market shares of the major handset vendors in 2002, although the top two, Nokia and Motorola, remain the same.
    The most notable rises are Samsung, which has increased its worldwide share to around 12%, and LG, which is doing well in the CDMA market. Siemens also saw its share grow in 2002, although market share for Sony Ericsson and Alcatel has slipped. For the first half of 2003, the top four remained unchanged, although LG was hampered by the SARS virus, and Sony Ericsson staged a comeback to push up its market share.
    Overall, Europe lost sales last year owing to the market being over-reliant on replacement sales, and growth is expected to be slow up to 2008.

    The ARC Group’s view that Samsung is a coming threat was vindicated by VisionGain, which produced a report aggressively titled “The Samsung Report — a threat to Nokia domination?”
    The survey carried out for the report amongst industry executives found that 35% of respondents expect Samsung to gain the most market share in the handset market in 2003.
    VisionGain said that Samsung Electronics is currently the third largest global handset manufacturer with a 9.8% share of the overall market in 2002 and an ambition to reach a target market share of 11.6% by the end of 2003. Visiongain believes that Samsung will eclipse Motorola by 2006 — posing a stronger threat to Nokia.
    The report finds that one of the major factors in Samsung’s favour is its openness to a variety of operating systems and its extensive interest in both CDMA and GSM.

    Tariff cut brings rewards


    Cosmote, the leading Greek mobile operator, has credited increased traffic for a reported rise in revenues of 13.8% for the first half of 2003.

    Half-year revenues rose to EUR 574.6 million and net income was up 16.7% to EUR 235.6 million. EBITDA margin stood at 41%, a slight rise on the equivalent 2002 period.
    Although the operator cut tariffs by an average of 25%, Cosmote carried 2.1 billion minutes on its network during the first half of the year, an increase in traffic volumes of 32% year on year. This translated into an increase in airtime revenues of 17.9%.
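    A rough calculation shows what those two figures imply together (the reader's arithmetic, not Cosmote's): the effective airtime yield per minute fell by only around 11%, well short of the headline 25% tariff cut.

        # Implied change in effective airtime revenue per minute (illustrative).
        traffic_growth = 0.32      # minutes carried, up 32% year on year
        airtime_growth = 0.179     # airtime revenues, up 17.9%
        yield_change = (1 + airtime_growth) / (1 + traffic_growth) - 1
        print(f"revenue per minute: {yield_change:.1%}")   # about -10.7%
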
    ARPU for the period was stable compared to 2002, at around EUR 28. This included absorbing the effects of the increased number of pre-pay customers and the decrease in tariffs. ARPU from contract subscribers, on the other hand, was up 8.4% to EUR 46.4, which the operator attributed to increased usage resulting from its tariff cuts.
    Data contributed 17% of total revenues and roaming a further 2%.
    AMC, the group’s Albanian company, reported revenues that were largely stable, although EBITDA for the half year was down 17.1% at EUR 28.1 million.

    What now for Bluetooth?


    Bluetooth is a technology with a long and difficult development history that has seen it change identity with all the aplomb of a chameleon. But has it, as some claim, lost its way amid the changes? Steve Rogerson sends a health check from this year’s Bluetooth World Congress.

    The Bluetooth industry nervously celebrated the technology’s fifth birthday at the Bluetooth World Congress in Amsterdam in June. Nervously, despite curves showing steady growth and market watchers predicting more; nervously, because those curves are below the levels proponents wanted; and nervously, because the overall market has yet to accept the technology as a major player.

    On the surface, the figures look good. After a total of 40 million Bluetooth-equipped units shipped in 2001 and 2002, shipments this year alone are set to hit the 75 to 80 million mark. Despite this, market acceptance appears poor. Indeed, according to a survey by Frost & Sullivan, Bluetooth has registered little more than a blip on the corporate radar.

    The survey looked at companies across 11 countries in Europe and found 69% had no plans to use Bluetooth, 22% were thinking about it and only 9% were already using it. Compare that with wireless LAN, where 42% are already using it and 15% have the technology in their plans for the next 12 to 18 months.
    “The penetration of enterprises is fairly low,” said Michael Wall, an industry analyst at Frost & Sullivan, “but there are signs that it could improve.”

    However, the Bluetooth industry is facing an identity crisis. It would like to be a corporate player, yet it knows that its base market so far is the consumer sector; it also knows that if all Bluetooth succeeds in being is the technology of choice for connecting headsets to mobile phones, it will have failed. Perhaps it was this that evoked a note of desperation in some of the speakers. There were even noticeable groans when John Hodgson, chief executive officer at Cambridge Silicon Radio, one of the pioneers of Bluetooth, said confidently that the killer application for Bluetooth would be linking hi-fi systems with their speakers.

    Dead-end niches

    No one would deny that this is a possible niche market for the industry, and one that should be investigated, but to say that it is the killer application that will drive Bluetooth forward is not what the industry wants to hear.

    In fairness, Hodgson was confident about the technology’s future in general, announcing that Bluetooth will ship twice as many units as Wi-Fi this year even though it has been around only half as long. The target, though, he said, was 500 million units a year, and to reach that, “We need to do a lot better as a community.”

    A better base on which the industry can build its future is the automotive market, and on that just about everyone agreed. New safety regulations, both implemented and on the horizon, look set to outlaw driving with a mobile phone held to the ear. There is also a reluctance to promote driving with cables trailing across the driver’s body, as this too poses a potential safety hazard. A handsfree kit using a wireless technology seems the obvious answer, and Bluetooth is ideally placed to provide it.

    Ibrahim Mohamed, senior product development manager for AT&T Wireless Services in the US, explained how momentum was now building, stating, “Any distraction in the car is a distraction. A lot of people are talking about distractions and not just phones.” But safety, he said, was not the only driver (pardon the pun) in the automotive market, pointing to statistics that show 60–70% of all calls from mobiles in the US are made from within a car. Users in their cars have no communications alternative, so making calling easier and safer could be seen as a revenue generator for mobile operators.

    “We need to make this convenient and high quality for the user,” he said. “Bluetooth makes it easier to make calls and is the only real universal car kit.”

    Others, however, were still looking to the so-called personal area network (PAN) as the saviour for Bluetooth, prompting Alex Hum, a senior research and development manager at Orange France, to say, “We don’t talk about killer applications but a killer environment where what is personal to the user is important.”

    PAN promise

    True, you might say, but he then raised a few eyebrows among those calling for the industry’s jargon to be reduced when he said operators should become “life service providers,” offering a range of devices to consumers. And he closed a few ears when he took his predictions into the world of science fiction, with talk of rings that can monitor temperature, moisture and so on, make decisions about a person’s state of mind, and trigger calls to them on that basis.
    “The ring could trigger a call to a friend saying the person needs cheering up so give them a call,” he said. “This can generate call revenue.”

    Such a suggestion again smacks of desperation and is not the type of practical, here-and-now application that people are looking for.

    Despite such silliness, the idea of the PAN is potentially sound. It refers to a situation where Bluetooth acts as a cloud connecting the different devices a person may be carrying. In fact, the new version of the Bluetooth standard — 1.2, due to be implemented in September — includes an upgrade that lets such devices share the bandwidth more intelligently.

    Personal gateway

    Helping this along is the personal mobile gateway (PMG), a technology from IXI Mobile that builds applications on top of Bluetooth. “European and Asian operators see PMG as an opportunity to increase ARPU,” said Joyce Putscher, a director at market watcher In-Stat MDR. “US operators are lagging in this, being only at the earliest stages of investigation.”

    The concept behind PMG is that nearly all phones are built first as talking devices; only a small percentage are built as all-in-one devices, yet those are the heavy consumers of data. What PMG does is turn a normal talking phone into a wireless router for other data devices. “The reason people don’t use data services,” explained Hans Reisgies, business development manager at IXI Mobile, “is the user interface on a phone. It is not conducive to using data, so PMG can route the data through Bluetooth to a thin client device such as a text manager. PMG acts as a router and server in a low cost cell phone.”
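    A minimal sketch of that routing idea (purely conceptual, with all class and method names hypothetical rather than IXI's actual design):

        # Conceptual sketch of a personal mobile gateway; names are hypothetical.
        class PersonalMobileGateway:
            """A low cost phone acting as router/server for Bluetooth thin clients."""

            def __init__(self, cellular_link):
                self.cellular_link = cellular_link   # the phone's GPRS data bearer

            def route(self, client_id: str, request: bytes) -> bytes:
                # A request arrives over Bluetooth from a thin client (say, a
                # text manager); forward it over the cellular link and return
                # the response to the client.
                return self.cellular_link.fetch(request)

        class FakeCellularLink:                      # stand-in for the real bearer
            def fetch(self, request: bytes) -> bytes:
                return b"response to " + request

        pmg = PersonalMobileGateway(FakeCellularLink())
        print(pmg.route("text-manager-1", b"GET news"))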

    The idea is that whereas the cost of an all-singing combined phone, camera and PDA may make some buyers baulk, especially among the younger audience, PMG lets the user buy the phone first and then add to it as time goes by and new services make data functionality more compelling. Reisgies said he expected three or four operators to launch PMG phones and devices this year, but Putscher believes that operators are only looking at using these devices for initial trials, and what happens next will depend on the results of those trials.

    “There probably will be some devices this year and more next year,” she said.
    IXI, though, is not keeping all its eggs in the Bluetooth basket and is making sure the technology will work just as well with wireless LANs and Bluetooth rivals such as ZigBee. “Our architecture is not dependent on the technology,” said Reisgies.

    No interference

    The Bluetooth industry itself has taken the first steps in acknowledging that it must work with other technologies rather than competing against them. This has manifested itself in the alteration of the Bluetooth specification to stop interference problems when Bluetooth shares space with, say, a wireless LAN. Version 1.2 has a frequency hopping system that avoids the frequencies other technologies are using. This version also aims to improve connection times and quality of service, and closes a security loophole that could let a hacker acquire access codes by scanning for Bluetooth transmissions.
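    The hopping change is simple to picture in code. The sketch below illustrates the principle only, not the real baseband algorithm: the radio keeps a map of Bluetooth's 79 channels and hops only among those not flagged as occupied by a neighbouring technology.

        import random

        NUM_CHANNELS = 79     # Bluetooth hops across 79 channels of 1 MHz each

        def next_hop(interference_map, rng=random):
            """Pick the next hop channel, avoiding flagged channels."""
            usable = [ch for ch in range(NUM_CHANNELS) if ch not in interference_map]
            # The specification keeps a minimum number of channels in the hop
            # set; as a simplification we fall back to the full set instead.
            if len(usable) < 20:
                usable = list(range(NUM_CHANNELS))
            return rng.choice(usable)

        wlan_band = set(range(22, 45))   # hypothetical block occupied by a WLAN
        print(next_hop(wlan_band))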

    “The only hacking threat we know about is covered in 1.2’s anonymity mode,” said Mike McCamon, executive director of the Bluetooth SIG. “And even then, Bluetooth is a short range technology, so you have to be close to make an attack. We know of no-one who has broken into a Bluetooth link, and that is not the case with other wireless technologies.”

    Despite such reassurances, security worries remain, particularly surrounding Feel technology. Being developed jointly by Sony and Cambridge Silicon Radio, this refers to a situation in which devices close to one another automatically make the Bluetooth connection and, in some cases, even transfer files without user intervention. For example, a digital camera placed next to a laptop could automatically download its photographs.

    It has been developed for the best of reasons — in response to complaints that Bluetooth is difficult to use — but from the initial response the developers know that if Bluetooth is to be accepted in an already cautious corporate market, then safeguards must be placed on such a development.

    Feel technology may be a Bluetooth offshoot, but it encapsulates the dilemma the technology faces. The industry is worried that Bluetooth is going to be left behind and so is striving for innovation, but such innovation can only succeed if it is firmly rooted in reality. It must have the maturity to convince potential users that the ideas behind it are solid.

    Bluetooth technology does have a lot of potential and does have markets where it provides a clear advantage, the automotive sector being an obvious example. It is also positive that, by changing its own specifications, it is acknowledging that working with other wireless technologies is a must. Building on these sensibly is the way forward for Bluetooth; thrashing around desperately for new and ever more far-fetched markets is a sure-fire route to nowhere. It was clear from the attendees at this year’s centrepiece event that both camps are still alive and kicking. Which will win, only time will tell, but the answer lies in the industry’s own hands.

    Test realities of 3G rollout


    When the first GSM base stations were installed, vehicle-mounted mobile phones routinely failed at cell boundaries. Now, 12 years on, a new technology is being introduced and we are forced to ask: are dropped calls a necessary part of the UMTS learning curve? Achim Grolman of Willtek Communications assesses the problems and how they are being solved.

    Compared with American CDMA wireless standards, UMTS defines far more network elements and interfaces. This has been done deliberately, as such granularity gives operators a choice of multiple vendors for each entity so they can buy the best products. However, the evolutionary nature of standards means that vendors may initially follow different interpretations of the UMTS specifications. Operators therefore face a complex task in making systems work effectively from the outset, and that requires a lot of testing and discussion with different suppliers. This will come as no surprise to those familiar with the development of cellular infrastructure, as there are no conformance test cases for network elements as there are for terminals.

    In the early 1990s, it took two to three years to get GSM networks going. For 3G, the complexity of integrating telecoms and IT network elements has added to the problem experienced then and, in truth, it is surprising how well the first 3G networks are already operating.

    As real network elements are knitted together, problems in the specifications are discovered and resolved, which means specifications are constantly being fine tuned. This process directly impacts on the testing environment as conformance test specifications have to follow these changes; test equipment and test cases then need to be validated. This is why test equipment is often not available before the network elements as test equipment and network elements are developing in parallel in the early stages of a new standard.

    Installing the RAN

    3G network operators have already installed base stations (or Node Bs, to use UMTS jargon) and radio network controllers in major cities. Installation in smaller towns and urban areas is also progressing, although much more slowly, as there is little revenue from the existing 3G networks to pay for further rollout. 3G is being installed under strict financial controls, and that means attempting to optimise the design of the network.

    To this end, simulation tools are in place to determine the best locations for the base stations. Yet, to be effective, these tools need to be calibrated against real-life tests using constant-wave or W-CDMA transmitters, measuring receivers and RF propagation test software. These efforts can minimise problems later, such as the need to acquire additional sites.

    Operators have learned from the less-than-optimal network layouts and configurations of the early GSM phase. For example, back then dozens of engineers from different vendors circulated around London to find out why their phones lost calls along the M25 in spite of strong carrier signals. Add the fact that W-CDMA behaves differently from GSM, making in-fill more complicated, and the benefits of taking the time to plan the network as effectively as possible are clear. However, getting it right in theory and in practice are not necessarily the same. Therefore, once installed, radio networks are verified and optimised with the help of test mobiles and drive-test software, pinpointing coverage gaps or problems caused by cell reselections occurring unnecessarily and/or too often.

    Handover techniques

    There may be only a few UMTS phones on the market – first-tier vendors of mobile phones are introducing their first-generation models now – but these phones, and the networks they operate on, already support most of what customers are used to from GSM phones: voice, SMS, MMS, WAP and web over packet data channels. Customer expectations are high, however, and on a technical level the handover of a call from cell to cell, now highly reliable in GSM, presents a very specific additional challenge.

    There are, in fact, several handover techniques in UMTS. Adjacent cells using the same carrier frequency perform a soft hand-off, in which both base stations hold a connection to the phone for some time so that, ideally, there is no disruption to the call. The softer hand-off takes the phone from one antenna sector to another of the same base station. A hard hand-off, which breaks the existing connection before making the new one, occurs when the carrier frequency changes.
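    The distinctions are mechanical enough to express as a small rule table. A sketch of the logic (field names are hypothetical, not from the 3GPP specifications):

        # Illustrative classification of UMTS handover types.
        def handover_type(serving, target):
            if serving["rat"] != target["rat"]:
                return "inter-RAT hard hand-off"    # e.g. UMTS <-> GSM
            if serving["carrier_freq"] != target["carrier_freq"]:
                return "hard hand-off"              # break before make
            if serving["node_b"] == target["node_b"]:
                return "softer hand-off"            # sectors of one base station
            return "soft hand-off"                  # two Node Bs hold the call

        a = {"rat": "UMTS", "carrier_freq": 10712, "node_b": "NB-1"}
        b = {"rat": "UMTS", "carrier_freq": 10712, "node_b": "NB-2"}
        print(handover_type(a, b))                  # soft hand-off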

    As Europe demands dual-mode operation, a special case, the inter-RAT (Radio Access Technology) hard handover, occurs when a call is transferred between UMTS and GSM. It is a new requirement and technically challenging. Indeed, the first successful handover from UMTS to GSM was achieved by Ericsson less than a year ago, so it is a function that early 3G networks have not supported.

    The core network provides the link between the two radio access technologies. When a UMTS call is due to be handed over to a GSM base station, the core network requests information about the new channel from the GSM base station system. Whereas the handover message would usually be sent to the phone via a GSM base station, the BSS now sends it to the core network, which encapsulates the normal GSM handover message in a UMTS handover message. This way, the GSM protocol stack in the phone can interpret the data and set frequency, timeslot and other parameters accordingly, even though the message is transmitted over the UMTS interface.
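    The encapsulation step can be pictured as one message carried opaquely inside another. A toy model of the flow (nothing like the real ASN.1 encodings):

        # Toy model of the inter-RAT handover message flow; not real signalling.
        from dataclasses import dataclass

        @dataclass
        class GsmHandoverCommand:          # what the GSM BSS normally sends
            arfcn: int                     # target carrier frequency
            timeslot: int

        @dataclass
        class UmtsHandoverMessage:         # what crosses the UMTS interface
            payload: GsmHandoverCommand    # the GSM command, carried opaquely

        def phone_receives(msg: UmtsHandoverMessage):
            gsm = msg.payload              # pass the inner message to the GSM stack
            print(f"GSM stack tunes to ARFCN {gsm.arfcn}, timeslot {gsm.timeslot}")

        phone_receives(UmtsHandoverMessage(GsmHandoverCommand(arfcn=62, timeslot=3)))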

    Changing times

    It is a complicated process, and the internal test methods and equipment used to analyse software interfaces are proprietary, at least for each first-tier vendor of terminal equipment. Later on, the second and third tiers will benefit from third-party tools and software written around generally available GSM/UMTS chipsets.

    In addition to testing hardware and software modules within the phone, measurements of overall performance are required, and vendors have to prove that their products are up to standard. Although UMTS protocol testers are available, the majority of the test cases are not, since they take some time to develop. RF and protocol specifications come first, then the conformance tests and, at the end of this chain, test case implementations from test equipment suppliers need to be validated. Note also that the inter-RAT protocol is still being optimised.
    One of these optimisations concerns adjacent-cell measurements. In order to hand the call over to the most suitable base station at the cell boundary, the network needs information from the phone about the received signal strength and quality of the surrounding cells. This is easy with the TDMA nature of GSM, which leaves enough time for the phone to monitor adjacent base stations. UMTS-FDD calls, however, are transmitted and received on permanent carriers. Only with the introduction of compressed mode, in which a user signal provided at a constant bit rate is transmitted at a higher rate for some of the time, is a gap left that the mobile can use for adjacent-cell measurements on different frequencies. An alternative would be dual-receiver designs, where the mobile phone receives on the assigned channel with one receiver and on neighbouring cell carriers with another. This approach, however, is more costly in terms of components, weight, power consumption and battery life.
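    The arithmetic behind compressed mode is straightforward. Under the simplifying assumption of a 15-slot UMTS radio frame whose full payload must be squeezed into the slots that remain, the required rate rise follows directly from the gap length:

        # Compressed-mode rate arithmetic (simplified illustration).
        SLOTS_PER_FRAME = 15    # a UMTS radio frame is divided into 15 slots

        def rate_factor(gap_slots: int) -> float:
            """How much faster the remaining slots must run to carry a full frame."""
            return SLOTS_PER_FRAME / (SLOTS_PER_FRAME - gap_slots)

        for gap in (3, 7):      # example transmission-gap lengths, in slots
            print(f"gap of {gap} slots -> rate x{rate_factor(gap):.2f}")
        # gap of 3 slots -> rate x1.25; gap of 7 slots -> rate x1.88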

    Now that the network knows the best base station to hand the call over to, it provides the phone with information about the new carrier. Depending on the direction in which the call changes from one radio access technology to the other, the terminal has 40 to 120ms (with preceding synchronisation) or up to 220ms (in the unsynchronised case) to switch technologies, synchronise in frequency and time to the new base station, set up the protocol and continue the call. One design approach, commonly used in early models, uses separate baseband chips for GSM and UMTS, which are more difficult to synchronise during handover. However, even the other approach, a combined chip, still poses a challenge in terms of both hardware and software design.

    While data connections are regarded as something of a special case now, they are expected to become more common with 3G. The problem is that a few applications will not survive the cell change order procedure because they time out before the mobile has synchronised and set up the link to the new base station. UMTS equipment will support concurrent voice and data connections; the data part, however, will most likely be lost in a UMTS-to-GSM handover because phones do not support both connections at the same time in GSM. This problem may be solved with voice over IP over both UMTS and GSM/GPRS.

    As becomes more obvious the further we move through the 3G structure, everything is much more complicated. Furthermore, the absence of protocol testers fully supporting these procedures, for testing both phones and networks, means all parties involved have to find a different method.

    Alternative test route

    This began when manufacturers of phones and network elements announced bilateral cooperation agreements years ago to further their 3G developments, but interoperability tests between virtually all manufacturers are required and are being executed now. This can be time-consuming but should not be seen as a luxury, as it avoids situations such as the one two years ago when a major handset maker had to withdraw its GPRS phone from the market because it did not work with one network vendor’s equipment. Interoperability tests are especially important for standards which are open to interpretation and/or provide many different options for doing the same thing. In other words, they are vital for new, emerging technologies.

    When the first stable versions of the UMTS standard came out, people expected ‘UMTS islands in a sea of GSM’ and to many it was natural to believe (or demand) that, from day one, terminals would be capable of handing over voice and data calls between GSM and UMTS. Reality, however, shows that handover has never been the first function to work with a new cellular standard, be it 1G, 2G, 2.5G or 3G. Engineers still need time to optimise hardware and software designs, especially in the handsets. There is much pressure from operators to get handovers working, because dropped calls are the first things customers discover to be wrong with a network and are a major obstacle to success. But this does not mean that UMTS doesn’t work — it simply does not fully support this feature yet. It is only a question of a few months until new GSM/UMTS phones are available, allowing users to roam seamlessly across air-interface technology barriers.

    Beyond test drives


    Operators are being driven to reduce costs while at the same time rolling out complex new services to grow average revenue per user (ARPU) — they are stuck between a rock and a hard place. Jeff Atkins of Actix explains how software that uses key data from the radio access network can lead to significantly improved productivity, resulting in lower costs and an improved ability to roll out and optimise new services.

    The once simple job of radio network optimisation, using just drive test equipment, is rapidly becoming obsolete as the challenges of debugging new data services bring new complexity to troubleshooting and optimisation processes. New and more advanced equipment is needed to keep pace. Indeed, market analyst firm Frost & Sullivan expects the market for wireless test equipment to grow from $1.33 billion last year to $1.91 billion by 2009 as the delivery of wireless services becomes more complex.

    Because the radio link is known to be the weakest link in a wireless provider’s network, it has traditionally been the first place to look when service problems occurred on voice networks. With complex new technology like 3G, however, subscriber-perceived service problems such as low data throughput can arise from a variety of causes, including unplanned server downtime, internet congestion and core network dimensioning, as well as coverage or interference issues in the radio network. Before engineers can begin to solve a problem, they must first be able to isolate the portion of the network responsible for it.
     
    To achieve this it is necessary to have an “end-to-end” view of network performance, and that is only possible by correlating data from a number of different sources — drive test equipment, infrastructure vendors’ proprietary call trace logs, protocol analyser logs from open radio and core network interfaces, and IP “sniffer” logs. That is why it is more important than ever to be able to bring these various sources together on a single platform.

    To optimise and troubleshoot a 3G network effectively, performance data needs to be collected from a variety of points in the 3G RAN. It is only by using data from a combination of sources that a full picture of the network’s performance can be obtained. For example, to find out that an FTP proxy is related to low throughput, and to understand how best to correct the problem, TCP/IP logs and drive test data need to be correlated on a common platform.
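    Conceptually, that correlation is a timestamp join across heterogeneous logs. A minimal sketch of the idea (record formats and field names are hypothetical):

        # Illustrative time correlation of TCP/IP events with drive test samples.
        def correlate(tcp_log, drive_test, window_s=1.0):
            """Pair each TCP event with the nearest drive test sample in time."""
            pairs = []
            for event in tcp_log:
                nearest = min(drive_test, key=lambda s: abs(s["t"] - event["t"]))
                if abs(nearest["t"] - event["t"]) <= window_s:
                    pairs.append((event, nearest))
            return pairs

        drive_test = [{"t": 10.0, "rscp_dbm": -85}, {"t": 11.0, "rscp_dbm": -99}]
        tcp_log = [{"t": 10.9, "throughput_kbps": 22}]
        for event, sample in correlate(tcp_log, drive_test):
            print(f"{event['throughput_kbps']} kbit/s seen at RSCP {sample['rscp_dbm']} dBm")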

    Information collection points

    The most common data collection points include the air interface (Uu), the RNC-Node B interface (Iub), the RNC-MSC interface (Iu CS), the RNC-SGSN interface (Iu PS), and performance counters and measurement programs (OMC).
    Each information collection point has different strengths and weaknesses: the type of information that can be obtained (e.g. radio link, circuit call or packet data information), the availability of data collection devices such as handsets, the granularity of the data, which affects its usefulness for solving specific problems, the ease and cost of collection, and the volume of data that can be collected. Wherever the data comes from, once collected it must be filtered and reduced before it can be used to make decisions on improving network performance. In addition, collecting and analysing the various sources of data at the same time allows efficient use of resources.

    Subscriber perspective

    Using equipment available from a variety of vendors, operators can drive around their networks measuring performance from the perspective of the subscriber. The equipment needed to do this typically comprises a special test mobile phone and a wideband scanner, connected directly to a laptop or indirectly through an intermediate hardware device. The scanners passively measure desired and interfering RF signals from base stations faster and with better accuracy than test mobiles, and therefore complement the measurements available from the phone. In many cases, scanners can detect the underlying RF causes of the performance problems detected by test mobiles. Some vendors also offer drive test equipment that can be operated by remote control, allowing equipment to be placed in technicians’ vehicles or fleet vehicles (such as taxi cabs) for automatic data collection.

    In addition, proprietary measurement programs that run on the switch or RNC enable operators to collect performance data for specified mobile phone numbers. The log files are often used to collect uplink performance metrics to complement the downlink performance measured during drive tests. These log files may be synchronised to drive test data or used independently.

    Straight from source

    Using protocol analysers available from a number of vendors, operators can collect performance data directly from key infrastructure interface points, including the Iu CS, Iu PS and Iub interfaces. Because these interfaces are based on open standards, the development of collection equipment and analysis software can be completed during infrastructure development, making it available for use during the planning and lab/field trial phases prior to system launch. Protocol analysers collect a wide range of data, from performance data on the packet and circuit interfaces down to RF data as reported by the user equipment.

    OMC performance counters are vendor-specific, proprietary statistical counters of key network events at a network element level of resolution (e.g. statistics for a cell). Operators have traditionally relied on performance counters to monitor the high-level performance of their networks, using collection software provided by the vendor, third-party software or in-house systems.

    Once data has been collected from all sources, it must be processed, analysed and archived. The processing of the data can be challenging for a number of reasons. Firstly, operators typically have a number of vendors for different types of drive test and protocol analyser equipment, each with a unique interface format. Operators also often use measurement programs from different technology networks (e.g. GSM and WCDMA) and/or different infrastructure vendors, each with a unique interface format.

    Data sets collected at different interface points may need to be synchronised so that they can be merged for troubleshooting across network elements. The sets may also be extremely large (many gigabytes), and key information must be filtered and reduced before it can be used to make decisions. Finally, formats are constantly being updated and the technology of the air interface is constantly changing (e.g. 3G rolling out on the back of 2.5G technology), which means many engineers have limited training and experience with the newer technologies.

    Multiple function support

    To enable operators to use the data effectively, their analysis platforms must support a range of functions. They must provide interfaces to a variety of vendors of drive test equipment, protocol analysers and measurement programs, and support open interfaces, which can typically be used to collect performance data well in advance of proprietary sources such as test mobile and peg counter data. They must support multiple technologies on one platform simultaneously (e.g. GSM/GPRS and WCDMA); reduce data through binning and standard database-style querying and filtering; and synchronise data collected from different network elements and sources to remove timing discrepancies. Interfaces into databases for storing collected statistics, and web-enabled reporting interfaces for extracting them, are also key, as is support for the latest technologies and vendor formats, along with a user interface that allows less experienced engineers to become effective quickly. Finally, analysis platforms need to embed engineering expertise into software to automate the analysis of large amounts of data.
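    Of those functions, binning is the simplest to illustrate: millions of geo-tagged samples are reduced to one statistic per map square before an engineer ever looks at them. A minimal sketch (field names and bin size are hypothetical):

        # Illustrative spatial binning of drive test samples.
        from collections import defaultdict

        def bin_samples(samples, bin_size_m=50.0):
            """Average signal level per square bin of bin_size_m metres."""
            bins = defaultdict(list)
            for s in samples:
                key = (int(s["x_m"] // bin_size_m), int(s["y_m"] // bin_size_m))
                bins[key].append(s["level_dbm"])
            return {k: sum(v) / len(v) for k, v in bins.items()}

        samples = [{"x_m": 12, "y_m": 40, "level_dbm": -80},
                   {"x_m": 30, "y_m": 45, "level_dbm": -90},
                   {"x_m": 70, "y_m": 10, "level_dbm": -70}]
        print(bin_samples(samples))   # {(0, 0): -85.0, (1, 0): -70.0}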

    Data collection

    A system can be designed to collect data from all available links from the air-interface through to the switch/SGSN. Data can be collected in discrete log files and processed through a desktop application for manual, on-the-spot analysis. It can be collected from any source and processed and loaded into a database system from which it may be served up through a web browser or other client.

    Mobilkom is one of the first European mobile operators to launch a public UMTS network, having selected Actix’s RVS and ANR solutions to be at the heart of its network optimisation strategy. Testing using Actix’s solutions began in mid-December 2002 and the network went live on 16 April 2003. Other operators using Actix’s 3G technology include Swisscom and Hutchison’s 3G networks in both the UK and Italy.

    These operators are streamlining their processes, doing them faster and better. ANR and RVS are being used to embed processes into software, accelerating the rate at which rollout can be accomplished, reducing the impact of the shortage of engineers skilled in 3G technology by embedding expertise into the platform, and improving quality of service by performing complex analyses of problems that were not previously possible.

    Ultimately, being able to bring 3G services to market faster and provide an enhanced end-user experience will be critical to operators taking advantage of the only foreseeable long-term revenue growth opportunity.

    Time to learn


    It seems to me that anyone employed in the mobile industry today needs to be an expert in something and competent in everything and that is a very tall order. The distinctions between job specifications are continually blurring, while the skills required to do any job adequately only ever increase.

    To be most effective, each group of employees needs to understand the demands and constraints of those they deal with. For example, those responsible for negotiating content deals surely must understand the capabilities of the network over which this content must be accessed and the constraints of the mobile environment, as well as having a clear handle on the demand, market price, profit share and rights management issues that are directly associated with the negotiations. From the other side, it is equally important that the network engineers understand the traffic and loadings that are being and will be placed on the networks they are responsible for.

    However, everyone is under such pressure to deliver within their own job function that they have, and are given, no time to learn. It is a vicious circle: no time to learn leads to more opportunity for communications to break down and an increased likelihood that departments within operators, or indeed vendors developing products, will pull in different directions.

    The mobile communications industry is just beginning to show signs that it has identified the way out of its slump and is following that path if not with gusto, then with a strong determination to succeed. Data usage is increasing (see pages 16-17 for more details) and costs have been taken in hand, never to escape again. The green shoots of new data-driven businesses are beginning to poke their heads above the ground and take the first bleary-eyed look at the world. However, if they are not to stagnate at an early stage, those in the business of making mobile communications work end-to-end have to ensure that their own businesses are equally integrated.

    This is a process that will not just happen. You cannot rely on the undoubted enthusiasm of employees to understand what is going to deliver a unified view — it has to be built into the business process. It is not a mobile or communications-specific requirement, it is a basic business requirement and one that many organisations large and small get wrong.

    Many businesses get by because key employees make it their business to learn about the entire company and not just their own area of responsibility; few make it a real objective. For the mobile industry at this delicate time in its development cycle, failure to forge links amongst the departments whose work impacts directly on others will cause problems in the medium to long term. Delayed service rollouts, untimely launches and worst of all, services which fail to work effectively, will inevitably result. Time is always the most precious of commodities but failing to find the time to build such cohesion into a growing mobile data business will, I believe, lead only to a surfeit of time to contemplate the consequences.

    GPRS ready for take off


    2003 is finally going to be the year of data — and not just SMS. MMS, Java and WAP are all enabling operators to deliver cohesive and exciting service offerings to market and at last, they are doing it. Catherine Haslam reviews some of the progress made to date and the challenges ahead.

    Earlier in the year, Mobile Europe received information about a new report which will remain nameless to protect the innocent. It professed to give details on levels of GPRS usage. However, it boldly stated that MMS-based traffic had been taken out of the figures to give a truer representation of GPRS traffic. What? Where have these people been for the past three years — MMS and other such customer-friendly applications are exactly what GPRS is all about. It’s like saying to get a truer picture of motorway traffic we took domestic car use out of the equation.
    Fortunately, where such attitudes were once the norm, they are now consigned to retirement homes for the technology bewildered. This leaves those who understand the value of the mobile data proposition to launch services based on applications such as MMS, Java and even WAP, although say it quietly so as not to scare those who still bear the scars from WAP’s early days.

    Data usage may not be exploding but it is growing steadily. Vodafone Live!, which combines multimedia messaging, WAP information services and game downloads in an easy-to-use, easy-to-understand proposition, prompted a collective sigh of relief from the entire industry. It may not be perfect but it was a very definite pointer in the right direction.

    Indeed, figures on GPRS usage to the end of March 2003 suggest that GPRS packet data is approaching critical mass. Nearly all new handsets launched this year are GPRS-based and MMS-compatible, and by the end of the year 75% of terminals will be Java-enabled. MMS handset penetration had reached around 3% by March in the leading markets of Germany, Italy and the UK. That may not sound much, but it has been achieved within six months or less of the MMS service launches that began last autumn.

    Vodafone Live had 240,000 subscribers in the UK at the end of March, while O2 UK claimed 530,000 MMS and Java customers by the end of Q2 and a 21% growth in GPRS usage in Q1.

    Both TIM and O2 Group claim average MMS usage of five messages per month, a figure at the top end of their expectations. In some ways it is difficult to break usage down into its basic technologies as they are tied together in service offerings. Will an O2 Active or Vodafone Live customer know whether MMS, Java or WAP is being used to deliver what they are looking for? Probably not, and that in itself is a massive achievement for an operator community whose first priority has been network transmission. Indeed, the growth in WAP usage is substantial, despite, or perhaps because, users no longer see the service they are accessing as WAP but rather as the latest news, sport, weather and so on. For example, O2 in Germany has seen a 165% rise in WAP page impressions year on year to the end of April 2003, with 66% of this travelling over GPRS.

    Undeniably, two major drivers for the growth of GPRS have been the availability of a range of functional, attractive colour terminals and the advances made to support roaming and interoperability. T-Mobile claims the largest selection of GPRS roaming agreements, at 35, and the rest of Europe’s major operators are not far behind. Obviously, the major groups began with their own members, not least because it was easier to maintain service continuity, but now roaming agreements are being made outside these protected environments.

    To continue this growth requires the continued development and encouragement of content. One major growth area identified by Lutz Schueler, senior vice president of product management and strategy at mmO2, is music. He explained that the music industry was initially uninterested in the kind of pricing levels that mobile operators were suggesting for music downloads. However, things have changed and Schueler states, “Blank CD sales now outstrip those of music CDs. This has led to an increased interest in mobile music downloads from the music industry. €2 per song is now appealing to them.”

    O2 has been running a trial in Germany and the UK with 150 users in each country equipped with a Siemens device with integrated music player branded to O2. Due to finish as Mobile Europe went to press, the trial had seen 4000 downloads made with an average of five per user per week.

    This is just one example of the many trials of new services being run by operators across Europe. New services are being launched for the consumer and business markets (see page 8 for the latest business service launches) and customers are beginning to find something in these ever-growing portfolios to attract their attention. The aim, of course, is to increase revenue, and Schueler points to a €0.90 increase in ARPU from data in 2002.

    It is, at last, a more positive view. There will be services that fail, but that has to be part of the data experience and is not a sign of an operator’s failure. The key to success will be the speed with which an operator can identify the succeeding and failing services and react to that information. However, should data usage growth really take off as we hit critical mass, the pressure will be felt in the radio network once again.

    Having started off by talking about the need to get the service side right, it may seem strange to return to network issues so quickly. However, the financial pressure on mobile operators means that many are looking to GPRS to deliver far more over a longer period of time than was originally envisaged.

    Laith Sadiq, director of marketing strategy at Motorola GTSS, explains, “Two to three years ago, GPRS was just seen as an interim technology — a short stop on the way to 3G. Now, operators are looking to make the most of their 2G networks as 3G won’t be seen in anything more than major cities for many years to come.”

    As has been covered in some detail in the last two issues of Mobile Europe, a great deal is being done to make the Quality of Service (QoS) delivered over GPRS more predictable and controllable. However, GPRS could increasingly be seen as the weakest link in the supply of packet data services.

    Dave McGlade, CEO of O2 UK, explains, “We need to find ways to get more out of GPRS; to see how we can push the boundaries. Certain amounts of spectrum are set aside for specific applications and then we manage the capacity after that. We also keep a close relationship with our sales channels so that we can see where the capacity will be needed, as well as planning headroom in the network.”

    McGlade was also clear in his belief that GPRS still has a big role to play. “We have learned from our GPRS experiences. Being first is not always the best thing. It’s important to watch the bleeding edge but O2 is no longer there…For UMTS to really fly you really want enough voice and data demand to justify it. I don’t see that for a long time. It’s not the time to move to 3G. We need to get the most out of GPRS and move when we’re ready.”

    McGlade is hardly preaching to the unconverted, as many European operators have similar attitudes. Indeed some, such as TIM, have taken a renewed interest in EDGE, which takes the GSM base infrastructure to its limit in terms of data throughput but requires considerable investment in equipment upgrades. However, according to Sadiq, a better answer to the speed and capacity issues currently lying dormant in Europe’s GPRS networks lies with coding schemes 3 and 4 of the GPRS specification.

    Results from Motorola’s field trials of CS3/4 demonstrate that throughput, once overhead is taken into consideration, rises from 36kbit/s with CS1/2 to 64kbit/s with CS3/4. According to Sadiq, “64kbit/s is a key speed. With this type of performance GPRS can handle data requirements for the next few years…These coding schemes can help remove bottlenecks and push out the UMTS cross-over point.”
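    Those figures are consistent with a three-timeslot handset and the nominal per-timeslot rates of the GPRS coding schemes, as a quick illustration shows (the reader's arithmetic, not Motorola's):

        # Nominal GPRS per-timeslot rates in kbit/s, and a 3-timeslot handset.
        CS_RATES = {"CS-1": 9.05, "CS-2": 13.4, "CS-3": 15.6, "CS-4": 21.4}
        TIMESLOTS = 3
        for cs in ("CS-2", "CS-4"):
            print(f"{cs} x {TIMESLOTS} slots = {CS_RATES[cs] * TIMESLOTS:.1f} kbit/s")
        # CS-2 x 3 slots = 40.2 kbit/s (about 36 after overhead)
        # CS-4 x 3 slots = 64.2 kbit/s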

    These more advanced coding schemes certainly do deliver faster speeds. However, they are not necessarily the answer to every operator’s needs.

    Those operators, of which there are quite a few, with infrastructure from vendors other than Motorola face a more difficult and expensive task. Dave Williams, mmO2’s CTO, explains his company’s position: “We did a paper study on CS3/4 but found it’s too expensive to deploy as Motorola equipment is now only a small part of our networks. In the UK, the majority is Nokia and in Ireland and Germany, it’s Nortel.”

    While O2 has set its sights elsewhere, Mobilkom Austria is one of six commercial networks in the Europe, Middle East and Africa region to have chosen to deploy CS3/4, and a further eight or nine are trialling the system. The interesting point about Mobilkom is that it was one of the first to deploy UMTS in Europe, yet it has still seen fit to improve the capabilities of GPRS, demonstrating once again that the UMTS island concept is fast becoming the only real option for 3G deployment.

    GPRS may have been introduced as a network technology in 1998 by O2 (then BT Cellnet), but it is only since the second half of 2002 that the networks have been solid and handsets, applications and content have come together in a way that drives up usage figures. It took a long time to happen, but it could be a relatively short period before those same networks begin to struggle for speed and capacity. More performance has to be gleaned from the GPRS networks, and whether it is through advanced OSS features delivering guaranteed QoS or basic technology enhancements such as EDGE and GPRS CS3/4, operators are starting to build plans now.

    Voice still the killer app for enterprise


    All the hype in the mobile enterprise market today surrounds data but, according to Lars Svensson, this is only half right as the biggest market with the best potential remains voice. Indeed, he suggested that mobile operators could “double the rate of mobile penetration into enterprises overnight” by installing Ericsson’s voice software.

    Svensson was talking about voice services with a difference — integrated voice communications for the enterprise market in which all the services available on the fixed PBX are extended to mobile terminals. This is provided via Ericsson’s server-based software which can be added to any existing digital PBX, irrespective of vendor. Not only will this provide standard PBX functions such as call forwarding, group pick-up etc, but also allow the mobile device to be integrated with more advanced functions such as calendar, contacts and the enterprise’s internal directory. In the future, there is also the possibility to add such capabilities as listening to email, text-to-voice and voice-to-text.
    For the enterprise, the benefit comes in the form of greater control over mobile costs for the IT manager, as well as the operational efficiencies that can be achieved by establishing truly unified communications and messaging systems.
    Svensson believes that the growth potential for voice in the enterprise market is compelling for mobile operators. He illustrated this by citing Telia’s experience in Sweden, where traffic has increased dramatically and, most significantly, enterprise calls are six or seven times longer than average. He explained the scale of the opportunity: “If Telia can mobilise 10% of PBXs it would create as much traffic as currently exists on the network.” This, he suggested, was typical of the northern European market; he claimed the market was “exploding in Germany and Sweden” and that Ericsson’s next target would be the UK, starting now.
    In terms of the investment required from the enterprise, it depends whether the solution is provided as a managed service by the operator or direct to the customer. When bought as a service, the cost can vary from UKP10–100. However, Svensson claimed, “Enterprises shouldn’t pay more overall.” He pointed to Ericsson’s own experience as proof: “Our experience was that overall costs fell by 22%.” This resulted from a reduction in fixed call-backs and the like, and a better deal on mobile traffic rates.
    The only requirement from the enterprise is for all handsets to be configured properly. This is something that can be aided by such products as IBM’s device management software. On the operator side, the major consideration is capacity. “It is possible that the increase in mobile traffic this delivers is too much and causes capacity issues. To avoid such problems, both Telia and Vodafone have taken deployment steadily in Sweden,” Svensson concluded.

    Motorola extends TETRA terminal portfolio with MTH650


    Motorola has officially launched the MTH650, its new TETRA two-way radio for the emergency services market, following extensive research with public safety users.

    The radio offers greater ease of use and includes features such as a screensaver, programmable keys and a multi-functional rotary knob. The radio, which was unveiled privately to customers two months ago, has already been sold to eight public safety forces.
    The product is available globally to all markets that use TETRA and has been designed as “an addition to, not a replacement for any existing product in Motorola’s TETRA portfolio,” according to Joanne Moore, director of TETRA terminals.
    Extensive research was conducted over the past nine months with more than 100 users and research teams. This identified flexibility, safety, ruggedness, comfort and ease of use as the most important features in a two-way radio.
    The ability to personalise the radio is a key feature for the future of radio communications in the emergency sector, as increased functionality means that personnel use the same radio in different ways.
    To meet this need, the MTH650 includes a top-mounted multi-purpose rotary knob that changes functionality: officers can use it to switch between functions such as talk groups or to scroll up and down the menu. In addition, the radio boasts programmable keys for one-touch access to popular functions; multiple antenna options, including a small helical antenna and a whip antenna for increased coverage in fringe areas; and dual microphones, top and bottom, to ensure optimal audio quality whether the user is speaking directly into the radio or into the top of a radio fixed to a lapel.
    It is the only radio to have a screensaver, which provides increased security by preventing private information from being on display, especially when the radio is mounted on the lapel of a police officer.
    The screensaver can also be customised to show the organisation’s logo or the user’s identification number, or even used to promote services such as ‘crimestoppers.’
    A “hot mic” function gives handsfree operation in an emergency and this, combined with increased microphone sensitivity, means that a user can stay in contact even if the radio is dropped.
    A Personal Identification Number (PIN) option and lithium-ion batteries add to the package, while the whole screen, including the navigation keys, can be flipped upside-down so that a lapel-mounted radio can be read with ease. A vibrate alert and a flat bottom, which enables the radio to stand upright on a flat surface, complete the package. There is also a range of accessories available, including all the usual items associated with the public safety market.
    Moore concluded, “We have already had positive feedback regarding the size and weight, and the screen saver option has proved highly popular.”

    Team Simoco takes SI route out of SDS ashes


    Team Simoco, the company born out of the ashes of Simoco Digital Systems’ TETRA and Simoco’s traditional analogue PMR businesses, has put itself back on a solid business footing, according to Team Simoco’s general manager, Kevin Paul.

    “The aim has been to drive the business back to sanity…It’s not about technology, it’s about business,” said Paul. “We have achieved 12 months of solid trading, have double the budget on the order books for 2003 and have demonstrated to Telecom that this has been a good acquisition for them.”
    This has been achieved by concentrating on the needs of customers and creating a business that is heading more towards systems integration than pure systems provision.
    Having built its initial success on the analogue PMR business, which Paul had been involved in running prior to SDS’ collapse, Team Simoco is now looking at ways to add the digital capabilities of TETRA to the company’s customer offering. However, Paul was keen to point out that Team Simoco would not move back to the technology-led business of SDS. “We will look to add the digital element but without the risk.” The risk is reduced because Team Simoco is not blazing the TETRA technology trail.
    Instead, Paul stated, “This is nowhere near the cutting edge. The plan is to move towards being a systems integrator that doesn’t focus on any single technology.
    “SDS in its later years was a fantastic free consultant. There was huge commercial benefit to what was being given away free. We are now using that to add value to our offering in the form of the skills and applications that we can offer customers to make their businesses more effective.”
