
    That’s entertainment!

    With a whole raft of television programmes now offering an interactive voting element to the mix, the potential revenue from messaging is enormous. There are, however, challenges to overcome, says Paul Harvey, global market development manager, product messaging at LogicaCMG Wireless Networks.

    SMS voting has, within a very short time, become the ‘norm’ in the world of reality TV, with series including Big Brother, Blind Date, I’m A Celebrity Get Me Out Of Here and Pop Idol all cashing in on the lucrative revenue-generating application.

    And it’s easy to see why.  Last year, Endemol’s Big Brother 3 in the UK received more than 13 million text votes, generating approximately £1.3m in revenue for mobile operator O2.  Interactive TV (iTV) veteran MTV introduced Videoclash, an SMS-led music programme, in 2001, which now receives over 40,000 SMS messages every hour from viewers registering playlist votes and posting greetings to the real-time message board.

    Analysts agree that text messaging will play a major role in fuelling the growth of iTV in the coming years, helping it grow 18-fold in Europe to top £12.3 billion by 2007.

    But according to industry views raised at the Mass Media Messaging Seminar, at The British Academy of Film & Television Arts, there are still barriers to SMS voting, MMS and new 3G services reaching their full potential in the entertainment industry.  Although the market offers plenty of opportunities spanning TV, music, sports and gaming — as well as drawing a diverse age and social group customer base — there are significant challenges facing providers looking to cash in on this new revenue stream.
    As Steve Van Zanen, VP market development for messaging at LogicaCMG Wireless Networks points out, “The mobile phone can be a lot of things to a lot of people — a lifestyle device, a location tool or a personal management tool.” But making the mobile phone work as an entertainment device requires the support of handset and software developers to remove product barriers such as compatibility between devices, display quality and so on, as well as support from carriers to buy into the business model on an integrated, pan-European scale.

    Furthermore, it requires a close co-operation between network operators, the entertainment channels and content developers to surmount issues of revenue sharing, billing and interoperability.

    In terms of impact on take-up of interactive messaging, however, quality of service and revenue assurance are clearly emerging as two of the most prominent concerns facing companies involved in this industry.

    Assuring that all votes are polled is essential — as seen with last year’s Pop Idol (UK) vote when BT worked hard to avoid network meltdown from the 8.7 million premium rate fixed line phone votes that flooded in for Will and Gareth. If votes do not get through, media and public outcry over alleged poll rigging could ensue. The negative impact on the reputations of both the broadcast companies and the network operators — fixed and mobile — of this kind of situation can be both significant and enduring, potentially affecting similar polls in the future and fuelling discussion in the media, school yards and pubs across the country.

    If problems can emerge for relatively simple voting mechanisms like fixed line voting, then consider the complexities that could arise with SMS voting.

    Tony Riley, co-director of Mobile Enterprise magazine, is sceptical that all of the votes cast via SMS for reality TV programmes are actually successfully counted in the final result.  “If you consider how sometimes texts can take hours to arrive when they are sent from a friend, in peak times, how can operators be sure all the SMS votes have arrived if the result is announced just a few hours later?”

    Although traffic peaks can occasionally cause problems, it is fair to say that most UK operators now have robust telecoms-class messaging solutions in place to ensure that overloads do not hamper service quality.  The log-jam on SMS networks at the final whistle of this month’s Rugby World Cup final illustrates that there are occasions that potentially challenge service capacities, but these days this is more the exception than the rule.

    Lack of understanding

    However, issues continue to emerge as a result of a lack of understanding about the applications that run at the edge of the SMS network, how these can affect the reliability of the system and, subsequently, the effect on quality of service that the customer receives.

    When a viewer sends a vote they need to enter the short number as well as type the name of the contestant they are voting for, e.g. VOTE GARETH. As a result, applications need to receive the vote, acknowledge who was voted for, tally the votes and then communicate back to the Short Message Service Centre (SMSC) that the correct vote has been counted. The SMSC then needs to contact the voter within an acceptable time span to advise that the correct vote has been received successfully.
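
    As a rough illustration of that round trip, the sketch below shows a vote handler of the kind described above. It is a minimal sketch only: the contestant names are taken from the example in the text, while the function names and plain-Python interface are assumptions; a real deployment would sit behind the operator's SMSC protocol (for example SMPP) rather than simple function calls.

    ```python
    from collections import Counter

    VALID_CONTESTANTS = {"GARETH", "WILL"}   # hypothetical contestant list
    tally = Counter()

    def handle_inbound_vote(sender: str, text: str) -> str:
        """Parse 'VOTE <NAME>', count the vote, and return the confirmation
        text that the SMSC should deliver back to the voter."""
        parts = text.strip().upper().split()
        if len(parts) == 2 and parts[0] == "VOTE" and parts[1] in VALID_CONTESTANTS:
            tally[parts[1]] += 1
            return f"Thanks - your vote for {parts[1].title()} has been counted."
        return "Sorry, we could not recognise your vote. Text VOTE followed by a name."

    # The application acknowledges back to the SMSC, which then confirms to the
    # voter within an acceptable time window.
    print(handle_inbound_vote("+447700900123", "vote gareth"))
    print(tally.most_common())
    ```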

    Understandably, if the networks and applications fail to deliver this integrated, end-to-end process, there is substantial room for error, poor quality of service and, ultimately, many dissatisfied voters. Fortunately there are solutions which provide capacity for data flow growth, reduced total cost of ownership and, as a result, a greater average revenue per user (ARPU) for network operators. All in addition to building loyalty amongst happy voters who come back time and time again.

    Clearly, the relationships currently existing between network operators, TV companies and content providers need to be developed before a truly seamless, customer-orientated service can be delivered.  As Daren Siddall, emerging platforms and devices media analyst at Gartner, points out, “…SMS voting, video clips and other technologies are gaining greater customer acceptance. However, we need to be careful and ensure that future technologies are easy to use for the consumer.”

    By building relationships and sharing the wealth of data that each party is able to mine on users, providers can move towards ensuring that messaging services strike target markets and that content is provided in a format that will be adopted. Understanding the market, the type and format of entertainment that each audience sector is looking for and what makes customers tick is a key to unlocking the potential revenues of mass messaging for this industry.

    John Delaney, principal analyst, Ovum, observed that “TV was originally seen as a passive media, something that you sit back and watch…[iTV works] because people like to get involved.  iTV and SMS TV provides a way for people to get passionate, influence decisions or at least discuss things with a community.”

    Another issue

    As TV firms and production companies take a cut of the return that network operators generate from TV voting, ring tones and other mCommerce, revenue assurance is another issue for the likes of MTV, Endemol and Flytxt. During the seminar, participants said they still had some reservations about SMS voting due to the potential revenue loss that can emerge from prepay mobile phones used predominately amongst the youth audience.

    Many prepay users have realised that they are billed on return of message, so if the inbox is full, the phone is switched off or the credit limit is overrun then the customer will not be charged. A proportion of zero is nothing, so this is not an attractive proposition to TV firms, especially with higher cost items like ring tones.

    According to Alan Coad, senior VP EMEA & South America, LogicaCMG Wireless Networks, this is an unnecessary leakage of revenue.  Technology to combat this fraudulent behaviour is widely available from companies that have introduced revenue assurance solutions to close these mobile operator billing loopholes. This removes one of the obstacles that are preventing messaging from reaching its full potential in the entertainment industry.
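
    Purely as an illustration of how such a loophole can be closed, the sketch below reserves the charge when a premium message is submitted rather than waiting for a delivery receipt that may never arrive. The class names and values are assumptions made for the sketch, not a description of any vendor's actual revenue assurance product.

    ```python
    class PrepayAccount:
        def __init__(self, balance: float):
            self.balance = balance

        def reserve(self, amount: float) -> bool:
            """Hold the charge at submission time, rather than charging only when
            a delivery receipt comes back."""
            if self.balance >= amount:
                self.balance -= amount
                return True
            return False   # insufficient credit: reject the premium request up front

    def submit_premium_sms(account: PrepayAccount, price: float) -> str:
        if not account.reserve(price):
            return "REJECTED: insufficient credit"
        return "ACCEPTED: charge held pending delivery"

    acct = PrepayAccount(balance=0.50)
    print(submit_premium_sms(acct, price=0.35))   # ACCEPTED, balance now 0.15
    print(submit_premium_sms(acct, price=0.35))   # REJECTED
    ```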

    Clarity of charges

    Other billing issues continue to hamper the mobile messaging industry with clarity of charges across domestic, as well as pan-European networks, required to stimulate mass adoption. Industry commentators often say that mobile operators take too much of a profit cut, forcing up the end charge to users, whilst content providers highlight issues that exist within the industry and call for distribution of revenues generated by SMS voting to be accelerated.

    Finally, with the growth of MMS and 3G, the issue of digital rights management (DRM) needs to be addressed before it can become a compelling offering for both consumers and content providers.  Matthew Kershaw, head of Interactive at MTV, summarised, “The DRM technology isn’t there, which is a hindrance to the usage of MMS and 3G technologies by TV and content firms.”

    There is great brand awareness to be achieved if a TV or music firm’s video-clip is texted to hundreds of people but there is a clear need for some form of forward-locking technology to encrypt messages that are sent on.  Without a DRM solution to protect the content, little revenue can be recouped from the activity and content providers will find themselves in the same situation music firms did with Napster.

    In summary, mass media messaging and mCommerce have the potential to become lucrative areas for the entertainment industry and operators alike. However, operators, TV producers and media channel owners need to forge closer relationships to ensure that the systems truly work. Without greater arbitration, the entertainment industry will be slow in its adoption of mass media messaging and all those involved will miss out on the significant branding and revenue opportunities available.

    Flexibility is the key

    With the death of the traditional, monolithic SMSC looking likely, operators will need to react quickly in order to offer new services, says Patrick Flynn, sales and marketing director for OpenMIND Networks

    SMS messaging is profitable for mobile operators and can contribute up to 15% of total operator revenue.  SMS Application-to-Person traffic is growing, but the vast majority of SMS is still Person-to-Person traffic — up to 90% of SMS revenue falls into this category and it will continue to be an important part of mobile operators’ P&L for years to come. Traditionally there is very little value added by the operator to this important business, mainly because, technically, SMS traffic goes through a closed and monolithic SMSC. If we could “open up” the SMSC and make it distributed, we could more easily introduce new services to Person-to-Person SMS messaging.

    Pressures for change

    The business as we know it will undergo a change in the next few years due to the pressures of price competition, increasing churn and the increasing age of the users.  These market pressures for change are already apparent to operators:

    • Competitive pressure has driven down the average revenue per SMS over the last few years from about €0.12 to about €0.08, depending on the market.
    • Churn is increasing, mainly due to young customers who are notoriously fickle and are not concerned about changing their mobile number in order to take advantage of competing deals from other mobile operators. The regulatory-driven development of Mobile Number Portability will further increase churn, but probably not as much as expected.
    • The ageing profile of SMS users gives an opportunity to develop new lifestyle services for older (30 plus) SMS-savvy customers who are well aware of SMS but are less inclined to chat as they become more drawn into a family-driven lifestyle. However, it begs the question: can we not develop more family-centred services?

    These pressures bring a new level of competition and change to this profitable business segment.  Operators will need to react by anticipating the change and bringing new and exciting services to their customers. Infrastructure providers will no longer be able to supply the traditional SMSC product as this market segment is already in decline.
    Specifically, what can we do to provide new and innovative services to the Person-to-Person segment?

    Mobile operators need the ability to generate new Person-to-Person SMS services. This is not easily done technically, because of the standards-driven and monolithic development of the SMSC.

    SMS infrastructure providers would service the mobile operator market better by opening up the SMSC or offering a “distributed” open SMSC or SMS-D. This would allow the easy and quick development of new Person-to-Person SMS services at a lower cost than traditionally expected in the mobile messaging industry. If the industry, overall, takes a longer-term view of what it takes to maintain customer interest in SMS services, then both mobile operators and SMS infrastructure providers will benefit in time. Simply put, if we don’t react to the importance of Person-to-Person SMS messaging, we will witness, and we are witnessing, the decline of the traditional SMSC market.

    In particular, the development of a PARLAY service development approach (that of open, technology-independent application programming interfaces) to messaging (both SMS and MMS) would be timely and opportune.  A PARLAY approach to messaging will allow mobile operators to stay ahead of the game by having a consistent way to bring services to market.

    Indeed, mobile operators need to ensure that they maintain the “value added” within their networks or lose out to the handset vendors who are adding value at a much faster rate to their products. In particular, standards such as Session Initiation Protocol (SIP) might drive the commoditisation of the networks to pure transport systems with the intelligence at the edge of the network. This is a concern that faces many mobile operators.  Clever development of the mobile networks will counteract these concerns.

    Innovative services

    So, where’s the beef? What sort of innovative new Person-to-Person services can be provided?

    Some examples of these new services are:

    SMS Person-to-Person Sponsored Messaging

    Blue-chip FMCG companies, such as Coca Cola and Nestle, are very interested in targeting segments of the mobile operators’ customer base. For example, suppose customers sign up for a Coca Cola summer promotion, which gives them 50 free SMS messages. Instead of giving them credit for 50 SMS messages in a lump sum, it would be much more effective if the customer experienced the campaign as follows (a sketch of the counting logic appears after the list):

    • Every tenth message sent by the customer to their friends is delivered for free.
    • Their friends see the message arrive in the normal manner and are not aware that it is sponsored.
    • When the customer sends this tenth SMS message they receive a message back from the Mobile Operator that says “Your last message was sent free — sponsored by Coca Cola”.
    • All the billing issues are automatically taken care of by the PEPS system for pre-paid or post-paid customers.
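
    The counting logic behind such a campaign is straightforward. The sketch below assumes a per-subscriber counter kept by the sponsorship platform; the sponsor name, free-message interval and confirmation text follow the example above, while everything else is illustrative.

    ```python
    from collections import defaultdict

    FREE_EVERY = 10                      # every tenth message is sponsored
    SPONSOR = "Coca Cola"
    sent_counts = defaultdict(int)       # messages sent per subscriber

    def rate_person_to_person_sms(msisdn: str, standard_price: float):
        """Return (price, notification) for one outbound Person-to-Person message."""
        sent_counts[msisdn] += 1
        if sent_counts[msisdn] % FREE_EVERY == 0:
            note = f"Your last message was sent free - sponsored by {SPONSOR}"
            return 0.0, note             # free message, plus a confirmation SMS to the sender
        return standard_price, None      # normal charging; the recipient sees nothing unusual

    for _ in range(10):
        price, note = rate_person_to_person_sms("+447700900001", 0.10)
    print(price, note)                   # tenth message: price 0.0 and the sponsorship text
    ```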

    The advantage to the sponsor is that the benefit — free messaging — is closely associated with the use of the service. Also the benefit is spread out, in this case over a number of weeks or months, and is reinforced by usage.

    Other campaigns or combinations are obviously also possible. The mobile operator itself could sponsor a “free message day” over the Christmas period, for example.  In this case the first (and only the first) message sent on “free message day” will result in an SMS back from the network saying “All SMS messages free today — sponsored by your mobile operator”.

    Other combinations and sponsorship strategies are possible and, obviously, the sponsorship of MMS Person-to-Person messaging is a natural development of this service.

    SMS Loyalty and Lifestyle Services (LALS)

    Another example of new Person-to-Person services involves customer segmentation and associated loyalty benefits. Most operators separate their base into lifestyle segments such as:

    • “Teenage social user”
    • “Business user”
    • “Sports orientated user”
    • …and many other segments

    Each of these segments is normally marketed to in a different manner and they, themselves, have different expectations of what they want from their mobile service. It would be a very powerful marketing tool if we could identify, in real time, from the pattern of messaging usage (SMS and MMS) what type of customer they are — in other words, which lifestyle segment they belong to. We can then automatically market to these customers by SMS or MMS, suggesting particular services that will appeal to that type of customer.
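
    A minimal sketch of what such real-time classification might look like is given below. The segment names follow the list above, but the usage features and thresholds are purely illustrative assumptions.

    ```python
    def classify_segment(sms_per_day: float, mms_per_day: float,
                         share_in_business_hours: float) -> str:
        """Assign a lifestyle segment from simple messaging-usage features.
        Thresholds are illustrative only."""
        if share_in_business_hours > 0.7 and sms_per_day >= 5:
            return "Business user"
        if sms_per_day >= 20:
            return "Teenage social user"
        if mms_per_day >= 2:
            return "Sports orientated user"   # e.g. heavy picture/clip traffic
        return "Other"

    # The operator could then push an SMS or MMS offer tailored to that segment.
    print(classify_segment(sms_per_day=25, mms_per_day=0.5, share_in_business_hours=0.2))
    ```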

    Furthermore we can reward loyalty among these customers by, for example, providing them with bonus messages or credit when they send 50 messages in one month. This happens in real time with an SMS from the mobile operator saying: “Congratulations, you have sent 50 messages; your next five messages are free. This loyalty reward is provided by your Mobile Operator”.

    This is a much more powerful customer experience than getting a loyalty bonus on their monthly bill or a credit in their pre-paid balance. Again, the benefit is closely associated with the action.

    Increased ROI with reduced Capex and Opex

    Whereas two years ago service providers were looking for return on investment (ROI) in their operational systems within 12-18 months, today the typical expected payback time is around 9-12 months. In the late 1990s we saw capital expenditure rise from an historic norm of around 15% of revenue p.a. to a record 35% as operators convinced themselves that growth was unlimited and they needed to expand fixed and mobile capacity to cater for that growth.

    Today, operators are being forced to examine ways in which they can protect and grow existing revenue in a much more challenging marketplace.  Flexibility, agility and simplicity are critical. The ability to add value in-flight to Person-to-Person services enables operators to tap into a previously unexploited revenue stream by sweating existing capital assets. 

    By introducing new Person-to-Person services it is possible to unlock business intelligence — to access vital customer information buried in today’s fragmented systems.

    Unless operators choose a platform that is flexible and open to expansion they are in jeopardy of missing out on market opportunities. There is no doubt that operators must provide varied value-added services to their subscribers or risk continuous erosion of loyalty, loss of new customer recruitment and diminished ARPU. 

    Conclusion

    There are many other SMS and MMS Person-to-Person services that mobile operators might want to introduce into their own markets. Mobile operators have a much deeper understanding of what will or will not work with their own customers based on the cultural, economic and social forces in their market. Up until now services such as PEPS and LALS were difficult services to implement with traditional networks but new technology today enables such services to be launched.

    The most important service that infrastructure providers need to supply operators is a paradigm shift in the service development environment. We need to be able to provide these flexible and innovative new services in a fast and value-added manner with an ultimate aim of maximising the lifetime value of mobile subscribers.

    Devil in the detail

    The challenge is on for UMTS handset manufacturers to keep the IC count down, as Howard Curtis, vice president global services at Portelligent, explains

    Analysts have paid considerable attention to the financial commitments that carriers made in spectrum auctions, and to the costs of introducing UMTS-capable base stations and networks, in forecasting the near- and middle-term prospects of 3G technology. Two further key components of the overall business equation that will determine when 3G becomes economically viable are the cost and complexity of UMTS handsets.

    Recent product teardown analyses conducted by Portelligent on three UMTS handsets designed for the European market — the NEC e-606, Motorola A830, and Nokia 6650 — reveal extremely high average values on several system-level metrics that are good indicators of relative complexity, and also good predictors of overall manufacturing cost. For example, on the dimensions of IC count, total silicon die area, and total component count, the UMTS averages are:

    IC Count: 68 devices
    Silicon Die Area: 9.86 cm2
    Total Component Count: 995 devices

    Comparable values for a representative EDGE-capable 2.5G handset of 2003 vintage are dramatically lower:

    IC Count: 13 devices
    Silicon Die Area: 1.93 cm2
    Total Component Count: 381 devices

    When estimated product manufacturing costs are calculated according to Portelligent’s cost models, the average total manufacturing cost of the UMTS handsets exceeds that of the EDGE example by more than 3x, while the average cost of IC devices in the 3G products is a whopping 5.3 times the EDGE case.
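
    For reference, the complexity ratios implied by the teardown figures quoted above can be computed directly; note that the cost multiples come from Portelligent's cost models and are not derivable from these three metrics alone.

    ```python
    # Average teardown metrics quoted above.
    umts = {"ic_count": 68, "die_area_cm2": 9.86, "component_count": 995}
    edge = {"ic_count": 13, "die_area_cm2": 1.93, "component_count": 381}

    for metric in umts:
        ratio = umts[metric] / edge[metric]
        print(f"{metric}: UMTS is {ratio:.1f}x the EDGE handset")
    # ic_count ~5.2x, die_area_cm2 ~5.1x, component_count ~2.6x
    ```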

    UMTS versus FOMA

    A comparison of the UMTS handsets along these system metrics with the first generation of W-CDMA “FOMA” handsets that were introduced in Japan in 2001-2002 is also instructive. The first three FOMA handsets analyzed by Portelligent had an average IC count of 32, silicon die area of 10.45 cm2 (with a range from 7.07 to 14.64 cm2), and total component count of 727.

    It is worth noting that, in contrast to UMTS phones, which must implement both the W-CDMA and GSM protocols, the first generation of 3G handsets that NTT DoCoMo introduced in the Japanese market supported only W-CDMA (and the inability to communicate with pervasive 2G infrastructure hurt FOMA badly in the marketplace for the first year after its introduction).

    The average silicon die area is lower in the first-generation UMTS phones than it was in the early FOMA handsets in Japan, while both IC count and total component count are significantly higher. This can be attributed to achievements in feature-size shrinkage in the semiconductor industry over the intervening two years.

    Range in UMTS design

    Although an analysis of the average level of complexity and average manufacturing costs of the three UMTS handsets paints a grim picture for the near-term business prospects of 3G networks, these averages do hide considerable variation.

    On many of the cost and complexity metrics we assess, the Nokia 6650 falls far below the UMTS average as outlined above. IC count, for example, is 29, versus an average value of 68 (and by comparison with a whopping 108 ICs in the NEC e-606). Silicon die area measures 4.54 cm2 in the Nokia 6650, versus a UMTS average of 9.86 cm2.

    How did Nokia achieve this efficiency in designing the 6650? The Nokia handset shows much higher levels of IC integration specifically designed to meet the requirements of UMTS communications than either the NEC e-606 or the Motorola A830, as well as an overall system design that is more efficient and better integrates the GSM and W-CDMA requirements.

    For example, the TI digital signal processor employed in the 6650 handles the baseband processing requirements for both protocols, as well as integrating an ARM core for applications support (these functions remain separate in the NEC handset). An analog ASIC produced by ST Micro handles the analog baseband requirements for both protocols, as well as providing multiple support functions including system power management, audio I/O, and the SIM card interface.

    Challenge for the future

    While the data transfer rates and potential feature sets offered by UMTS technology may exceed what EDGE can offer, Portelligent’s analysis of these three early UMTS phones indicates that the price is still steep in increased complexity and higher cost. The challenge to designers of 3G UMTS handsets is to achieve products that provide the consumer with the benefits of full 3G communications, but at substantially reduced levels of system complexity and cost. It will be very interesting to see how much progress the second generation of UMTS handsets reveals on these dimensions.

    A process of power

    Tony Dennis looks at the impact of chipsets on handset supplies in Europe, and examines what kind of influence they might have on future handset development

    With the increasingly complex nature of today’s mobile handsets — especially those being sold in Europe — the supply of chips and chipsets is playing an ever-more important role for handset vendors and their network operator customers. Poor consumer acceptance of specific handsets — due to low battery life and heavy, bulky casings — can frequently be blamed on the chipsets they use. Furthermore, delays in shipping certain smartphones and high-end feature phones have been blamed directly on component chip shortages.

    A classic instance of chip shortages occurred back in December 2003 when Tom Lynch, president for  Motorola’s PCS division was forced to admit that:  “[Shipping] unit volumes are lower than our customers are asking for due to supply constraints for the integrated-camera components.” His colleague, Bob Perez, general manager for supply chain operations with Motorola PCS, claimed his company was not alone in its suffering.  “The supply constraint for integrated-camera components is an industry-wide problem,” Perez maintained. “Motorola is feeling a larger impact because of the extremely limited supply of the smaller camera technology that we use for our handsets, which enables a compelling clamshell form factor and stylish design.” 

    Some observers cast doubt on whether the problem was genuinely industry-wide, though. Paolo Pescatore, a senior research analyst for EMEA wireless with IDC, claimed that it was, in fact, quite rare for handset manufacturers to be hit by chip shortages.  “It’s more a question of [the handset vendor] underestimating its own resources and capabilities plus poor management,” Pescatore told Mobile Europe. “Vendors know when there is going to be a blip in user demand — around Christmas — and they can easily plan ahead for it.”

    The general consensus, however, is that a shortage of camera lenses and associated digital optical components is an unusual experience for the handset industry in general. By contrast, low battery life and larger physical dimensions for handsets are blamed directly on component choice and this applies in particular to 3G handsets.

    The solution, according to Michael Civiello, vp for marketing and business development with specialist chip supplier, Zyray Wireless, is to employ a separate, dedicated 3G chip. Civiello estimates that in 2004 there will be a market for around 10-13 million W-CDMA handsets in Europe and that 95 per cent of the problems with 3G will have been ironed out. “Motorola and NEC’s 3G software and hardware was set in stone something like two and a half years ago,” Civiello claimed. “We’ve learnt from their pains and experiences. Our 3G chips are up-to-the-minute and include all the latest advances. This enables handset vendors to snap on a reliable, low cost 3G solution to an existing design.” Describing Zyray’s chips as “Velcro-like”, Civiello was forced to admit that presently its products only bolt onto Infineon-based handsets, but claimed that still meant he was targeting 15 per cent of the total handset market.

    Overwhelmed with demand

    An area where Zyray may well enjoy an advantage concerns EDGE [Enhanced Data rates for Global Evolution]. Civiello claims that a quick straw poll of Zyray’s potential clients showed that they are currently overwhelmed with demand for a handset which supports both W-CDMA and EDGE. “I’ve yet to see a handset shipping which offers both UMTS and EDGE. However, I fully expect to show a reference design for such a handset at 3GSM 2004 which is aimed at potential customers in both Korea and Europe,” Civiello commented.

    Despite EDGE’s original public profile as purely an American technology, to date eight European network operators have announced their intention to offer EDGE over their networks including Bouygues Telecom in France. A strong appeal for operators is the possibility of a tri-mode handset which can drop down from 3G to EDGE’s enhanced data speeds rather than falling immediately back to GPRS.

    Another very vocal advocate of EDGE is handset reference supplier, TTPCom. The company has established strong relationships over EDGE technology with Intel and ADI for baseband chips and with Renesas for the necessary radio chips. The company argues that the real costs for operators lie not with infrastructure roll out but with proving that such handsets work over their own networks. For that reason TTPCom has been co-operating closely with Rohde & Schwarz and has successfully completed 70 test cases already. Consequently TTPCom confidently predicts that by mid 2004, handsets utilising its EDGE designs and technology will be shipping.

    Many of TTPCom’s customers are Asian based manufacturers and TTPCom argues that this is one clear case where technology provides a market advantage. Presently there is only one EDGE handset shipping — Nokia’s 6200 — and no sign that the other Tier One handset vendors are ready to compete. So, just as NEC has exploited the fact that out of the top vendors only Motorola currently offers a competitive 3G handset, so other Asian manufacturers will seize the opportunity offered by EDGE, TTPCom believes.

    Mainly thanks to its long established relationship with Nokia, Texas Instruments and its OMAP chips in particular have long dominated the handset market. Sales figures for 2003 have been very encouraging. According to the latest report from Forward Concepts — which monitors sales of DSP chips in particular: “After reviewing the Q4 market guidance of market leader, Texas Instruments, and increased projections of cellular phone shipments by the financial community, we have increased our earlier 15 per cent (DSP chip) shipment growth to 20 per cent for 2003. Also, after earlier cutting back our 2004 forecast, we are also raising it from 20 per cent to 25 per cent. [That’s impressive] when you compare this with the 2.3 per cent increase of the [total chip] market over the same period – DSP is a clear winner.”

    However, efforts by the world’s leading chipset vendor, Intel, to crack this market are still failing. “While many of the leading handheld vendors (H-P, Dell, Toshiba, etc) use Intel processors, the company has not yet made any impact on the smartphone segment, ” said Mike Welch, an analyst with  market watchers, Canalys. “The majority of even the small percentage of smartphones that are running a Microsoft OS [Operating System] are currently using TI chips.”

    Nevertheless, as the importance of high-end feature phones and smartphones grows, so does the importance of processing power. “We’ve already seen the effects that the demand for more processing power can have in handsets. We’ve seen handsets that crash as a result. The demand for greater processing power is only going to get stronger,” argued Paolo Pescatore. “Handsets now need MMS capabilities; they need built-in cameras; they need to support Java and to run games. That’s obviously going to create a demand for faster processors and handsets that can truly multi-task — like PCs can. Users are impatient and they aren’t going to want a handset which forces you to wait before you can make a call.”

    Which is perhaps why Intel thinks it might have an edge with its forthcoming chip, code-named Bulverde. It’s capable of running a game originally written for the Microsoft X-box; plus it features ‘QuickCapture’ technology which enables the user to take 4 Megapixel photos and capture video at 40 frames per second. Additionally, the Bulverde will offer ‘Wireless Speedstep’ technology which dynamically adjusts the power and performance of the processor depending on the applications running. Intel expects to deliver the Bulverde chip to customers this year (2004).

    Another major challenge to Texas Instruments is expected to come from StarCore, LLC, a new company focused on DSP technologies formed out of the StarCore joint venture between Agere Systems and Motorola and joined by Infineon Technologies. According to Thomas Lantzsch, CEO with StarCore: “We aim to fundamentally change the competitive playing field through licensing and wide availability of DSP cores. That will help manufacturers…deliver higher levels of performance and miniaturization while accelerating time to market and lowering overall production costs.” According to Will Strauss, Forward Concepts’ president: “Backed by three leaders in DSP and communications, the new company has a great opportunity to proliferate the StarCore architectures to a broad cross section of semiconductor manufacturers, OEMs [Original Equipment Manufacturers] and ODMs [Original Design Manufacturers].”

    Popular development

    A very popular development with consumers would be the introduction of a dual mode handset which works with both GSM and CDMA (cdmaOne) based networks. Indeed, both Samsung and Kyocera (with the KZ850) have shown handsets offering such a capability. Both handsets are based on Qualcomm’s GSM1x technology. They will be aimed initially at the Korean market but crucially China Unicom has also announced its intention to supply dual mode handsets to its own customers — sourced from Samsung and Motorola. Clearly such handsets should soon find their way over to Europe. It’s a good example of how Qualcomm is trying to throw off its ‘CDMA only’ image as a chipset supplier. It recently announced, for example, that 14 leading manufacturers in China, Europe, Japan, South Korea, Taiwan and the United States are busy developing W-CDMA/UMTS products using its chips for scheduled 3G network launches in 2004.

    In terms of new developments in mobile handset design, despite the great media hype, handsets which actually support Wi-Fi in addition to, or instead of, Bluetooth are extremely thin on the ground. Motorola is planning to offer a Wi-Fi enabled handset for use on iDEN networks, for example. “We haven’t heard a lot about it really but since (Wi-Fi) is currently such a cut-throat business, I’m not sure we’d want to play in that market,” commented Richard Traherne, a consultant with the wireless business unit of Cambridge Consultants. His company previously spun off Bluetooth chip specialist CSR (Cambridge Silicon Radio). What Traherne did predict was that software defined radio could play an important part in future mobile handset designs. “To date, it’s not been massively popular [with handset manufacturers] since it has the disadvantage of an overhead in terms of requiring extra resources,” Traherne said. “Although take-up is presently uncertain it can pay dividends for manufacturers since it would allow them to produce more handset variants per year from the same core engine design.”

    ‘Guts’ of a handset

    A similar approach has been taken by TTPCom with its Cellular Baseband Engine (CBE). The CBE is basically the entire ‘guts’ of a GSM/GPRS handset which runs on a single DSP. Other applications — like running an MPEG4 video clip, downloading an MP3 sound file, or running email client software — can then be run on a separate processor. Pascal Herczog, chief system architect with TTPCom, claims that separating the handset functions in this manner means that handset manufacturers won’t need to re-test a handset for type approval every time they make a slight modification to its design. Crucially, this enables a handset manufacturer to produce a customised handset for one particular network operator, such as Vodafone Live!, from a single, proven handset design. Better still, StarCore, with its SC1400 chip, can physically put a DSP and an ARM processor onto a single piece of silicon — achieving a significant reduction in die size, costs and power consumption.

    While some observers might argue that chip development only relates to a small sector of the handset market, the latest figures for shipments in Europe for 2003 from Canalys prove otherwise. Between them, Nokia, Sony Ericsson, Motorola and Orange managed to ship two million units. So it would appear that chipset developments are bound to become more — not less — important in the coming year.

    Getting the message

    The race is on to capture new 3G customers, but which marketing strategy will prove successful? David Adams looks at how the big players are shaping up in this crucial battleground.

    No-one ever went into the mobile phone industry for a quiet life. The last three years have been just as strange and unpredictable as the three before them, and with the launch of more 3G services likely to come in the next 18 months, placing bets on how the market will look three years from now would be reckless.

    What is certain is that the way that 3G services are presented to potential customers is likely to have a significant effect. Marketing these services is a real challenge, because they and the infrastructure that will be needed to provide them are still in an early stage of development, and because it will be important not to inflate consumer expectations as happened so disastrously with WAP. But even assuming that the road to commercial success in the 3G market is likely to be long and difficult, operators will want to start travelling along it as soon as they can, lest they give faster moving competitors too much of a head start.

    Oddly, the fact that it seems to have taken so long for 3G phones to get off the ground at all may actually work to many operators’ advantage, as it has served to deflate some of the hype and the novelty factor. Operators could even benefit from the way 3 is introducing the public to 3G. Although some have criticised 3 for promoting its packages on the basis of cheap voice rates rather than the 3G services themselves, arguably the conditions in which the company found itself at launch, with a limited infrastructure and with no single 3G application likely to be capable of driving demand on its own, meant they had little choice. Nor should the success they have already had be belittled, according to Mark Cook, executive consultant at Cap Gemini. “It’s easy to bash 3 and it’s not always fair,” he says. “They’ve set themselves some extraordinarily difficult targets and come a long way in a short time. Remember that it took cellular phones four years to reach half a million customers.”

    Others are less generous. Andrew Wyatt, VP of marketing at Intuwave, can’t understand why 3 didn’t start by concentrating on business customers. “I think you have to go back to basics and ask what 3G provides, then identify the people to whom you think that thing would be useful,” he says. “Basically, it provides increased bandwidth, and that’s most useful for business customers. They’re also the people who can afford to pay the sort of money that will make this profitable for the operators. It’s much harder to see the consumer proposition. There are ways 3G could be useful, but there’s no one dominant service, so you need to build a portfolio of services and sell them through the value they offer, rather than for the technology itself.”

    3 is also handicapped (for the moment) by the fact that it is the only show in town. Who can you send your video messages to? SMS is popular because it is ubiquitous, because consumers know how to use it and they know how much it costs. It could be some time before you can say that about 3G services. On the other hand, many of the services don’t depend on ubiquity, a message that the next entrants in the market will surely have to push harder. Andrew Wyatt uses the example of Traffic-i, available on the Symbian platform, which allows users to monitor traffic problems wherever they are. “I love it, and I don’t need anyone else in the world to have that on their phone for it to be useful to me,” he says. But services like this cannot deliver the usage figures that operators will need to justify their investment in 3G licenses. Nor can “goals, gambling and girls”, as the phrase goes, and a target audience of ‘young males with money to burn’ sustain the market forever.

    There are technical issues to overcome too. Recent research suggests that most consumers base their initial buying decision on the handset, but base loyalty to a network on the level of reliability it provides. This has serious implications for 3G services, because even the boys with toys will give up on phones that don’t do what they’re supposed to, or are too difficult to use. Intuwave has recently completed research that showed 30 per cent of customers who’d bought a smartphone didn’t understand all its features, and that 29 per cent didn’t know how to load the applications. And these are the technologically literate early adopters.

    The other problem with having overly complicated phones is that it takes longer for call centre staff to help customers solve problems, if they are able to do so at all. “Quite often these turn out to be problems with the network,” says Andrew Wyatt. “That’s hardly encouraging the customer to trust the service.” Consumers will compare the new phones with what they already have, and all the new features in the world won’t make up for a simple lack of reliability from handset or network.

    And of course the network itself is still under construction. A look at the map of 3’s video mobile network on its UK website reveals a spindly twist of threads stretching from the south-east through most of the UK’s major conurbations that will look familiar to industry veterans. True, this delicate looking web now covers 70 per cent of the population, as the company announced triumphantly in December, but there are vast tracts of the country where it won’t be possible to use many 3G services for quite a while.

    So, if the value of 3G services is difficult to quantify in anything other than abstract terms, if individual applications only appeal to quite different niches of people, and if technical limitations will continue to dog some services for the foreseeable future, then the strategy 3 has pursued of using cheap voice packages as a main marketing tool suddenly looks perfectly logical. And effective. By mid-December it could point to 220,000 customers, with revenue per user of £44.56 per month. The company has supplemented its marketing drive with an effort to make its pricing plans unusually simple and easy to understand, as well as inexpensive, to try and capture as many customers as possible.

    “Fourteen per cent of our revenue comes from data services, and that’s in line with our business plan,” claims 3 spokesperson Ed Brewster. “Over the next few years we expect to see the proportion between voice and data revenues to change until eventually it’s something like 50/50, with the usage of both voice and data having grown and with the customer not spending any more money.” But Cap Gemini’s Mark Cook doesn’t think that can be sustainable forever. “Those prices are based on a need to grab customers,” he says. “At some point those tariffs are going to have to go up, otherwise you can’t stay in that market in a competitive way. And you’re going to have to find a new way to market these services to more people other than boys with toys.”

    This should become easier as more operators enter the market and more services are developed. Vodafone is expected to be the first. Like T-Mobile and O2, Vodafone isn’t saying exactly when services will launch, but it seems likely it will happen at some point in the new financial year. (By the time you read this announcements may have been made at Cannes.) Its data services are up and running already, and, perhaps more significantly, its brand awareness is unmatchable. Chris Solbe, communications manager at the UMTS Forum, believes the brand will continue to grow in importance as a means of differentiation. “If you look at the way Vodafone is selling its Live package at the moment, there’s almost no mention of the technology at all,” he points out. Vodafone and the other operators won’t have to spend as much time brand-building as 3 and already have structures in place to sell to consumer and business users.

    This leads Solbe to speculate that maybe the operators that follow 3 won’t make anywhere near as much of a fuss about launching the new services. “3G may roll out almost by stealth, as the latest stage of a journey that the operators and their customers started some years ago,” he says. He suggests some may try to avoid the technical difficulties of trying to build too broad a customer base too quickly by keeping prices high and concentrating on serving smaller customer bases to start with.

    Solbe also thinks that the way the operators target the new services may have to change. “I think the old idea of a consumer segment and a business segment needs to become a bit more sophisticated,” he says. “There are a lot of people who aren’t really one or the other in the way they use their mobile phones. The operators may have to be a bit smarter.”

    If the operators want to capture more of the mainstream consumer market then retailers will have to carry more of the marketing burden. At present, according to recent mystery shopper-based research, most mobile phone store sales are still made on price, not services — yet another piece of evidence that seems to vindicate 3’s decision to concentrate on cheap rates for voice, as is recent research carried out by the UMTS Forum and the Digital World Research Centre at the University of Surrey that suggests voice remains by far the most important function of a mobile phone to a majority of users.

    Meanwhile, the way marketing interacts with cultural factors also has important effects in the longer term, as is illustrated by changing attitudes to mobile phones in the past decade. Using a phone in public has become more socially acceptable (although within the last year my partner has been thrown out of a pub in Cornwall for answering a ringing mobile) and people in their 20s and 30s who don’t own a mobile are regarded as deeply eccentric by their peers. The UMTS and DWRC research also confirmed that people have closer emotional ties to their phones than to any other piece of technology. Somehow the operators will have to tap into this sentiment.

    Marketing has played its part in these changes. In future it may even be directed at those groups of the population who haven’t yet bought into the mobile dream. “I think as other operators enter the 3G market they may segment consumer offerings in more sophisticated ways,” says Chris Solbe. “At the moment most mobile phone adverts are all pretty much the same, it’s all about young, lively people. I would predict that we’ll start to see some more subtle segmentation, by age, by gender, by all kinds of things. You might start seeing mobile phone adverts aimed specifically at the over 60s. Why not?”

    Further into the future, the levels of technical support operators can offer will have to become a selling point. “If we get to the stage where there’s a mass market then there will be a degradation of the 3G service as more users come onto the network and fight for bandwidth,” says Cap Gemini’s Mark Cook. “How the operators cope with that will provide an opportunity for differentiation.” Andrew Wyatt believes the first operator able to offer some kind of service level agreement will pick up customers.

    It seems likely that by the end of 2004 many operators will have launched or be preparing to launch commercial 3G networks. Doubtless each will be monitoring the activities of its competitors even more closely and suspiciously than before. But although it remains to be seen how many 3G operators the market can support, in the short term the expansion of services and networks can only help make marketing easier. As 3’s Ed Brewster says: “We’re very supportive of a new entrant in the market: two companies driving 3G is better than one.”

    Bringing you Azure-ance

    Operators are already missing out on profits from existing revenues, so how will they cope with the increasingly complex value chains new services and applications will bring? John Cronin, CEO of Azure, tells Keith Dyer how operators can cut losses, trade more efficiently and make sure they are ready for new market models.

    Mobile Europe: Azure is a company that is making a name for itself in the revenue assurance market. For those that haven’t followed the story, can you tell us about yourself and perhaps your place in the mobile market?

    John Cronin: Even though Azure became its own legal entity in April 2003, it’s actually been going for something like 10-12 years as an independent business within BT. We spun out as a profitable organisation and we have got around 200 people in Azure today. So it’s not like we are a normal start up. We’ve got deep knowledge and experts within Azure not only in software but in mobile, fixed and cable communications. Customers in the mobile area include O2, Telenor Mobile and Telfort, to name three. And we’ve even done some work for Manx Telecom on their 3G pilot. So we do understand the mobile business.

    ME: What are the priorities for mobile operators at the moment?

    JC: When you really get down to it, what is a mobile operator? The core of a mobile network infrastructure is equal to a fixed network. What’s different is from the aerials out to the handsets. A lot of the difficulties the fixed operators have had over the years are the same for the mobile companies. Mobile operators are looking at delivering high volumes of content, doing reconciliation, audits and dispute management with no losses, which is exactly the same as we have been doing on the fixed side over a number of years. Two years back figures showed 12.4% losses across the piece for operators and that has now grown to 13.7%. So in other words even if they didn’t sign up more customers, and just got their own systems sorted, then that’s 13.7% of revenue straight to the bottom line.

    ME: What is causing these losses within operators?

    JC: It’s errors to do with processes and internal procedures. Azure is all about revenue assurance within the networks — we look at interconnect, fraud, margin management, route optimisation, mediation management and network integrity. Also, I would suggest people are being more open and putting their hand up and saying, “Yes, we do have a problem here.” Part of what our message has been is that people should get this to board level and have someone accountable within these companies, because it’s been fairly low in people’s priorities with nobody accountable at board level. To me it really fits with the CFO of a company; some elements go to the COO, but these are all inter-linked.

    ME: Are companies now being more open about fraud as a problem?

    JC: A lot of credit card companies have said in the past that there is no credit card fraud, but we know there is. Telcos say there’s no real fraud on their network — they’re not going to tell the world that they’ve got a problem, so it has to be done gently and low key.

    Nearly half of operators have identified fraud as a significant issue for them. Subscription, interconnect, roaming and internal fraud are the biggest areas. 29% of the mobile operators identified fraud that is actually occurring between operator and operator. We have developed a solution for interconnect fraud that no other competitor has. Some of our competitors do interconnect, full stop, or they do fraud on the retail side, but they don’t do the combined package that we do.

    On interconnect we would look at calls lost, calls not billed correctly, maybe calls not rated correctly. There was one mobile operator in Europe that found out, once we’d put an interconnect system in, that they were rating international calls at a national level. When they found that out, they corrected it and had a payback within 11 working days, whereas originally they were looking at an ROI of 18 months.
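
    A rating check of the kind that catches this sort of error can be as simple as comparing the rate actually applied to each call record against the rate its destination class implies. The record layout and rate values below are assumptions for illustration, not a description of Azure's system.

    ```python
    # Hypothetical rate table: price per minute by destination class.
    RATES = {"national": 0.05, "international": 0.45}

    def misrated_calls(cdrs):
        """Yield CDRs whose applied rate does not match the expected rate
        for their destination class."""
        for cdr in cdrs:
            expected = RATES[cdr["dest_class"]]
            if abs(cdr["applied_rate"] - expected) > 1e-9:
                yield cdr, expected

    cdrs = [
        {"id": 1, "dest_class": "international", "applied_rate": 0.05},  # misrated
        {"id": 2, "dest_class": "national", "applied_rate": 0.05},       # correct
    ]
    for cdr, expected in misrated_calls(cdrs):
        print(f"CDR {cdr['id']}: rated at {cdr['applied_rate']}, expected {expected}")
    ```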

    We believe that fraud today on mobile is something in the region of 3-7%. Now on the fixed side it’s round about 3% so already it’s been identified and agreed with the mobile operators as a bigger area — especially as more content becomes available. The value chain is much more complicated in the mobile arena than it is on the fixed.

    ME: How does interconnect differ from margin management?

    JC: I see margin management as really route optimising — being able to look at profitability on the routing of calls, and the trades people have made, as well as on the billing side. Where we’re going with this is to get to real time in terms of settlement. So we do it electronically and say, “This is the route we have agreed, as soon as that’s done I’ve invoiced you and hopefully you’ll pay me straight away” — helping cash flow and cutting down on manual intervention.
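
    As an illustration of the route-optimisation side of margin management, the sketch below picks the interconnect route with the best margin for a destination. The carrier names and prices are invented.

    ```python
    # Hypothetical interconnect cost per minute offered by each partner.
    routes = {"CarrierA": 0.040, "CarrierB": 0.035, "CarrierC": 0.051}
    retail_price = 0.080   # what the operator bills its own customer per minute

    def best_route(routes: dict, retail: float):
        """Return the route with the lowest cost, and hence the highest margin."""
        carrier, cost = min(routes.items(), key=lambda kv: kv[1])
        return carrier, retail - cost

    carrier, margin = best_route(routes, retail_price)
    print(f"Route via {carrier}, margin {margin:.3f} per minute")
    ```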

    ME: Is one of the problems integrating service launches at a marketing level with back office OSS and billing systems?

    JC: The pressure on the mobile companies now is to really get products and services out rapidly — to identify new revenue areas. When you look at data — it stands at about 10-20% of revenue today and that will grow as people find more and more applications. In doing that they can fall down if they do not have the correct processes and procedures. You get marketing coming out with great products but sometimes not ensuring the service or the QoS is in the background and the infrastructure is there right from the interconnect to the billing. At the end of the day this is profit they are losing because they capture the customer yet they are unable to bill them correctly due to not having a proper billing system.

    So it is actual profit operators lose and by just having the correct people, processes and systems in place they can get round that. A lot of people have systems that are in-house, so you ask, “Have they kept them innovative? Have they become legacy?” The benefit of us is that, because we do this for several operators globally, we can make sure we are ahead of the game in terms of what needs to be done to actually help.

    They’ve got to make sure that at the end of the day whatever they carry, they bill for. Some customers are very good and have a policy of CDRs of zero loss but that’s quite hard for mobile operators to do today because of the complexity and the evolution of where mobile operators have come from. They do need to be much more aware and actually put the systems and processes in place to combat that type of scenario.

    ME: Yes, with that ever-lengthening value chain the complexity just increases and mediation becomes ever more important as elements stack up on the network.

    JC: The solution to the complexity of the supply chain is being able to tag and identify that traffic and who gets a slice of that. I guess because the supply chain is quite complex the operators have got to get down to premium levels of service — platinum, gold and silver services that people sign up for — because bandwidth is quite a scarce resource in the last mile, whether it’s mobile or fixed. I think that’s where we’re part of that solution, though not all the solution. You need to get everybody in the same room and agree this is what we will do. So I do think the operators have to look at network local loop management more than they are doing, and come up with solutions to that, and we recognise that we’re players within that.

    Mediation really is the hub of all of it, everything else spins off that, especially going from voice into IP. It’s looking at the events within the network and being able to record them and bill them that is the operators’ real challenge at the moment, ensuring the integrity of their networks.

    ME: Are there different ways OSS and billing companies can work with operators to help them keep up with their evolving needs?

    JC: We go from bespoke solutions to bureau and co-sourcing. Today a lot of the operators are struggling for capex and we have a model that is opex. Opex means it’s pay as you go: as we carry the traffic, you pay as you grow, which makes it easier for them.
    We will also look at a model of revenue share or savings of revenue. People like the sound of that, but when they see the size of that they say, “No we’ll go to pay as you grow because it’s a lot less!”

    What we did with BT was take its interconnect experts and outsource them to us. So what we are really all about is providing a service. We do licensed sales, the same as some of our competitors, but we believe a lot of companies will go in and do a report, and then you’ve got to find someone to implement it, to put the procedures, processes and systems in. We will do a report, but we will also do something about it: take the report and ensure that we deliver against it. We can even take a batch of people’s CDRs and run them through our systems. Because we are number one worldwide in fraud bureau services and number one in interconnect bureau services, and because we already have that up and running, we can run the processes through our existing systems and identify some of the losses. We do a pilot, then roll it out, and if a carrier operates in several countries we can put it into one, then a second, and so on. Over the years we have rolled out across Europe, in that manner, to all the companies that were part-owned by BT.

    Where I see us going is that we’ll even take on outsourcing: if people have got a resource, or part of a system, we’ll take that across to us and then put an SLA in place to manage it on behalf of that particular customer.

    ME: I hear you are also working on a clearing-house system?

    JC: Yes. One model that we are working on is a clearing-house. So rather than the Western model, where operators have each had their own system in place, what we’ll now say is: put that into a clearing-house so all the operators can share the cost. It helps with cashflow, because the clearing-house does the netting of traffic sent, and it cuts down on disputes because it’s one integrated solution. Any errors or losses are identified swiftly, because we are in control of the call end-to-end within the clearing-house, so it’s much better in payment terms.
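
    The netting itself is simple to illustrate; in the sketch below (operator names and amounts invented), each pair of operators’ bilateral charges are offset so that only the balance changes hands.

        # Hypothetical clearing-house netting: offset what each pair of
        # operators owes one another so only the net balance is settled.
        # Operator names and amounts are invented for illustration.
        charges = {
            ("OpA", "OpB"): 120_000,   # OpA owes OpB for terminated traffic
            ("OpB", "OpA"):  95_000,   # OpB owes OpA
        }

        def net_settlements(charges):
            """Collapse bilateral charges into single net payments."""
            settled = {}
            for (payer, payee), amount in charges.items():
                if (payee, payer) in settled or (payer, payee) in settled:
                    continue
                balance = amount - charges.get((payee, payer), 0)
                if balance > 0:
                    settled[(payer, payee)] = balance
                elif balance < 0:
                    settled[(payee, payer)] = -balance
            return settled

        print(net_settlements(charges))   # {('OpA', 'OpB'): 25000}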

    ME: Can you see this model working in Europe?

    JC: I can for MVNOs, because that’s a whole new start-up. MVNOs are experts in customer service, database management and customer loyalty projects, but we can actually put together a clearing-house they could all share. Another application might be for any new product set that is common across several operators — they could look to do that.

    ME: It’s a forward thinking approach.

    JC: We are always looking at the future. First and foremost we looked at the architecture of the company, its skills and sector experience, and at where this is all going. In areas like wholesale billing and margin management there is a space: there is not really a clear wholesale biller, although there are lots of retail billers. So that’s our capability and where we’re going.

    This may aid roaming, of course, but more importantly it will enable use across spectrum bands within a network, where operators are using lower frequencies for coverage and higher frequencies for urban capacity. It may also mean being able to operate in both TDD and FDD spectrum.

    One company, IPWireless, is using Altair’s Software Defined Radio (SDR) baseband processors to build LTE devices that can operate at specific frequencies. The IPWireless LTE device will incorporate Altair’s FourGee-3100 baseband and FourGee-6200 RFIC chipsets. The FourGee-3100 is a 3GPP LTE baseband processor that supports LTE Category 3 (CAT-3) throughputs of 100Mbps downlink and 50Mbps uplink. The chip implements a 20MHz MIMO receiver and is based on a proprietary SDR processor which offers increased performance at reduced power compared with traditional DSP cores. The chipset supports both the FDD and TDD versions of the LTE standard and has undergone interoperability testing with OEMs.
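
    As a rough sanity check on the Category 3 figure, a back-of-envelope sum using standard LTE frame parameters (not Altair’s own numbers) gives a raw physical-layer ceiling of roughly 200Mbps for a 20MHz, 2x2 MIMO, 64QAM carrier, before control and reference-signal overhead and the category cap bring the usable rate down to around 100Mbps.

        # Back-of-envelope LTE downlink ceiling for a 20 MHz carrier
        # (standard frame parameters; illustration only, not Altair's figures).
        resource_blocks = 100          # 20 MHz carrier
        subcarriers_per_rb = 12
        symbols_per_subframe = 14      # normal cyclic prefix, 1 ms subframe
        bits_per_symbol = 6            # 64QAM
        mimo_layers = 2                # 2x2 MIMO

        raw_bits_per_ms = (resource_blocks * subcarriers_per_rb *
                           symbols_per_subframe * bits_per_symbol * mimo_layers)
        print(f"Raw PHY ceiling ~{raw_bits_per_ms / 1000:.0f} Mbps")  # ~202 Mbps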

    Jon Hambidge of IPWireless said that the first product has been designed with a specific operator in mind. It will operate at 800MHz, 1800MHz and 2.6GHz, and will be available in the final quarter of this year; later devices would cover the US 700MHz bands, he said. Hambidge cited Vodafone Germany as one operator that is looking at such a deployment of LTE across different spectrum bands.

    With experts now expecting up to 15 LTE radio bands, the key for device OEMs will be to have platforms that let them select radios accordingly. Software-defined radios and modems lend themselves to this. They also reduce power consumption – a critical factor as the industry looks to move from dongles to actual LTE handsets.

    Icera Wireless is one company that has demonstrated multimode soft modems, showing LTE/HSPA interworking on its commercial HSPA sticks at Mobile World Congress this year, using its Adaptive Wireless feature set and Livanto soft baseband. The company recently took on $45 million in financing to accelerate its product development.

    An Icera spokesperson said that its next product would be out before the end of the year, and would support both LTE and HSPA.

    Another company targeting the soft modem space is UK start-up Cognovo, a company initially founded by TTPCom executives, which later incorporated ARM’s Ardbeg Vector Signal Processor activity.

    Cognovo’s CEO Gordon Aspin said that the ARM VSP enables the company to design a Software Defined Modem platform that is dimensioned for handsets and portable devices capable of LTE Category 4 (150Mbps) but also scales to support multi-mode operation with other standards.

    This puts it ahead of Icera, he said, describing Icera as a 1.5G version of the SDM.
    Aspin added that another advantage of the SDM is that its performance can be upgraded even once the device has been shipped. He said you could even see “white label” modems developed that have software downloaded onto them later. That raises the question, then, of who pays for the IPR if software is downloaded after the manufacturing stage?

    “It’s potentially a very disruptive technology. You could create devices that have the world’s best wireless processor but independent of any particular air interface.”

    Tuned to the key of RF


    As 2G cellular network operators focus on wringing higher performance out of existing assets, the key to network optimisation may well lie in tight RF footprint management coupled with sophisticated network monitoring tools. Ellen Gregory of Relate Technical Communications explains.

    After two decades of rampant development, the cellular industry may now be in a period of consolidation and gradual transition; but it is also a period of extraordinary challenge. Not only do consumers and business users have high expectations of the quality and availability of cellular services, but the demand for ‘vertical’ technologies — such as data services overlaid on existing 2G networks — is escalating. Operators, too, regard 2.5G services as a potential means of growing their businesses — particularly in mature markets where subscriber take-up has petered out.

    As capital investment is minimised and operators seek to maximise returns from existing assets, network planning and optimisation have become more critical than ever. Yet, the more complex the network, the more layers of services, the greater the number of parameters that have to be considered and made compatible. With traffic patterns shifting so readily and networks interfering with each other, the art of network optimisation has, to the uninitiated, started to look like ‘black magic’.

    “One of the earliest things that we learned in the days of analogue was that a cellular system never moves out of the design phase,” says wireless communications consultant, John Smyth, who spent 15 years as National Manager of RF Systems, Mobile Networks, for Australian-based telecommunications company, Telstra. “Infrastructure is being added, traffic patterns are continually changing — so you need to have flexibility within the system to cope with it.”

    System flexibility is perhaps the credo of network optimisation, which is a constant balancing act between coverage, capacity and the increasingly important quality of service. “Because of this demand for flexibility, operators need to keep constant track of network performance — twenty-four seven,” says Smyth. “This is the first step in optimising a network.”

    Head of radio planning systems for T-Mobile in Germany, Wolf Mende, concurs with this view. Key performance indicators (KPIs) such as hand-off success rate, call drop rate, hold time and congestion are continuously monitored to provide indicators of areas that might require tuning. “We also take dedicated field strength measurements of the involved base stations where they are required,” Mende says. “These provide an image of the real interference situation in a network, as opposed to what we might have predicted using models.”
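
    A minimal sketch of that kind of KPI screening, with invented cell names, figures and threshold: flag any cell whose dropped-call rate drifts above the tuning threshold.

        # Hypothetical KPI screen: flag cells whose dropped-call rate exceeds
        # a tuning threshold. Cell names, figures and threshold are invented.
        drop_rate_threshold = 0.02     # 2% dropped calls

        cells = {
            "cell-101": {"calls": 5400, "drops": 80},
            "cell-102": {"calls": 6100, "drops": 190},
            "cell-103": {"calls": 4800, "drops": 60},
        }

        for name, stats in cells.items():
            rate = stats["drops"] / stats["calls"]
            if rate > drop_rate_threshold:
                print(f"{name}: drop rate {rate:.1%} - candidate for retuning")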

    Co-channel challenge

    Both Mende and Smyth acknowledge that one of the greatest challenges of network optimisation is controlling RF interference. In GSM networks, co-channel interference degrades audio quality by masking low level carrier signals; in CDMA-based networks, capacity is depleted by interference, which raises the noise floor. Either way, the result is inferior network performance and dissatisfied users.

    “In mature networks, it’s not about coverage. It really boils down to managing interference,” says Smyth. “The object is to put the signal where it’s wanted, and keep it from where it’s not wanted. So managing RF is one of the basic steps in planning and subsequently optimising a network.”

    “It’s a kind of puzzle,” adds Patrick Nobileau, Vice President of Base Station Antenna Systems for wireless technology group, Radio Frequency Systems (RFS). “With so many cells, if you want to optimise your network, you have to make sure that you transmit the energy without creating too much of an overlap. This is particularly important for CDMA systems, where the same frequency is used for each cell.”

    Careful frequency planning provides a measure of transmission quality on a macro scale; however, troubleshooting problem areas invariably leads to a spot of local tuning. In such cases, it is the antenna beam that can be adjusted, says Mende. “We use information from the live system for measurements. Once the results are analysed, we can decide how to change parameters such as the antenna direction, downtilt, and transmission power.”

    However, mere tilting of the antenna beam can be a cause, rather than a cure, of co-channel interference. Without rigid control of the RF energy generated by an antenna, the spurious side and rear lobes can be thrust in the direction of neighbouring or nearby cells, creating the potential for interference. In mature markets, where there are many coexisting — and co-located — services and operators, cell interference issues abound, providing many headaches for network optimisers.

    RF where it’s useful

    The need for improved control of RF energy has led to ongoing developments in antenna technology aimed at reducing spurious emissions and providing tighter control of the antenna footprint.
    “Antennas play a critical role in network optimisation — they are a major part of RF management,” says Vibhore Bharti, Manager RF Planning with Indian cellular operator, Idea Cellular. “Improving antenna efficiency — the ability to control frequency pollution — is a vital element.”

    “What is needed is a clean propagation of RF energy — putting the energy where it’s useful and not where it’s unwanted,” says Nobileau. “The suppression of side and rear lobes, and footprint tailoring using electrical tilt, are therefore very important. This is particularly so as cells become smaller and smaller — the more you tilt, the greater the potential for interference.”

    The impact of interference in GSM networks is generally measured as the ratio of carrier signal (C) to co-channel interference level (I) — or the C/I ratio — where minimum C/I values for acceptable voice quality are 9 to 10 decibels. It follows that reducing interference will improve C/I, and in turn yield improvements in audio quality and network capacity.
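
    For readers who want the arithmetic spelled out, C/I in decibels is ten times the base-10 logarithm of the ratio of carrier power to interference power; the sketch below uses invented power levels to show a cell just clearing the 9dB floor.

        # C/I ratio in decibels from carrier and interference power (in watts).
        # The power values below are invented purely for illustration.
        import math

        def c_over_i_db(carrier_w, interference_w):
            """Return the carrier-to-interference ratio in dB."""
            return 10 * math.log10(carrier_w / interference_w)

        print(f"{c_over_i_db(2.0e-9, 2.4e-10):.1f} dB")   # ~9.2 dB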

    Nobileau reports that studies show a strong correlation between C/I improvement and the magnitude of suppression of antenna upper side lobes. Maximising side lobe suppression has therefore become a focal point for antenna designers and manufacturers in the quest for interference reduction. Where once side lobe suppression was typically in the range of 12 dB, the target is now 18 to 20 dB — with RFS achieving typically better than 20 dB across the entire tilt range with its Optimizer antenna series.
    “The smaller the side lobe compared with the main lobe, the better the antenna will fight co-channel interference,” says Nobileau. “But if it’s not the first upper side lobe that potentially interferes, it could be the second — so every unwanted signal needs to be as small as possible.”

    Smyth agrees that the first step of RF management should take place at the antenna, citing electrical downtilt capability as an advantage for cell planning and management of modern mature networks. While mechanical tilting of the antenna beam is simple to implement, it has little impact on spurious side radiation, and may even increase interference from the rear lobes. Electrical tilting technology, on the other hand, tilts all lobes — main, rear and side — to the same angle. This means that side lobe radiation can be managed across all tilt angles, providing greater interference control.
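
    As a rough illustration of the cell-planning arithmetic behind downtilt (a standard geometric rule of thumb, not a method attributed to Smyth), the angle needed to aim the main beam at the cell edge follows from the antenna height and the intended cell radius, with half the vertical beamwidth often added as a refinement; all figures below are invented.

        # Rule-of-thumb downtilt: angle needed to point the main beam at the
        # cell edge, given antenna height and target cell radius. A common
        # refinement adds half the vertical beamwidth; all figures invented.
        import math

        antenna_height_m = 30.0
        cell_radius_m = 1500.0
        vertical_beamwidth_deg = 6.5

        tilt_deg = (math.degrees(math.atan(antenna_height_m / cell_radius_m))
                    + vertical_beamwidth_deg / 2)
        print(f"Suggested downtilt ~{tilt_deg:.1f} degrees")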

    Point of contact

    “The base station antenna is the primary point of contact with the customer,” Smyth says. “It seems strange to me that operators would spend hundreds of thousands of dollars acquiring a site and developing it, only to quibble about the extra hundreds of dollars invested in antenna technology and its maintenance. What you’ve got at the base station counts for nothing until you actually launch it out into the ether and point it in the direction of the customer.”

    This raises an interesting issue to be considered by network operators: the merit of upgrading existing antenna technology to higher performance antennas. According to Nobileau, doing so provides an incremental improvement in capacity that defers the necessity of deploying next generation services for the sole purpose of meeting capacity demands.

    Smyth believes that this path should be attractive for operators seeking to maximise returns from existing assets. “By replacing existing antennas with higher performance antennas, you achieve the flexibility to cope with changes at a much lower cost. The level of general interference will go down and drop-out rate will reduce — holding times and utilisation of the network will go up.”

    Of a Manhattan, USA, operator that recently replaced all the base station equipment across an entire network, he adds: “I would have liked the opportunity to prove that by spending about a tenth of the money in upgrading to more advanced antenna systems, they could also have achieved a significant improvement in service and increase in ultimate capacity.”

    On the other hand, T-Mobile’s Mende is more cautious: “Better antennas always help,” he says. “As new sites are deployed, it’s always good to look for the antenna solution with the best interference suppression. But it’s always a question of whether to replace existing antennas!”

    Evolving challenges

    As networks continue to evolve and new technologies emerge, the role of optimisation will only increase — in terms of both regularity and importance. For instance, it is anticipated that 3G services such as real-time video transmission will lead to dramatic and unpredictable cell traffic-loadings, adding coverage challenges to the optimisation puzzle.

    T-Mobile is soon to launch its German UMTS network, and Mende is keen to take note of its operation. “There are many challenges awaiting us,” he says. “We expect 3G networks to be more sensitive to interference than GSM because the network dynamics are different. We’ve computed it all theoretically, but now the time comes to see how close we are.”

    The anticipated optimisation challenges of the future have fuelled the demand for remote antenna tilting technologies — essentially, the ability to adjust antenna downtilt from locations other than the top of the tower. According to Smyth, the benefits of remote tilting are many: from eliminating the cost of hiring equipment for tower access, to avoiding disruption to other operators with base stations at the same site, and streamlining the regular tilting operations that might be required during network redesign.

    “The futuristic vision is for operators to dynamically adjust the network as traffic patterns change throughout the day,” says Smyth. “I envisage there might emerge a set of ‘presets’ for various traffic situations; where all antennas move in a coordinated fashion to a particular cell plan. This would apply particularly to CDMA-based systems.”

    Mende has a similar dream, where ‘closed loop’ control between network monitoring, planning and operation exists. “It would be advantageous to follow the network dynamics — seasonal and weekly changes in behaviour. We can see where the traffic goes and then optimise those areas. In order to achieve this quickly, we’d need remote control of the antenna systems,” he says.

    Nobileau’s view is that remote tilt is just another essential feature of the multi-functional high performance antenna. “First you need an antenna that can control the side lobes; second, you need to be able to activate it remotely; third, you need to be able to feed the antenna with the network management information needed for the best possible optimisation scenario,” he says.

    For the moment, however, the bottom line is that operators in many countries are having to deal with mature 2G markets where subscriber growth has flattened out, and this month’s balance sheet is the commercial reality. It then comes down to whether or not the network can handle the demands placed upon it — particularly as they are compounded by the additional demands of GPRS and EDGE services.

    The technology is available: not only high performance antennas for controlling interference, but also sophisticated network monitoring tools to complete the optimisation loop. The key is to take a long term view; the implementation of new technologies now will provide immediate results as well as equipping networks for the future.

    The race is over?


    It didn’t seem likely, but vendors are now claiming that every European operator is considering EDGE and even going so far as to declare the discussions over. Keith Dyer reviews how the race was run.

    The old Irish line about giving directions — “I wouldn’t start from here”— is perhaps apposite to the story of how the mobile industry got to where it is with EDGE today. But is it the arriving, not the travelling, that counts? Or is the journey itself important? That would depend on how much time and money you thought you had wasted to get to where you wanted to be…

    There’s not exactly a re-writing of history going on about EDGE at the moment, but there is perhaps a little bit of revision, a few nips and tucks around the eyes to ease out a few creases. You don’t have to be overly blessed with memory to recall that, in the late 1990s, EDGE was sold as a definite intermediary stage between GPRS and W-CDMA. There were many PowerPoint slides around showing an arrow moving up and to the right from 9.6kbps GSM through GPRS (or HSCSD first, in some presentations), then EDGE, and into a 3G future in about, oh, early to mid 2002 at the latest. You also don’t have to be blessed with more than a healthy dose of cynicism to think that perhaps the vendors themselves thought this was a fine idea, because operators could pay for GPRS (an upgrade to GSM) and then EDGE (as its name suggests, an evolution and enhancement) before making the jump into full W-CDMA itself.

    But then the road got a little more twisty. GPRS proved not to be a mere software upgrade. There were handset and interoperability issues, and it took years for GPRS handsets and services to become widespread. Meantime the markets had gone very sour, and the operators had gone slightly queasy about the debts they were saddled with as a result of spectrum licence fees. Even those who got spectrum on the cheap were looking down the barrel of perhaps a billion-dollar programme to roll out UMTS across a larger European country. So the view on EDGE changed, and operators started to question whether it was strictly necessary to have GPRS and EDGE and WCDMA. For a long while there was a host of “EDGE is dead” declarations, and indeed whole market reports from the usual suspects predicting that while EDGE would be useful in new territories, and in the USA, it was worthless to European operators.

    But now, listen to this. Alan Hadden, president of the Global mobile Suppliers Association (GSA), says that pretty much every European operator is likely to make some commitment to EDGE. “We don’t make announcements for them but our real belief is that most will go [for EDGE],” he says.

    Peter Reinisch, vp marcomms for Siemens ICM Mobile and also a GSA executive committee member, says, “One operator ceo recently said that there is now no European operator who can afford not to go to EDGE. Behind the scenes there is a lot of discussion.”

    To try to get behind some of these scenes, Mobile Europe spoke to a strategist at one UK operator who did not want to talk about his own plans, but who did agree with Reinisch that EDGE was back on the agenda.
    “I think there has been a change of perception about EDGE, but a lot of that has been that some people thought that GPRS would be able to do the job that perhaps we will now need EDGE to do. That doesn’t mean there is anything wrong with GPRS, but that perhaps there was an underestimate of the severity of the degradation of the service levels between UMTS and GPRS.”

    One French operator had a slightly different take on it. “It’s not the case that we are now using EDGE where we previously thought GPRS would do. It’s more that we need to be able to offer different levels of service to customers. GPRS has a role, for certain, EDGE has another and UMTS will be on top of that.”

    That is quite a turnaround, so how did we get there? The GSA’s Alan Hadden has seen, through his organisation’s EDGE Operator Forums, attitudes change over the last two years. “EDGE was an example of needing to explain to operators how it could help their business. In the last two years we have been running the EDGE Operator Forums which brought together operators. In the beginning it was vendors who at that time were committed to EDGE, and who showed operators they could do a lot with the enhancement. In rolling that out they could address new market segments, start earning new revenues, influence the market and help pave the way for WCDMA.

    “It was never a case of EDGE or WCDMA, which in a way was the perception. Both are 3G technologies. We began to see in 2003 a much stronger acceptance of EDGE, and now we see that job as more or less complete.”

    Torsten Hunte, business development director for radio access at Ericsson, says the vendors were at least in part responsible for the earlier perception. “The biggest problem we’ve had was that in 1999 we made the mistake of calling it EDGE and not GPRS Release 99. And that started the problem of operators perceiving EDGE as a new technology — which it isn’t. We had to make operators understand it is not a new technology; it is just an improvement in the standard allowing higher data rates in the GSM network. It’s great for data rate and capacity with very, very low investment cost.”
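
    To put the higher-data-rate claim into figures, a back-of-envelope comparison using standard coding-scheme rates (an illustration, not Ericsson’s numbers) shows why EDGE was attractive: 8-PSK lifts the per-timeslot rate to 59.2kbps at MCS-9, against 13.4kbps for a typical GPRS CS-2 slot.

        # Back-of-envelope EDGE vs GPRS peak rates (standard coding-scheme
        # figures; an illustration, not vendor numbers).
        gprs_kbps_per_slot = 13.4      # GPRS CS-2, the common deployment case
        edge_kbps_per_slot = 59.2      # EDGE MCS-9 with 8-PSK modulation
        timeslots = 4                  # a typical multislot handset class

        print(f"GPRS: ~{gprs_kbps_per_slot * timeslots:.1f} kbps, "
              f"EDGE: ~{edge_kbps_per_slot * timeslots:.1f} kbps")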

    “Vendors do not want to sell a crappy solution to make a few bucks. They want operators to make money and be successful, so they can come back and buy again.”
    That EDGE was not in fact an expensive additional cost was one of the key messages vendors were keen to get out at the forums. 

    Reinisch points out, “EDGE is in existing spectrum and allows 3G services to be rolled out immediately, whereas GPRS was intermediate.” Hadden says that the incremental cost of migrating to EDGE from GPRS is about $1-2 per customer covered. This helped make operators understand it was affordable, he says. “It’s certainly the argument they use,” says the UK operator, “and I have heard that number before. But when you’ve got millions of subscribers that’s still not a small sum and operators need to know that their return on investment from that incremental cost is secure.”
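
    The strategist’s arithmetic is easy to reproduce; at the quoted $1-2 per covered customer, a base of, say, 20 million customers (an invented figure) implies a spend in the tens of millions of dollars.

        # Incremental EDGE upgrade cost at the quoted $1-2 per covered
        # customer, for an invented subscriber base of 20 million.
        subscribers = 20_000_000
        low, high = 1, 2               # dollars per customer covered
        print(f"~${subscribers * low / 1e6:.0f}m to ${subscribers * high / 1e6:.0f}m")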

    Reinisch says that getting ROI from the investment is as much about operators changing their own mindset as they move to more advanced services. “Operators always fought a price war, not a services war. This was the main problem they had.
    In Europe we saw more and more operators look at how they will complete their WCDMA rollout, and at how EDGE can in fact help them reduce time to market and capex.

    “TIM was a significant point in April 2003 [when Telecom Italia Mobile announced it was proceeding with an EDGE network with Ericsson]. They acquired spectrum for UMTS but it was not either/or. They are using EDGE to get there quicker, de-risk the investment and earn revenues earlier. The TIM statement was, ‘We will have PCMCIA cards to offer businesses instant access to their networks.’ That’s a major point. The whole thing is marketing, not a technological issue. Operators have to start to segment the market.”

    “The vendors are keen to make out that it is about operators being more savvy, and to an extent that is true. But it’s worth remembering that there were, and still are, technical issues with the equipment as well. For instance, it will be interesting to see how EDGE-compatible handsets hit the market, and how the developers cope with dual-mode EDGE/WCDMA requirements as well,” counters the UK operator strategist. Hadden is confident it is something the market will address as demand increases. “We are seeing that more and more operators will take the step to enhance to EDGE. Then we will see devices coming through to the market that are WCDMA and EDGE capable.”

    For Reinisch the important thing about EDGE is that it will not be a visible “layer” of technology to the end user. “Operators who have launched EDGE don’t necessarily talk about it; it is just there, delivered as services.” Which, for a technology-obsessed industry, is perhaps the biggest step of all.

    Virgin and T-Mobile make peace


    T-Mobile, Virgin Group and Virgin Mobile have settled all outstanding litigation, and established new agreements, including an enhanced telecoms supply agreement running for a minimum 10 years.

    Virgin is also acquiring T-Mobile’s stake in Virgin Mobile, although T-Mobile will retain the right to receive 25% of any value over £550 million, up to a maximum payment of £100 million, in the event of any future sale or flotation of Virgin Mobile within the next two and a half years.
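
    A quick worked example of how that earn-out behaves (the sale values below are invented): T-Mobile would receive nothing at £550 million or less, £50 million on a £750 million sale, and the £100 million cap from £950 million upwards.

        # Earn-out under the reported terms: 25% of any sale value above
        # £550m, capped at £100m. The sale values tested are invented.
        def earn_out(sale_value_m):
            return min(max(sale_value_m - 550, 0) * 0.25, 100)

        for value in (500, 750, 950, 1200):
            print(f"Sale at £{value}m -> T-Mobile receives £{earn_out(value):.0f}m")
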
    The settlement also includes the end of the monthly marketing support contribution paid by T-Mobile. Virgin Mobile will also be able to receive inbound, as well as outbound, call revenues.
    Brian McBride, managing director of T-Mobile UK, said, “This is a great deal for T-Mobile, for Virgin and for Virgin Mobile. It provides substantial benefits for all parties. It’s also been a pleasure dealing with the Virgin team.
    “Any disagreements of the past are well and truly behind us, and we all look forward to a long and mutually rewarding relationship.”
    Tom Alexander, chief executive of Virgin Mobile, said, “This is a new era for Virgin Mobile, one in which it will continue to thrive and prosper through its re-energised network partner and simpler corporate structure. We are delighted to have found a way forward which suits all parties, consigning the distractions of the past to history.”

    Deal brokered as numbers underline Virgin’s importance


    T-Mobile added a total of 7.1 million customers globally during 2003, the operator said, with almost half of that number accounted for by T-Mobile USA. There was also a strong contribution to customer numbers from UK MVNO Virgin Mobile, with whom the operator has finally settled outstanding legal action.

    The operator added roughly the same number of customers as in 2002, giving it an overall total of 61 million customers.
    In Europe, the operator’s home market proved as important as ever, with 705,000 additions (441,000 contract) in the fourth quarter taking total numbers up to 26.3 million, an increase of 1.7 million customers over the year. 
    In the UK it was a slightly different story. Although the operator reported an overall increase of 1.2 million customers in 2003, an increase of 9.7% on the 2002 figure, much of this was accounted for by MVNO Virgin Mobile, which balanced the effect of T-Mobile’s “extensive streamlining” of its direct UK customer base in 2003.
    Virgin added 1.26 million customers itself during 2003, with 506,448 of them joining in the final, Christmas, quarter. Virgin Mobile had 3,644,795 customers at the end of 2003. T-Mobile UK had 13.6 million in total, including the Virgin numbers.
    In the Netherlands there was a 13.2% increase in subscribers, to just over two million customers, whilst in the Czech Republic 400,000 people joined the operator, taking the total to just under four million. T-Mobile Austria showed flat numbers at around two million.
    It was in the USA where most headway was made. Despite the high level of churn, T-Mobile USA added 1.02 million customers in the quarter, an increase of 51.5% on the previous quarter. Over 2003 as a whole, the number of customers increased by over 3.2 million to more than 13.1 million.
