
    Chinese network of thousands of fake shops accused of massive scam

    German cybersecurity lab triggers joint investigation into “vast web of fake shops” that apparently “took money and personal details from 800,000 people in Europe and the US”

    A joint investigation by the Guardian newspaper in the UK, Le Monde in France and Die Zeit in Germany unearthed a network of some 76,000 fake websites offering designer goods. This came after Security Research Labs (SR Labs), a German cybersecurity consultancy, obtained gigabytes of data which it shared with Die Zeit.

    The fake shops appear to have tricked more than 800,000 people in Europe and the US into handing over card details and other personal data such as names, phone numbers, email and postal addresses.

    The fake store fronts claim to offer discounted goods from brands including Dior, Nike, Lacoste, Hugo Boss, Versace and Prada.

    The Guardian stated, “A trove of data examined by reporters and IT experts indicates the operation is highly organised, technically savvy – and ongoing”. The websites are published in multiple European languages and it seems the first ones appeared online in 2015.

    The newspapers estimate that although many are now inactive, about 22,000 are still live.

    The report says more than 1 million ‘orders’ have been processed in the last three years alone. Not all the payments went through, according to analysis of the data, but the fraudsters attempted to process up to €50 million in the three-year period.

    Open RAN: Vodafone, Nokia complete lab trial with Arm and HPE


    Meanwhile, in Washington DC, the National Telecommunications and Information Administration announces $420 million in new funding for Open RAN

    Vodafone and Nokia announced that they have successfully completed an Open RAN trial, working with chip-maker Arm and Hewlett Packard Enterprise (HPE). They used Nokia’s anyRAN approach in the trial to demo an end-to-end Layer 3 (L3) data call. The trial ran at the vendor’s Open RAN Innovation Center in Dallas, Texas.

    The test platform used Ampere – Arm-based general-purpose processors – an HPE ProLiant RL300 server, plus Nokia’s Layer 1 (L1) accelerator, RAN software and 5G SA Compact Mobility Unit (CMU) Core.

    The trial took place over-the-air with Nokia’s AirScale massive MIMO radios on the n78 spectrum band (3.5 GHz band). The Finnish vendor’s MantaRay NM network management system provided a consolidated network view for monitoring and management.

    Diverse suppliers

    During the demonstration, data calls were conducted using commercial user devices. According to Nokia, the trial shows that Arm Neoverse-based processors on HPE servers within the Nokia anyRAN approach can support diverse suppliers while delivering the efficiency of the latest silicon technology for Open RAN.

    Francisco Martin, Head of Open RAN at Vodafone, said, “Vodafone is dedicated to supporting the development and adoption of Open RAN platforms by fostering a diverse ecosystem of silicon solutions.

    “The approach offers numerous benefits, including increased choice, enhanced energy efficiency, higher network capacity, and improved performance in wireless networks. We are excited to collaborate with Nokia, Arm, and HPE in this live demonstration, and the initial results have been promising, paving the way for future commercialization.”

    And not only for Vodafone, it seems, given the company chose to carry out the trial in the US.

    Developments in Washington

    And on that note, the US’ National Telecommunications and Information Administration (NTIA) announced a further investment of $420 million (€390.2 million) for its Wireless Innovation Fund. It has already awarded more than $140 million across 17 projects in its first round of funding.

    Assuming the second round of funding is fully allocated, that will still leave the NTIA with close to $1 billion of the fund’s $1.5 billion total for future investment.

    This second round of funding relates to the development of new radio units. There are two categories available to those wishing to apply.

    In the first, radio suppliers and operators must partner to develop commercially viable, open radio units. The NTIA said it will award between $25 million and $45 million per project in this category.

    Cloud adoption for data ops must focus on cost, agility and innovation 

    Telekom Srbija, Hrvatski Telekom and Celfocus share their experiences around moving data ops and analytics to the cloud in the telco industry

    The discussion took place at our recent Telecom Europe Telco to Techco virtual event, moderated by Analysys Mason principal analyst Adaora Okeleke. Our panellists were Telekom Srbija chief strategy & digital officer Natali Delic, Hrvatski Telekom director of big data analytics Goran Klepac and Celfocus data & analytics offer lead João Barata. 

    WATCH THE FULL SESSION HERE 

    Moving towards self-service analytics in telcos 

    Telekom Srbija’s Delic acknowledged the prevalent hype surrounding Generative AI but emphasised the importance of focusing on fundamental aspects such as data management processes and organisational education. She stated: “There is a huge rise of demand for advanced analytics for quality, generative AI and for AI solutions, but we are still dealing more with fundamentals.”  

    The telco is prioritising efforts to transition towards self-service analytics and optimise the utilisation of technology to create value. “We need to go more to self-service mode, instead of having small groups of people dealing with all the analytics requirements of the company,” she said. Additionally, Telekom Srbija is evaluating on-premises versus cloud solutions to align with its future objectives. Overall, the focus remains on leveraging AI to enhance operational efficiency and improve overall performance. 

    Hrvatski Telekom’s Klepac underscored the recurring pattern of technological hype, emphasising the need to prioritise data organisation and democratisation over adopting new technologies like AI. “What is the most important thing? You have to think about the data: how to organise the data, how to make data democratisation,” he said. He cautioned against solely relying on sophisticated AI tools like TensorFlow without a coherent strategy for data integration, warning that without such a strategy: “you will get garbage in, garbage out.”  

    Choose your cloud battles 

    Klepac warned that cloud processing is not cheap, and that operators have to understand which kinds of workloads are best suited to it. He gave the example of a big data environment whose Oracle issues were impacting its ability to support some projects. The company he was consulting for moved the work to the cloud and, while the results were better, they were still below expectations. “You can use, for example, traditional algorithms like associative algorithms or CN2 or something like that, but even if you’re moving to cloud the results will still be the same,” he said.  

    In this case the operator used a frequent pattern (FP) tree to develop a local solution and solved the issue quickly, using cloud services only for some pre-processing. “The fact is that you have to understand the processes and you have to deeply understand how algorithms are working so that you can use the best from both worlds.” 
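    To make Klepac’s example more concrete, here is a minimal sketch of local frequent-pattern mining using FP-growth, the algorithm behind the frequent pattern tree he mentions. The library (mlxtend), the toy event data and the support threshold are illustrative assumptions, not details from the panel.

```python
# Minimal local FP-growth sketch (assumed library: mlxtend; toy data, not the operator's).
import pandas as pd
from mlxtend.preprocessing import TransactionEncoder
from mlxtend.frequent_patterns import fpgrowth

# Hypothetical "baskets" of customer events, e.g. exported after cloud pre-processing.
transactions = [
    ["roaming", "data_topup", "video_stream"],
    ["roaming", "data_topup"],
    ["video_stream", "data_topup"],
    ["roaming", "video_stream", "data_topup", "support_call"],
    ["support_call", "data_topup"],
]

# One-hot encode the transactions into a boolean dataframe.
encoder = TransactionEncoder()
onehot = pd.DataFrame(encoder.fit(transactions).transform(transactions),
                      columns=encoder.columns_)

# Build the FP-tree and extract itemsets that appear in at least 40% of baskets.
frequent = fpgrowth(onehot, min_support=0.4, use_colnames=True)
print(frequent.sort_values("support", ascending=False))
```

    Run on a laptop-sized extract, this kind of mining reflects the “best from both worlds” split Klepac describes: heavy pre-processing in the cloud, the pattern mining itself on local infrastructure.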

    Delic discussed Telekom Srbija’s current project evaluating the transition of workloads to a hyperscaler cloud provider, emphasising the need for a thorough business justification to avoid potential cost without benefit. She said: “We created a plan from the beginning to see whether this will be justifiable from a business perspective.” 

    Two primary factors are propelling the telco’s cloud consideration: firstly, the flexibility and scalability offered by the cloud compared to on-premises infrastructure, particularly crucial for rapid model iteration. She said: “It’s not always possible to have elasticity when you’re on-premises which you have when you’re in the cloud.” Secondly, she stressed the cloud’s potential to accelerate innovation by providing easier access to experimentation and proof of concepts, particularly in AI solutions.  

    She said: “We are hoping to have easier access and an easier way to do a lot of experimentation and proof of concepts with many of the available cloud solutions.” This shift aims to prioritise business agility over infrastructure concerns, anticipating a faster pace of innovation facilitated by cloud adoption. 

    Using the cloud to test an on-prem scenario 

    Celfocus’s Barata highlighted three key points about moving to the cloud: the significance of operational costs, covering both cloud running and data platform maintenance expenses; the necessity of scalability in managing increasing data volumes as organisations become data-driven; and the importance of business agility, driven by AI and Generative AI use cases that prompt requests for new IT infrastructure and capabilities.

    He emphasised the need to reduce data platform maintenance costs and stressed that scalability and business agility are the primary drivers behind requests to move to cloud platforms. “When we see our clients, they say they are trying also to reduce the cost of maintaining specific data platforms, for infrastructure purposes but also for maintenance purposes,” he said, concluding, “But if I need to select just one problem, scalability or business agility is driving more of these cloud requests to move to platform.” 

    Data privacy is top of mind 

    Telekom Srbija’s Delic addressed several crucial points regarding cloud adoption and data privacy. Firstly, she emphasised the challenge of securely connecting on-premises environments with hyperscalers, particularly in hybrid environments. She noted, “It’s not realistic that everything will be soon in the cloud, it will be combined.”  

    Secondly, she highlighted the local interest in data privacy and the need for meticulous planning and compliance with laws. “Most of the companies do have some processes around it,” she said. Delic underscored the importance of evaluating data processing and ensuring compliance with legal requirements when considering cloud migration. She concluded by stressing the significance of data management and processing in the context of cloud adoption, emphasising that it’s more about processes than just technology. “So it’s not so much about technology. It is a little bit about whether we need anonymisation, whether we need data masking, because for some use cases this might be required. But in general, it’s again more about the process of how we manage this data and how available it is for analytics in general.” 

    WATCH THE FULL SESSION HERE 

    The discussion also explored:  

    • GDPR compliance is crucial when moving to the cloud, prioritising data protection and privacy 

    • Organisations must minimise risks, especially in cloud environments, despite their inherent security 

    • Data preparation and cleaning are essential for data science and AI projects, with significant cost-saving implications 

    • Planning data migration to the cloud strategically can optimise costs by eliminating redundant or unnecessary data 

    • Smart processing methods can further reduce costs and enhance efficiency in cloud usage 

    • Effective planning and data quality preparation are key to cost-saving and efficient cloud utilisation 

    • The cloud serves as an ideal sandbox for testing methodologies and proof of concepts but requires conscious cost management 

    • Proper data quality preparation can significantly reduce processing costs and maximise cloud benefits 

    • The role of AI in data analytics 

    • Why telcos need to be wary about the hype surrounding AI 

    • On-prem versus cloud for data and analytics 

    • Using cloud for pre-processing  

    • Testing models and running tests in parallel 

    • Eliminating infrastructure concerns for faster innovation 

    • Advancing towards security objectives through innovation and process improvement 

    WATCH THE FULL SESSION HERE 

    Freenet launches eSIM-based data roaming with 1Global 

    The new service freenet Travel will be available in more than 100 countries and includes a variety of data packages

    German service provider freenet has teamed up with eSIM operator 1Global to launch its own mobile data roaming service, freenet Travel, which new and existing users can activate on their smart devices.  

    To use the service, customers download an eSIM profile, free of charge, using a QR code on the telco’s website. With the activated eSIM profile from freenet Travel, the user is automatically sent a roaming offer via SMS when moving to a country outside the EU. According to freenet’s website, 1GB, 5GB, 10GB and 20GB packages are available.  

    Once downloaded, a 1Global eSIM can be used as many times as needed in any of the countries supported by the company’s network. “By using 1Global eSIM capabilities, we are embracing a future-proof, modern technology. This enables us to further expand our positioning around digital lifestyle offerings,” said freenet head of digital lifestyle Salomé Andrade Pohl.  

    Depending on the country, freenet promises savings of up to 50% compared to alternative offers. For example, 1GB of data volume for seven days in Switzerland starts at €3.99, while for the US and Turkey the same volume can be booked for €5.99. 

    “We are among the few providers worldwide capable of supplying and configuring eSIMs ourselves. Through this partnership, millions of freenet customers can now book various data packages for countries outside the EU, allowing them to communicate cost-effectively during their travels,” said 1Global founder and CEO Hakan Koç.  

    Phoenix from the Truphone ashes 

    Founded in 2022, 1Global was born out of adversity: it emerged from the UK-based mobile network firm Truphone, which was put up for sale because one of its backers was a certain Roman Abramovich. At the time it was reported that Hakan Koç and Pyrros Koussios picked up the asset for the princely sum of £1. Truphone employed around 450 people and had coverage across 200 countries, with 400 direct network agreements in place. 

    Headquartered in London, with an R&D hub in Lisbon, 1Global now has 400 employees across 12 countries and has been granted fully regulated MVNO status in nine of them. 

    Besides telcos, the company is also targeting fintech and travel firms. For example, fintech Revolut has integrated the 1Global API to give fast access to its low-cost global mobile network. Users can install the eSIM from within the Revolut app in under a minute. This gives them complete control of their connection, including a display of data consumption. They can also use the app to buy a data plan even when they’ve already run out of allowance. 

    Freenet instead chose the third party/affiliate relationship with 1Global, using the QR code that directs the user to download their eSIM and 1Global software. 1Global has developed a complete eSIM ecosystem including Remote SIM Provisioning (RSP) and Entitlement Servers, ensuring the “highest quality throughout the eSIM deployment lifecycle”, according to the company. 
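    For readers curious what an API-level integration like Revolut’s might look like in practice, the sketch below shows a generic eSIM-provisioning request. It is purely hypothetical: the endpoint, fields, token and response shape are invented for illustration and do not describe 1Global’s actual API.

```python
# Purely hypothetical sketch of an eSIM-provisioning call.
# The base URL, token, payload fields and response shape are invented for
# illustration and are NOT 1Global's real API.
import requests

API_BASE = "https://api.example-esim-provider.com/v1"  # hypothetical endpoint
API_TOKEN = "replace-with-partner-token"               # hypothetical credential


def order_esim(customer_id: str, plan_id: str) -> dict:
    """Request an eSIM profile for a customer and return its activation details."""
    response = requests.post(
        f"{API_BASE}/esims",
        headers={"Authorization": f"Bearer {API_TOKEN}"},
        json={"customer_id": customer_id, "plan_id": plan_id},
        timeout=10,
    )
    response.raise_for_status()
    return response.json()  # e.g. activation data to hand to the device's eSIM download flow


if __name__ == "__main__":
    print(order_esim("cust-123", "travel-5gb-30d"))
```

    In a real integration the returned activation data would typically be handed to the handset’s eSIM download flow, which is what lets an app install a profile in under a minute.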

    “As a telecommunications technology innovator, we have created our own global telecommunications network and can issue eSIMs and an International Mobile Subscriber Identity (IMSI),” said Koç. “This distinguishes us from vendors who do not have their own network infrastructure and simply resell data packages.” 

    Deutsche Glasfaser opts for Nokia to deploy fibre broadband

    The vendor’s suite of fixed access and IP network products plus professional services will help the German altnet design an optimised, “highly automated network”

    Deutsche Glasfaser says it has partnered with Nokia to further its ambition of tripling its installed base of 2 million homes passed. Deutsche Glasfaser is funded to the tune of €7 billion in investment by its owners EQT and Omers.

    The two plan to build a country-wide fibre broadband network in Germany, using Nokia’s suite of fixed access and IP network products. This includes supporting Gigabit passive optical networks (GPON), 10Gbps Symmetrical PON (XGS-PON) and 25G PON on the same fibre, along with its Broadband Network Gateways (BNG).

    Nokia’s kit will replace two competitors’ installed base of IP core, BNG and edge routers, including related services.

    Design and validate

    The vendor will lead the design and validation of Deutsche Glasfaser’s new network architecture, and customise its IP domain controller (the Network Services Platform, NSP) and its fixed access domain controller (the Altiplano Access Controller). Nokia will also provide services for the installation and commissioning of its own systems.

    Pascal Koster, COO at Deutsche Glasfaser, commented, “10 Gbps XGS-PON access technology will be deployed as standard across our network from April 2024 onwards in partnership with Nokia. The first XGS-PON customer connections are already live in the area of Neuwied…[introducing] symmetrical broadband services up to 10 Gbps for homes and businesses.”

    The picture shows Linz am Rhein in the district of Neuwied, in Rhineland-Palatinate.

    Looking to the future

    He said that Nokia’s Internet Protocol Multi-Protocol Label Switching (IP MPLS) systems are ready to carry traffic at 800Gbps when the appropriate time comes, and that its optical line terminals and Quillion chipset will “future proof our fibre network”. The Altiplano and NSP controllers support the operator’s plans “to ease and highly automate network operations across the entire region”.

    Matthieu Bourguignon, SVP and Head of Europe Business, Network Infrastructure at Nokia, added, “Deutsche Glasfaser has been a customer of our FTTH access equipment since 2017, so this is a valuable seal of approval to extend our partnership to the IP domain.”

    What do enterprises need from an IoT partner?


    Partner video: Nick Earle, CEO of Eseye, talks to Annie Turner about providing global connectivity and other challenges

    In 2010, the industry predicted 50 billion connected devices by 2020, but there were, in fact, only about 11 billion. It turned out that deploying IoT was more complicated than we bargained for, and the business case harder to prove.

    So, what’s different now?

    Mobile Europe’s Annie Turner speaks to Nick Earle, CEO and Chairman of Eseye, to explore:

    • What are the top challenges that large enterprises face when designing and deploying IoT globally?
    • Why is the device so important to achieving reliable connectivity?
    • What key attributes and characteristics are large Enterprises looking for when selecting an IoT connectivity partner?
    • How should Enterprises measure return on investment and value from their IoT project?

    To learn more about Eseye, please visit www.eseye.com

    Umniah, Ericsson expand 3G and 4G in Jordan’s cities

    They will extend 4G’s footprint to bring mobile data to more people

    Umniah and Ericsson signed a memorandum of understanding (MoU) to expand 3G and 4G networks in cities in the Hashemite Kingdom of Jordan. The MoU was signed by Faisal Qamhiyah, CEO of Umniah (pictured above left) and Kevin Murphy, VP and Head of Ericsson Levant Countries and Country Manager of Ericsson Jordan (above right).

    Mobile operator Umniah was launched in Jordan in 2005, has 3 million subscribers and is a subsidiary of Beyon Group. It claims to be the best and fastest network in Jordan, with coverage across all the country’s governorates.

    This expansion with Ericsson will extend 4G coverage to areas that currently lack it, bringing mobile data services to more people.

    The vendor says its Ericsson Radio System is energy efficient and compact, with a flexible architecture that relies on software to speed the roll out of new capabilities. The portfolio supports multi-band, multi-layer technologies. The implementation will use 10MHz of spectrum including 5MHz refarmed from 3G to 4G and an additional 5MHz acquired for increased network capacity.

    Tecnotree restructures to boost telco offerings 

    The Finnish-Indian BSS software company sheds staff and non-core businesses as it looks to recruit based on customer need

    Tecnotree has announced it has completed its restructure, which saw it lose 116 staff in Finland, India and the Middle East. The restructure, first announced in March, also saw the company rationalise its non-telco resources in North America as it seeks to optimise its investments and R&D. 

    In North America, Tecnotree terminated its TrustStar product offering to the American mortgage market, resulting in savings of €1.5 million in sales, marketing, and infrastructure expenses. The AI-ML resources working on TrustStar have now been refocused on its core telco BSS AI-ML activities globally.  

    Tecnotree said it is also “actively working” on the in-sourcing of long-term contractors in low-cost jurisdictions near its customers. While this may result in increased headcount, it will impact opex positively by approximately €200,000 in 2024. The opex impact of these efficiencies will result in cost savings of €4.5 million in 2024 and a further €7 million in 2025. 

    “As our Digital Stack achieves maturity in the market, we have been able to optimize our investments in R&D spend,” said Tecnotree CEO Padma Ravichander (above). “As the company seeks to grow in subscription-driven economies of the Middle East, Europe and North America, the company will look to add opex in those regions, subject to order wins. For existing markets and customers, the need of the hour is near-shoring for effective agile delivery. This will further align our opex to where our revenues are earned.”  

    Middle East becoming important for the company 

    The rationalisation comes at a time when the company has been looking to drive global sales of its digital platform. Last month it reported Q1 revenues of €16.3 million (up 4.7%). The growth was primarily driven by new wins in the LATAM region, while EMEA and APAC regions remain growing markets for the digital platform. Tecnotree secured Nuh-Digital in Brazil, increasing the number of MVNO wins in a new market. Tecnotree Moments platform was selected by Global Hitss to elevate e-health services across Mexico and Latin America, promising streamlined operations across the healthcare value chain. 

    Tecnotree’s order book grew to €74.8 million, and the growth was supported by LATAM, Middle East, Africa, and APAC. The Middle East is becoming increasingly important for the company where it is expanding its offerings. In February, it secured a new multimillion-dollar deal with Jordan telco Umniah, which has around three million subscribers.  

    Under this project, Umniah is upgrading its current CRM system with a new CX implementation targeting improvements in NPS, reduction of churn and an increase in ARPU. As part of the agreement, Tecnotree will deploy its Sensa AI-embedded BSS applications, streamlining the AI development lifecycle and driving faster time to value across products. 

    The applications will include a veritable shopping list of product catalogues, digital self-care and customer management, catalogue-driven order management, demand generation through omnichannel campaigns and configurable integration with leading social networks, lead and funnel management, enterprise workforce management with business process orchestration, business intelligence, and analytics capabilities, enterprise and consumer service request management, partner management, E-Shop (marketplace) & B2B self-care portal.   

    AWS partners Mavenir to co-invest in developing telco cloud 

    The companies claim the partnership will “revolutionise” the deployment of telecom workloads running on AWS

    Amazon Web Services (AWS) has signed a five-year strategic collaboration agreement (SCA) with long-time partner Mavenir that will see the companies jointly architect Mavenir’s technology to streamline the development, testing, integration, and application of cloud-native solutions. They said that together they’ll create “a new telco-grade deployment model that is set to transform how operators launch 5G, IMS (IP Multimedia Subsystem), Radio Access Network (RAN) and future network technologies”. 

    As part of the agreement Mavenir and AWS will co-invest in developing functionalities, such as enhanced dynamic autoscaling, automation, and reliability enhancements, for telcos to enable migration to AWS. 

    “Building on our long-term partnership and individual leadership in cloud technology, Mavenir and AWS are now working together to leverage the key attributes of the public cloud to enable unprecedented adaptability of telco workloads,” said Mavenir EVP and chief technology and strategy officer Bejoy Pankajakshan. “As part of this collaboration, we are combining our strengths to support the ambition of operators to migrate into the public cloud, delivering uniquely optimised solutions that ensure fast and efficient deployment – and a dramatic reduction in total cost of ownership.”  

    Beyond technical collaboration and co-development, the two companies will also focus on accelerating market adoption: “with clear targets set to accelerate market adoption, bringing the value and power of cloud-based network solutions to telcos and their customers across the globe”, according to Mavenir EVP of emerging business and partnerships Aniruddho Basu.  

    Not only a private wireless play  

     In a market that is still evolving, it makes sense for Mavenir to gain wider exposure to AWS’s telco channel to offer its cloud-native and containerised approach to software development. In fact, Mavenir has been a bit of a go-to telco vendor for AWS as the cloud giant has been honing its virtualised network offers with the likes of Dell and VMware. 

    One conspicuous partner, Deutsche Telekom, has combined AWS and Mavenir technology in a few proofs of concept. For example, last October, the telco built a private 5G wireless platform featuring AWS services and infrastructure, VMware’s Telco Cloud Platform, Mavenir radio access network (RAN) and core functions – all based on Open Grid Alliance architecture running on Dell Technologies equipment. 

    As part of the demo, DT showed it could link disparate private 5G SA networks using hardware and software all managed from a single interface. It linked networks in Prague with Seattle (T-Mobile US) via Mavenir 5G core function hosted on AWS infrastructure in Bonn, utilising AWS’s Integrated Private Wireless platform.  

    Mavenir has its own track record with DT as well, being part of the telco’s multi-vendor cloud-based Next Generation IP Multimedia Subsystem (NIMS), which now handles billions of voice minutes a year. 

    At MWC, Vodafone, MCE and Mavenir showcased a demo of enhancing consumer interaction with generative AI. The consumer journey included device and network support for latency-sensitive applications, using AWS Wavelength to achieve the QoE required for device care. For network support, they used Nvidia’s DGX platform on AWS together with Mavenir to deliver root-cause analysis of network degradation in real time, using AWS generative AI services. 

    Unlock new value and innovation with network digital twins

    Partner content: This approach gives network operators the chance to experiment without the risk of disruption to live infrastructure and services

    The innovation gap between leading cloud players and communication service providers (CSPs) just seems to keep growing. Both provide essential infrastructure for our digital world, and both have millions of users and billions of dollars depending on their technology. Yet this hasn’t stopped hyperscalers from experimenting, exploring new ideas, and bringing new features to users at a breakneck pace. Why can’t CSPs do the same?  

    One big reason is that they’re playing by a different set of rules. Telco networks are highly regulated, often designated as critical infrastructure. In many markets, CSPs are legally barred from experimenting in the live network. Even when they’re not, any network change that impacts services can lead to loss of revenue, hefty fines, regulatory scrutiny, even lawsuits.

    Today, there’s a way for CSPs to expand their ability to innovate without the risk of disrupting live services: digital twins. By maintaining highly accurate virtual models of the network, CSPs can realize a long list of operational benefits. They gain the freedom to experiment in new ways, especially in radio access networks (RANs), the most complex and hard-to-emulate part of the network.

    Most critically for the future, digital twins provide a platform for training a new generation of artificial intelligence and machine learning (AI/ML) algorithms that can empower telco networks to self-optimize – for performance, spectral efficiency, power consumption, and more.

    These changes have the potential to transform telecom, accelerating the cloudification of telco environments and enabling CSPs to tap into groundbreaking innovations from third-party software developers. For CSPs hoping to take advantage of digital twins, however, it’s essential to understand the full value of telco network data—and of maintaining an open network.

    What is a digital twin?

    Digital twins provide highly accurate virtual models of CSP networks for the lab. Using advanced simulation and emulation tooling, they create a carbon copy of the production network, including real subscriber data that’s been sanitized to remove personally identifying information. As a result, digital twins allow network engineers to validate new applications and changes, and more thoroughly understand how they will behave before pushing them into production.

    Network digital twins are not static; they continually ingest new data to maintain the most accurate possible model, creating a positive feedback loop. As changes occur in the network and CSPs continue feeding more data into digital twins, they develop increasingly higher-resolution models and increasingly accurate predictions. This enables operators to:

    • Accelerate onboarding of new technologies – CSPs can onboard new network functions and applications more quickly and confidently. Engineers can validate that changes will function as expected—and identify when they won’t—before they go live.
    • Improve network performance and security – Digital twins can help CSPs identify emergent issues impacting performance and continually optimize the network. For example, operators can run congestion scenarios to see how cells behave when many subscribers make video calls at once, and take proactive actions to improve performance and resiliency (a toy sketch of such a scenario follows this list). Digital twins can also provide a sandbox to simulate network attacks, so operators can understand how compromises would affect the network.
    • Gain a competitive edge – To help show customers that they have the fastest, most reliable network, operators can capture analytical data in digital twins and use speed tests like Ookla to measure their performance versus competitors.
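    As a toy illustration of the congestion what-if mentioned above, the sketch below runs a simple Monte Carlo check on a single simplified cell. The capacity, per-call bitrate and subscriber counts are invented assumptions; a real digital twin would use calibrated models and live network data.

```python
# Toy what-if: how often does one cell saturate if many subscribers start video calls at once?
# All numbers are invented for illustration; this is not any vendor's digital twin model.
import random

CELL_CAPACITY_MBPS = 750.0   # assumed usable downlink capacity of the cell
VIDEO_CALL_MBPS = 2.5        # assumed per-call demand
SUBSCRIBERS = 400            # assumed subscribers camped on the cell


def overload_probability(p_call: float, trials: int = 10_000) -> float:
    """Fraction of trials in which simultaneous video calls exceed cell capacity."""
    overloads = 0
    for _ in range(trials):
        active_calls = sum(random.random() < p_call for _ in range(SUBSCRIBERS))
        if active_calls * VIDEO_CALL_MBPS > CELL_CAPACITY_MBPS:
            overloads += 1
    return overloads / trials


for p in (0.6, 0.7, 0.8, 0.9):
    print(f"P(subscriber on a video call) = {p:.0%} -> overload probability ~ {overload_probability(p):.1%}")
```

    A digital twin runs the same kind of exercise, but against an emulated copy of the real RAN rather than a two-parameter cell model, which is what makes its predictions actionable.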

    Digital twins can also reduce reliance on expensive manual operations like drive testing. Today, operators send teams of technicians thousands of kilometers across their markets to directly measure network performance. The cost of these efforts is substantial, but they’re the only way to identify if, for example, a recently erected building is now blocking a cell and impacting performance. With digital twins, engineers can explore how changing morphology and environmental factors affect RF dynamics virtually. They can measure how changes—like moving a cell two degrees in one direction or another—will affect performance, without leaving the lab.
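    The “move a cell two degrees” example can be approximated with textbook radio formulas. The sketch below combines free-space path loss with a simple parabolic horizontal antenna pattern (of the kind used in 3GPP simulation assumptions) to estimate how a small azimuth change shifts received power at one point; the site geometry, powers and gains are assumptions chosen purely for illustration.

```python
# Estimate received power at one point before and after re-azimuthing a cell by two degrees.
# Uses free-space path loss plus a parabolic horizontal antenna pattern.
# All geometry and power values are illustrative assumptions, not measured data.
import math

TX_POWER_DBM = 46.0      # assumed cell transmit power
ANTENNA_GAIN_DBI = 17.0  # assumed peak antenna gain
FREQ_MHZ = 3500.0        # n78 band
THETA_3DB_DEG = 65.0     # assumed horizontal 3 dB beamwidth
A_MAX_DB = 30.0          # maximum pattern attenuation


def fspl_db(distance_km: float, freq_mhz: float) -> float:
    """Free-space path loss in dB, with distance in km and frequency in MHz."""
    return 20 * math.log10(distance_km) + 20 * math.log10(freq_mhz) + 32.44


def pattern_loss_db(offset_deg: float) -> float:
    """Horizontal pattern attenuation: A(theta) = min(12 * (theta / theta_3dB)^2, A_max)."""
    return min(12.0 * (offset_deg / THETA_3DB_DEG) ** 2, A_MAX_DB)


def rx_power_dbm(distance_km: float, offset_deg: float) -> float:
    return TX_POWER_DBM + ANTENNA_GAIN_DBI - pattern_loss_db(offset_deg) - fspl_db(distance_km, FREQ_MHZ)


# A user 1.2 km away, 30 degrees off the current boresight; the azimuth then shifts 2 degrees further away.
before = rx_power_dbm(1.2, 30.0)
after = rx_power_dbm(1.2, 32.0)
print(f"before: {before:.1f} dBm, after: {after:.1f} dBm, change: {after - before:+.2f} dB")
```

    A production-grade twin replaces these closed-form approximations with ray tracing over detailed 3D morphology, which is where the compute cost discussed in the next section comes from.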

    Why now?

    The concept of digital twins for telecom isn’t new. The ability to test via realistic simulation has been something of a holy grail for the industry for years. Until recently though, it wasn’t viable, simply due to the computational resources needed to construct realistic models. RAN environments in particular are so dynamic and complex, the cost of compute needed to perform accurate simulations of dynamic subscriber mobility use cases and their associated impact on network performance at real-world scale was astronomical.

    Today, declining compute costs, along with advances in network emulation, have made digital twins both practical and economical. These changes couldn’t have come at a better time. As telco environments grow more digitalized and virtualized, even minor fluctuations in the network environment can have a major impact. Overloaded data centers, unexpected congestion, and changing application mix can all affect performance. This makes traditional lab testing, which typically reflects ideal conditions, far less reliable.

    Additionally, as CSPs adopt software-driven architectures, faster testing becomes essential. Instead of processing network updates two or three times per year, updates now come in constantly, some of which (like security patches) must be pushed into production within hours. Without continuous testing under realistic conditions, modern network operations simply won’t work.

    Unleashing AI innovation

    Perhaps the biggest driver for telco digital twins is the exploding use of AI/ML—for service assurance, capacity planning, and most importantly, to automate complex network operations. And the RAN is ground zero for applying AI/ML algorithms to react to changing real-time conditions via automated closed-loop actions.

    Modern Open RAN architectures feature a RAN intelligent controller (RIC), an open platform to run applications in the most dynamic and demanding part of the network, closest to subscribers. We’re already seeing an ecosystem of groundbreaking third-party RIC apps that use algorithmic analysis to optimize spectral efficiency, reduce power consumption, and more. The AI-driven RIC apps of the future will be even more transformative—but only if their algorithms have realistic data to train on.

    It’s one thing to emulate a network. It’s quite another to emulate accurately enough to train algorithms—at least, algorithms that CSPs would trust to take autonomous actions in their networks. In fact, developers working in this space invariably say that their biggest barrier is the ability to test new applications under lifelike network conditions. The second-biggest barrier: lack of telco data sets to train against.
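    To illustrate why training-data realism matters, the toy sketch below fits a simple scikit-learn classifier on synthetic “emulated” KPI samples and then checks it against a differently distributed “live-like” set. Every feature, distribution and threshold here is an invented assumption; it is not a description of any RIC application or vendor pipeline.

```python
# Toy check: does a model trained on idealised "emulated" KPIs still hold up on shifted "live" data?
# All distributions, features and labels are invented for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)


def make_kpis(n: int, prb_mean: float):
    """Generate (PRB utilisation %, active users) samples and a synthetic 'congested' label."""
    prb = np.clip(rng.normal(prb_mean, 15.0, n), 0.0, 100.0)
    users = rng.poisson(60, n)
    congested = (prb + 0.3 * users + rng.normal(0.0, 5.0, n)) > 90.0
    return np.column_stack([prb, users]), congested.astype(int)


# "Emulated" training data with a lab-like load profile, versus a heavier live-like profile.
X_train, y_train = make_kpis(5000, prb_mean=50.0)
X_live, y_live = make_kpis(2000, prb_mean=70.0)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("accuracy on emulated data :", round(accuracy_score(y_train, model.predict(X_train)), 3))
print("accuracy on live-like data:", round(accuracy_score(y_live, model.predict(X_live)), 3))
```

    How large that gap turns out to be depends entirely on how closely the training distribution matches live conditions, which is exactly the fidelity argument for training RAN algorithms against a high-quality digital twin rather than idealised lab traces.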

    Looking ahead

    Most CSPs are still in the early stages of developing digital twins. In the future though, they will almost certainly be used across the telco ecosystem for certification and standardized testing. The ability to capture more network data, and provide a means to train algorithms and validate new technology via realistic emulation, will become a foundational CSP requirement.

    At the same time, CSPs shouldn’t view these changes as merely requirements. They also hold enormous potential to unlock new value. If anything, explosive growth in AI/ML training in every industry should underscore just how valuable CSP network data truly is. This assumes, of course, that CSPs maintain an open network, where they can actually access their massively valuable data in the RAN. If CSPs can’t access fine-grained network data, they can’t use or monetize it. And they’ll continue to struggle to optimize their network operations—much less close the innovation gap.

    About the author

    Johannes (Janco) Terblanche is Business Development – OpenRAN/vRAN at VMware by Broadcom
