
Is multi-cloud a sensible strategy for the reasons you think?

Annie Turner mulls multi-cloud through the lens of the Pentagon spending up to $9 billion with four cloudcos

The Pentagon put multi-cloud firmly in the spotlight when it awarded contracts collectively worth up to $9 billion for its Joint Warfighting Cloud Capability (JWCC) in December. The JWCC is the multi-cloud successor to the Joint Enterprise Defense Infrastructure (JEDI) – the IT modernisation project awarded solely to Microsoft Azure in 2019 that was supposed to run for 10 years. The remit was to build a massive, common commercial cloud for the Department of Defense (DoD).

The choice of provider was controversial from the start. AWS started legal proceedings almost immediately, claiming the award was influenced by President Trump’s very public dislike of Amazon and its founder Jeff Bezos. Other parties expressed concerns about such a big contract going to a single provider. Hence JEDI was officially terminated in July 2021 and a multi-cloud approach was taken this time.

JWCC takes over from JEDI

The JWCC contracts went to the world’s three largest cloud providers – Alphabet’s Google, Amazon Web Services (AWS) and Microsoft – plus Oracle. The separate contracts will run until 2028 and provide the DoD with “enterprise-wide, globally available cloud services across all security domains and classification levels”, according to the official announcement.

The idea behind having more than one supplier for government agencies to choose from is that it will help keep prices down and spur innovation. Also, few organisations are more concerned about security, resilience and scale than the DoD and, on the face of it, multi-cloud ticks all those boxes and diversifies risk. But does multi-cloud really deliver?

The failings of failover

Ross Brown, SVP of Partners and Services at Oracle, tweeted in December 2021, when the JWCC was still a mote in the Pentagon’s eye: “Failure is inevitable, planning for it shouldn’t be held back because of an anti-customer strategy to hold their systems hostage by artificially high egress and inter region transfer costs to spur single cloud development models.”

In other words, if one cloud fails, organizations need to have failover to another. What Brown perhaps somewhat disingenuously calls “anti-customer strategy” could also be seen as each cloudco differentiating its offerings with different network architectures and attributes, and varied storage, security and Platform-as-a-Service capabilities. Presumably the government agencies covered by the contract will choose the cloud platform that best meets their needs.

Opponents of failover argue that, among other things, it imposes an immense burden on application developers, and that running everything in parallel just in case is horrifically expensive, time-consuming and wasteful. One approach to making failover as straightforward as possible would be to stick to the lowest common denominator of cloud offerings, but of course this also minimises the advantages and innovation.

Some industry commentators argue regulators’ enthusiasm for failover is due to poor understanding of how big public cloud platforms work. Gartner’s Distinguished VP Analyst, Lydia Leong, thinks cloud failover is “almost always a terrible idea” and outlines her reasons in this blog. She likens insisting on failover to another cloud to forcing commercial airlines to maintain backup fleets of aircraft from a different manufacturer in case a software glitch grounds their main fleet.

Leong argues, “The huge cost and complexity of a multi-cloud implementation is effectively a negative distraction from what you should actually be doing that would improve your uptime and reduce your risks, which is making your applications resilient to the types of failure that are actually probable.”

Reinforcing dominance

Yulia Gontar, Strategic Growth Executive at Super Protocol, doesn’t think inoperability is the main worry regarding the JWCC so much as “the threats it may open up.” Super Protocol is built as a massive ecosystem of interoperable solutions and services, with the aim of decentralising cloud computing and giving it back to the community, enabling any party to communicate with any other party securely, using confidential computing everywhere (see below).

Gontar says the Pentagon’s contract will reinforce the immense market dominance of the companies involved. Already, the four cloudcos chosen by the Pentagon control over two-thirds of the global market and just two of them – AWS and Microsoft Azure – account for over 60% of it, according to Gartner and others.

On the other hand, other US government departments, the European Commission and regulators the world over are on a mission to curb Big Tech’s overweening market power, which is seen as stifling competition and innovation. For example, on the day this article was completed, the US Department of Justice sued Google over its dominance of digital advertising and stated its intention to break the company up to counter that dominance.

Central problem of centralisation

As well as so much being in so few hands, there is also the issue of central control. “These large public cloud providers have a lot of servers and data centres distributed all over the world but they are all interconnected to one closed platform…and have some central authority that decides what can and cannot be done,” says Gontar.

On the day this article was completed, Microsoft Azure suffered an outage, potentially affecting millions of people around the globe who couldn’t access applications like Teams and Outlook. At the time of writing, it wasn’t clear how many people had been hit, but CNN reported Microsoft had identified a network connectivity issue with devices across its wide area network, affecting connectivity both between clients on the internet and Azure, and between services within its data centres.

A kick up the breaches

Outages aren’t the only concern about centralisation. Nowadays even the largest data breaches no longer attract the headlines and outrage they used to. Instead, they are regarded as a regrettable but unavoidable fact of life. Nor are data breaches only caused by cyberattacks. Deliberate data leaks, such as that perpetrated by whistle-blower Edward Snowden, may have nothing to do with cloud, but they underline what a sitting duck massive, centralised caches of data can be.

Nor are all leaks deliberate, Gontar points out. In summer 2022, it was reported that details about more than 1 billion Chinese citizens were leaked from the Shanghai Police’s repository on Alibaba cloud, which is part of the Chinese government’s private security network. The cache was offered on a cybercrime forum for 10 Bitcoins, the equivalent then of about $200,000.

Likewise, “The Microsoft data leak in 2022 was due to the misconfiguration of a server,” she adds. More than 65,000 companies had their data exposed because an endpoint was publicly accessible over the internet without requiring proper authentication. This would seem to undermine a key selling point of cloud: that even if another cloud tenant’s data or other resources are breached, every tenant is insulated from the others.
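The class of flaw described is simple to picture. Below is a minimal, hypothetical sketch in Python using Flask – the endpoint names and token scheme are illustrative, not Microsoft’s actual configuration. The first route hands data to anyone who can reach it; the second rejects callers without a valid credential.

```python
# Hypothetical sketch of an exposed endpoint versus an authenticated one.
# Requires Flask (pip install flask); names and tokens are illustrative only.
from flask import Flask, abort, jsonify, request

app = Flask(__name__)
VALID_TOKENS = {"example-secret-token"}  # in practice, a real identity provider

@app.route("/misconfigured/customer-data")
def misconfigured():
    # The misconfiguration: anyone who can reach this URL gets the data.
    return jsonify({"customers": ["..."]})

@app.route("/secured/customer-data")
def secured():
    # The missing control: reject requests without a valid credential.
    token = request.headers.get("Authorization", "").removeprefix("Bearer ")
    if token not in VALID_TOKENS:
        abort(401)
    return jsonify({"customers": ["..."]})

if __name__ == "__main__":
    app.run()
```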

Yet researchers made a frightening discovery about Microsoft Azure in August 2021, described in Protocol magazine in summer 2022: “They reported gaining access to databases in thousands of customer environments, or tenants, including those of numerous Fortune 500 companies. This was possible because the cloud runs on shared infrastructure – and as it turns out, that can uncover some shared risks that cloud providers thought were solved problems.” And so did cloud users.

Fortunately, those who hacked Microsoft’s Cosmos DB service were not cybercriminals, but researchers from Wiz, a cloud security start-up. They called the vulnerability ChaosDB. According to Shir Tamari, Head of Research at Wiz, a cross-tenant flaw like ChaosDB is “the most severe vulnerability that could be found in a cloud service provider”.

So far, there has not been a multi-tenancy cyberattack – or not one that’s been made public – but that could change. A cross-tenant vulnerability was also discovered in Oracle Cloud in September 2022 by some of the same researchers. This weakness would have allowed an attacker to gain read/write access to the disks of other customers. The vulnerability was mostly caused by a lack of permissions verification in an API for storage expansion.
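To see why a missing permissions check matters so much in a multi-tenant API, consider this hypothetical Python sketch (the names are illustrative; this is not Oracle’s code). The vulnerable handler returns whichever disk the caller names; the fixed one first verifies that the disk belongs to the calling tenant.

```python
# Hypothetical sketch of a cross-tenant flaw in a storage-expansion API.
from dataclasses import dataclass

@dataclass
class Disk:
    disk_id: str
    owner_tenant: str

DISKS = {
    "disk-a": Disk("disk-a", "tenant-1"),
    "disk-b": Disk("disk-b", "tenant-2"),
}

def attach_disk_vulnerable(caller_tenant: str, disk_id: str) -> Disk:
    # No ownership check: any authenticated tenant can name any disk ID.
    return DISKS[disk_id]

def attach_disk_fixed(caller_tenant: str, disk_id: str) -> Disk:
    disk = DISKS[disk_id]
    # Verify ownership before granting read/write access to the disk.
    if disk.owner_tenant != caller_tenant:
        raise PermissionError("disk does not belong to the calling tenant")
    return disk
```

In the vulnerable version, tenant-1 can attach tenant-2’s disk simply by guessing its ID – exactly the kind of read/write exposure the researchers describe.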

Zero-trust approach

Obviously, security is top of mind for the Pentagon and in 2022, ahead of awarding the $9 billion contracts, the DoD announced it would adopt a zero-trust strategy, which it defines as an “evolving set of cybersecurity paradigms that move defenses from static, network-based perimeters to focus on users, assets, and resources. At its core, ZT assumes no implicit trust is granted to assets or users based solely on their physical or network location.”

ZT relies on general-purpose computing, which requires confidential computing as the baseline. Confidential computing is technology that isolates and encrypts data while it is being processed, through exclusive control of encryption keys. Data has been protected by encryption at rest (in storage or databases) or in transit for years, but not during processing or runtime.
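The gap is easy to demonstrate. In the hedged Python sketch below (assuming the widely used third-party cryptography package), data is ciphertext at rest and in transit, but must be decrypted into ordinary memory before it can be processed; that processing window is what confidential computing protects by keeping the cleartext inside a hardware-isolated enclave.

```python
# Minimal sketch of the 'data in use' gap that confidential computing closes.
# Assumes the third-party 'cryptography' package (pip install cryptography).
from cryptography.fernet import Fernet

key = Fernet.generate_key()
fernet = Fernet(key)

# At rest / in transit: only ciphertext is visible.
ciphertext = fernet.encrypt(b"patient-record-123")

# In use: conventional clouds must decrypt into ordinary RAM, where a
# compromised host OS, hypervisor or co-tenant exploit could read it.
plaintext = fernet.decrypt(ciphertext)
result = plaintext.upper()  # processing happens on cleartext

# Confidential computing runs this processing step inside an encrypted,
# hardware-isolated enclave (e.g. Intel SGX, AMD SEV), so the decrypted
# bytes are never exposed to the host or the cloud provider.
print(result)
```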

Confidential computing makes the data itself, and the tech used to protect it, invisible and unknowable to anything and anybody else, including the cloud provider. It is intended to inspire greater confidence about how well data in the public cloud is protected, but it is not universally available nor uniformly deployed, and lacks standards. Work to address these issues is underway in the Confidential Computing Consortium, but AWS, which has about 40% market share, is conspicuous by its absence.

Confidential computing offers a way to secure data in the public cloud as required by regulations like Europe’s General Data Protection Regulation and the US’ Health Insurance Portability and Accountability Act.

Gontar concedes that the cloudcos awarded the JWCC Pentagon contracts already offer confidential computing in a sense, but argues “because they are so large and centralised, with a very long history of developing infrastructure, they would not be able to transform their whole global infrastructure into this kind of confidential continuity quickly and it is not yet in place [holistically]”.

She also looks ahead to the potential of the metaverse largely being controlled and run on a handful of platforms, generating “huge scale personal data, which combines the real and the virtual worlds, including data about behaviours of people in a digital environment. This will pose a significant, even larger threat to people’s privacy and identity if breached.”

Gontar’s view is that the only way to overcome the potential threats of these big trends is to ensure people own their decentralised, digital identities, and indeed governments, including the US, are moving in that direction. “They have understood and are at the stage of piloting decentralised identity projects. If identities are owned by people themselves and are verifiable and trustworthy, then mass attacks will not happen and the national threat would be much lower,” she says.

“Unless you become decentralised and include open source you will be exposed to these vulnerabilities and it’s just a question of time before a data leak happens, accidental or malicious.”

Why trusting nobody is the best option

Gontar is not arguing for private cloud in preference to public cloud because “with private cloud, you still have massive amounts of sensitive data, which is still vulnerable to attack or a leak or breach without decentralisation”. Super Protocol’s world view is a trustless and permissionless cloud infrastructure where there is no central control and any party can interoperate with any other party, so long as both sides agree. They just develop their own solutions and use decentralised IDs.

In a decentralised, confidential computing cloud, although no-one ‘trusts’ anybody else, parties can work together because they don’t share the data, which speeds things up. For example, suppose a citizen wants to use the services of a small business supplied on behalf of a government agency: the firm must verify the user is who they claim to be. In a decentralised environment running open source, this can be done without recourse to the government for documentation.
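A minimal sketch of that idea, assuming an Ed25519-signed credential in the style of verifiable credentials (all names are illustrative): the government issuer signs once and publishes its public key, and the business can then verify a citizen’s credential entirely offline.

```python
# Hypothetical sketch of offline verification of a decentralised credential.
# Requires the third-party 'cryptography' package.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# The government issuer signs the citizen's credential once, in advance.
issuer_private_key = Ed25519PrivateKey.generate()
issuer_public_key = issuer_private_key.public_key()  # published openly
credential = b'{"subject": "citizen-42", "entitlement": "service-x"}'
signature = issuer_private_key.sign(credential)

# The small business checks the credential against the issuer's public key
# alone; no call back to the government is needed.
try:
    issuer_public_key.verify(signature, credential)
    print("credential verified offline")
except InvalidSignature:
    print("credential rejected")
```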

She says, “The whole infrastructure is being developed for everybody, at scale and… is much more advantageous than closed, centralised cloud providers and their markets, and the economies they create and impose on the whole world at the moment.”

Juniper worked wonders for our backbone – VMO2 

Users see a difference

UK fixed and mobile service provider Virgin Media O2 has upgraded its IP core backbone network with Juniper Networks so it can now serve 50 million online media connections on an 800 Gbps spine. VMO2 briefed Juniper to build a network that’s 400G-enabled and 800G-ready, with the promise of a low cost of ownership and minimal energy consumption, following the joint venture that created the company in 2021.

In the past year, Virgin Media O2 has seen 32% traffic growth on its mobile network, 16% growth in broadband data downloads and a peak traffic load of 22TB. In the upgrade, it successfully migrated all core traffic in its six backbone locations across the UK to Juniper Networks PTX10008 Packet Transport Routers, which are purpose-built to deliver cloud-optimized network transformation at scale and with operational flexibility.

The modular PTX10008 routers use less electricity as a result of three main features: front-to-back cooling, flexible power supplies that can use either high or low-power modes, and high-density port capability that creates a compact form factor.

The router is scalable yet secure at 400G thanks to its ultra-high port density, native inline MACsec encryption on all ports and a new ASIC that powers a range of line cards, which also support optical transceivers without any density loss. Virgin Media O2’s infrastructure will be underpinned by Juniper’s single operating system, Junos OS Evolved, which is designed to create continuity.

“Service providers face a complex set of technical and commercial challenges that can be contradictory, intractable and likely to multiply in the future without the right approach,” said Virgin Media O2 CTO Jeanie York, adding that the operator chose “silicon, software and automation from Juniper that can turn complexity on its head to deliver a simplified, more reliable and more sustainable network foundation.”

Update: Demand for cloud continues to soften for Microsoft

Never mind, the next big platform is AI and the software giant just invested $10 billion

Slowing demand for cloud proved a drag on Microsoft’s Q4 earnings, after Azure had driven Microsoft’s growth for so long. Azure growth was forecast at 38% year on year before adjustments for currency fluctuations and 31% after them, according to a forecast for the final quarter of last year issued ahead of the official Q4 2022 earnings report.

This was a little better than market expectations: Microsoft now expects to have revenue of $50 billion to $51.5 billion in the final quarter of last year.

In Q3, Microsoft missed analysts’ growth expectations and CEO Satya Nadella told an analyst meeting, “In this particular period, I think we are going to optimise for long-term customer loyalty,” as quoted in the Financial Times.

Now he acknowledges that customers are optimising the contracts they have in place already rather than looking to increase spending. Last week the company said it would fire 10,000 staff, almost 5% of its workforce, in the face of an adverse economic climate.

Firing tens of thousands of employees is the current fashion in Big Tech but, as a number of commentators have pointed out, the companies’ headcounts still remain way above where they were pre-COVID.

Cloud growth might be slowing, but it is still performing way better than Microsoft’s software business, which is suffering as demand for PCs plummets. Overall, its operating profit margin is expected to fall by 1%.

Still, earlier in the week Microsoft said it will make a “multibillion-dollar investment” in OpenAI, the company behind the ChatGPT bot. Reportedly, OpenAI was seeking $10 billion from Microsoft at a valuation of $29 billion, but details of the deal are not public.

Nadella said in a corporate blog: “We formed our partnership with OpenAI around a shared ambition to responsibly advance cutting-edge AI research. In this next phase of our partnership, developers and organisations across industries will have access to the best AI infrastructure, models and toolchain with Azure to build and run their applications.”

There are lots of upsides to this deal. First, Microsoft is the exclusive cloud provider to the most famous AI company on the planet, even though it’s still a fledgling. Second, it now has a big stake in the most valuable AI start-up, which is having no trouble attracting investment, downturn or no. Third, Microsoft can add AI-powered productivity tools to its Office software suite and make AI tools, models and infrastructure available through its cloud offer.

Or as Nadella put it in his comments on Microsoft’s December numbers: “We fundamentally believe the next platform wave will be AI”.

According to the New York Times, Google’s founders, Larry Page and Sergey Brin, have returned to the fold (having left Google in 2019) to support the monolith’s AI strategy. Rumour has it that ChatGPT is causing sleepless nights at Schloss Google. And if the US Department of Justice has its way, the monolith could be broken up: the DoJ launched an anti-trust lawsuit against Google yesterday.

Extraordinary EU powers will get Gigabit done

Euractiv reveals Gigabit Infrastructure Act

Access to public buildings, coordination of civil works, streamlining of permit procedures and single information points are at the centre of the EU executive’s legislation to fast-track the deployment of high-capacity networks like 5G, according to an undated draft obtained by news agency Euractiv. The European Commission is due to present a regulation on measures to reduce the cost of deploying gigabit electronic communications networks, the Gigabit Infrastructure Act. The proposal is a revision of the Broadband Cost Reduction Directive (BCRD), Euractiv’s technology editor Luca Bertuzzi has revealed. 

The Directive was adopted in 2014 to cut costs and expedite the preparation for a high-speed digital infrastructure. However, it was largely ignored or misinterpreted by the member states, prompting the Commission to move toward a regulation that does not need to be transposed into national law. “This regulation aims to facilitate and stimulate the rollout of very high-capacity networks by promoting the joint use of existing physical infrastructure and by enabling a more efficient deployment of new physical infrastructure so that such networks can be rolled out at a lower cost and faster,” the draft says.

The revision of the BCRD was made more urgent by the pressing need to invest in upgrading digital infrastructure to keep up with growing bandwidth demand. The BCRD introduced the obligation for telecom providers to give other operators rolling out elements of new communications networks access to their physical infrastructure. The definition of network operator has been extended to include providers of physical wireless infrastructure, such as tower companies. This increasingly common business model plays a pivotal role in 5G deployment.

The draft Act extends the access right to all infrastructure owned or controlled by public sector bodies that could support network deployment, with exceptions for reasons of public security, safety and health, or to protect buildings of historical value.

Vodafone did 100 Mbps in built-up areas, as EE, Three and O2 all speed up

Latest data traffic report from Ookla

A new data traffic report shows that data in the UK is moving progressively faster, but nowhere near the speed limits advertised by the operators. The latest 5G speed race in the UK was deadlocked, with 5G median download speeds impressing across the board, according to tests conducted by RootMetrics, a division of network intelligence company Ookla. The lack of a clear winner is a boost for UK mobile users, claimed RootMetrics, as 5G availability rose above 40% in many of the UK’s biggest cities. Three UK hit the highest speed, but EE came top in most categories.

The results of the UMPR, based on over 600,000 network tests in 16 major metropolitan areas across the UK, give every mobile operator something to brag about in its marketing. The accolade for the fastest aggregate median download speed went to EE, the most notable improvement in download speeds was made by Three UK, the best uplift in availability came from Vodafone and the best text performance was attributed to Virgin Media’s O2 network.

The report findings for each operator were summarised thus: EE boasts the fastest aggregate median download speed in the UK, at 58.5 Mbps, and ranked first in all seven UK RootScore categories for the third straight time. Three saw a notable improvement in its overall median download speeds (across all network types – not just 5G) in major cities for the fourth straight test period, while delivering the second-fastest aggregate median download speed in England, Scotland and Wales.

Vodafone’s 5G availability improved in most of the 16 major metropolitan areas tested but the operator did see speed declines in some of those cities. Despite this, its 5G median download speeds remained above 100 Mbps in the majority of cities tested. Virgin Media O2’s text performance was “especially good”, according to Ookla. The operator won or shared ‘RootScore’ Awards for Text in most UK cities tested and finished second only to EE in UK-wide text performance. 

“Overall, it’s a good day for EE, but if the study introduced some awards for 5G-specific categories, then the picture might be a bit different, at least for now,” said independent analyst Mark Jackson at ISPreview.

The methodology of RootMetrics’ study can be read in full here.

Neos Networks MANs up four UK cities

Metro Access Network expansion

UK fibre player Neos Networks has announced live Metro Access Networks in Liverpool, Birmingham and Manchester, and has completed phase one of its London project. The length of fibre in the ground so far is 3.6km in Liverpool, 7.1km in Birmingham and 13.7km in Manchester. Once the London MAN is complete it will comprise 37km of high-capacity fibre.

Neos had announced plans to enter the market for last-mile services at the end of 2021, as part of a Metro Network Expansion programme targeting key cities. With services now live in all four regions, Neos Networks has brought fibre direct to thousands of businesses across these cities. As of January 2023, Neos said it was ahead of schedule on its fourth Access Network build, in London, with four of a planned eight routes now live. The remaining phase of deployment is on track for delivery over the next year. The entire Metro Network Expansion project makes up over 60km of Neos Networks’ national fibre footprint.

With Neos no longer reliant on third-party connectivity across these business hubs it can offer quicker connections at a lower cost of delivery while improving the quality of service. The move gives Neos Networks customers an easier path to upgrade, quicker response for break fixes, and the network characteristics that meet their needs and growth ambitions.

Neos has adapted its products to work over its own Access Tails, which gives it more options to address the connectivity needs of customers in multi-business units (MBUs). New developments in key business districts will benefit from business-grade fibre connectivity to support the adoption of new applications and technologies underpinned by business resiliency, growth and efficiency improvements. 

Neos plans to explore using its Optical Wavelength technology to deliver the right commercial packages for customers, particularly those in MBUs. The new infrastructure will support the data backhaul requirements of mobile network operators as local cell building expands 5G and beyond, according to Sarah Mills, Chief Revenue Officer at Neos Networks. “Our Metro Access Network builds in these four key cities will underpin the growth of UK PLC by creating new opportunities for enterprises and driving investment within several new business districts,” said Mills.

Cellnex shares rise on report of takeover by rival

American Tower and Brookfield thought to be considering a bid

A Reuters report that Spain’s Cellnex could be the target of a takeover by competitor American Tower and asset manager Brookfield sent its shares up more than 8%.

It cited a Spanish website, Okdiario, which quoted unnamed sources saying Goldman Sachs is advising Cellnex and Morgan Stanley is working with the other two.

Prior to the report, Cellnex’s share price had fallen almost 40% over the last year as the number of tower assets left to acquire in Europe fell and rising inflation made debt-leveraged takeovers more expensive.

Earlier this month CEO Tobias Martinez, the architect of the company’s huge expansion in recent years, said he would step down as the company entered a new phase of consolidation and debt reduction.

Cellnex is the biggest towerco in Europe with more than 100,000 sites across 12 countries.

Telcos’ text money is stifled by OTT apps – Juniper Research

Zuckerberg and co are ROFL

Business messages are going ‘over the top’ on WhatsApp while SMS, in text speak, could soon be ADBB (all done bye bye) if current trends persist, says Juniper Research. Over-the-top (OTT) messaging, as organised by social media companies such as Facebook, has tripled in the last five years, while use of the Short Message Service (SMS) is becoming a dated tradition. The trend represents another movement of revenues out of the hands of telcos and into the hands of tech companies.

In its latest forecast, Juniper predicted that global OTT business messaging traffic will increase from 93 billion messages to 254 billion by 2027, representing a 172% rise. According to the researcher, the growth will be driven by “increased availability of open OTT messaging APIs [application programming interfaces] and competitive pricing models”, which will create “a viable rich media alternative to established operator-led channels, such as SMS”.

Pricing is perceived as the biggest motivator for enterprises to switch from operator-led messaging channels, reported Yanitsa Boyadzhieva, Deputy Editor of TelecomTV. Operator led channels include SMS and rich communication services (RCS), while OTT messaging apps are a broad church comprising many faiths, including Facebook’s Messenger and WhatsApp, Tencent’s WeChat and Rakuten’s Viber. 

For end users, OTT is a cheap alternative to SMS and RCS messaging, while for enterprises, OTT platforms cut costs, since businesses are not charged per message or conversation by operators. “The volatility in wholesale SMS business messaging pricing provides an opportunity for OTT messaging platforms to grow their revenue, by offering stability for CPaaS [communications-platforms-as-a-service] platforms when negotiating traffic subscriptions with enterprises,” said Juniper Research author Elisha Sudlow-Poole. The pricing of OTT messaging is identified as “consistently low” and “far less volatile” compared with SMS, Juniper Research argued.

High-spending enterprises are expected to increasingly use OTT business services for their advanced security, rich media properties and features like brand authentication. The use of strict channel verification systems by OTT messaging service platform operators will be critical in ensuring communications are not prone to “the same fraudulent activity found on channels such as SMS”. 

Telco cloud DevOps culture must be more inclusive – report

It’s about the core but don’t forget the colleagues

The modernisation of telcos using DevOps could run into problems unless their roadmaps are shared and their expansion plans are more inclusive, according to a study by DevOps specialist Puppet by Perforce. The software company’s new 2023 State of DevOps Report: Platform Engineering says that rapid progress by DevOps may leave important people behind. It identified team adoption as a step in the right direction.

Platform engineering is brilliant at moving enterprises out of the middle stages of evolution and into the heights, according to the report. The problem is that too many in, say, a telco get sidelined by rapid and unexplained changes. The security teams don’t have time to patch all the settings for all the new virtual systems being created, often because they don’t have an automated system to keep pace. So, while the DevOps teams are thundering into the cloud on a glory hunt, they are leaving behind their own type of legacy – exposures that hackers can exploit and users that are none the wiser. 

Platform engineering is the discipline of designing and building self-service platforms and toolchains in order to minimise the cognitive load on developers and speed up software delivery. The ethos, says the report, is that teams must share infrastructure with the internal users who actually create the value. Typically, software developers and engineers colonise a system and treat it as their platform and a product for their users, rather than as a group IT project with a shared objective.
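As a concrete illustration of the self-service idea (entirely hypothetical, not taken from the report): in the Python sketch below, developers state their intent through one platform call, and the platform team’s security defaults, such as the patching the report says gets skipped, are applied automatically rather than being left to each team.

```python
# Hypothetical sketch of a self-service platform call: developers state
# intent; the platform bakes in patching and security defaults centrally.
from dataclasses import dataclass

@dataclass
class Environment:
    name: str
    size: str
    auto_patch: bool = True      # platform-enforced default
    tls_required: bool = True    # platform-enforced default
    audit_logging: bool = True   # platform-enforced default

def provision(name: str, size: str = "small") -> Environment:
    """Self-service entry point: requesters never touch the security knobs."""
    return Environment(name=name, size=size)

env = provision("payments-test")
print(env)  # security settings come from the platform, not the requester
```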

“Enterprises with more mature DevOps practices tend to use platform teams,” said Ronan Keenan, Research Director at Puppet by Perforce. “This doesn’t mean you must adopt a platform team model to be good at DevOps. Rather, it’s that a platform team is a well-defined and proven path to succeed with DevOps at scale.”

Platform engineering can produce meaningful benefits across an entire organization and unlock DevOps success for the enterprise, according to Nigel Kersten, CTO of Puppet by Perforce. To achieve this, enterprises have to create an inclusive culture, continuously invest in the platform team, connect with users through functional feedback loops and constantly evolve the product management skills in the team. Only then will they have a hope of creating “fast flow delivery and an ongoing reduction in cognitive load for developers,” said Kersten, who is also co-author of the State of DevOps Report. “As firms beef up hiring for platform teams, they must prioritise product management skills, not just core engineering.”

The emergence of platform engineering has been welcomed by Sabrina Battiston, Community Lead for Women in DevOps. “We are looking forward to taking ownership of DevOps practices and taking [these findings] to their respective organizations,” said Battiston.

Last week Blue Planet, an independent network software division of equipment maker Ciena, outlined the dilemma for telcos in a briefing to journalists. The race to create 5G networks forces mobile network operators to ditch their legacy static inventory and adopt dynamic inventory systems, said Kailem Anderson, VP of Portfolio and Engineering at Blue Planet. Moving to the rapid tempo of 5G, edge and multiple clouds involves adopting all kinds of new code, said Anderson.

To the Future! Predictions for 2023 in the Telco Industry – eBook from Comarch


The future is a difficult topic to study. In the telecommunications industry especially, it is difficult to predict what will be on top in the coming months. It takes deep knowledge and years of professional experience to determine the shape of upcoming changes. Comarch has been supporting telecoms companies with its experience for years, and helps in the efficient implementation of changes.

Comarch has prepared a free eBook with 11 predictions for 2023. Experts analyzed the current shape of the market, collected data on new technologies, and shared their observations. The result is a book that is a guide to the telco world of the future. Deepening relationships based on 5G, placing data centers in space, and using AI in sales departments are just some of the many changes that await us in the coming months.

Sustainable development stands out among the trends. Many telecoms companies have slowly started to implement solutions in accordance with this idea, but are you sure all the actions taken are on target? How can you make green choices without falling into the greenwashing trap?

Nor can OSS systems be overlooked; they should become more mobile and keep up with cultural changes. Nowadays, most things can be done from a smartphone, a trend that will continue to grow in 2023.

Want to be prepared for all the challenges that await your business? Download the free eBook and look into the future.
