Dr Kim (Kyllesbech) Larsen, CTIO and board member at T-Mobile Netherlands, talks about his approach to network automation, its role in the operator’s notable success so far, and his doubts about network slicing.
Dr Kim (Kyllesbech) Larsen has worked at the Deutsche Telekom group for many years, though not continuously, in a variety of technology roles and countries. He is a physicist-mathematician with a PhD in physics and is widely known as Dr Kim. He became CTIO of T-Mobile Netherlands in January 2019, after the operator acquired Tele2 Netherlands at the end of 2018. He is also a board member and an executive advisor on technology, economics and strategy.
This article originally appeared on FutureNet World and is reproduced here by kind permission.
Since Dr Kim joined T-Mobile Netherlands, the operator has increased its mobile market share from 25% in 2017 to 42% (based on the number of SIMs) in 2020 and become market leader. It entered the fixed market in 2016 with the acquisition of Thuis, strengthening its position through a strategic partnership with Open Dutch Fiber (owned by KKR and DTCP) in April 2021.
T-Mobile Netherlands was acquired by a consortium of Apax and Warburg Pincus in September 2021. Its new owners described it as “Europe’s fastest-growing mobile network”.
Motivation for automation
Dr Kim’s network automation journey began in 2014, when he and his counterparts across the industry started thinking about what future networks would look like as network functions virtualization (NFV) and software-defined networking (SDN) got underway.
He stresses that work on network automation is not because it’s “cool technology”, but “comes from a wish to continuously improve efficiency, handle complexity and to use automatic detection or machine learning (ML) algorithms in general, to improve the network’s operational excellence and customer experience, which is part of our Magenta DNA.”
No surprise then that after T-Mobile Netherlands acquired Tele2 and “a lot of legacy”, he quickly introduced anomaly detection loops to identify faulty modems and poor connection quality at customers’ homes. “Sometimes we could reset the modem, sometimes we asked the customer to do it, but we closed the loop,” he says.
Increasingly, anomaly detection is “part of how we catch network anomalies that otherwise would lead to a bad outcome for our customers. We are still far away from the self-detect endgame, but we have worked closely with the likes of Anodot to set down frameworks in our networks – mobile and fixed – that allow early detection of what we call emerging incidents,” Dr Kim adds.
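The closed loop described above – detect a degraded modem, then remediate automatically – can be sketched in a few lines. This is an illustrative stand-in, not the operator’s or Anodot’s actual tooling: the metric, window size, threshold and the `reset` action are all assumptions.

```python
# Illustrative closed-loop anomaly detection on a per-modem quality metric.
# All names (modem IDs, reset action, thresholds) are hypothetical.
from statistics import mean, stdev

def detect_anomalies(samples, window=24, z_threshold=3.0):
    """Flag samples deviating more than z_threshold sigma from the
    trailing window -- a simple stand-in for an ML detector."""
    anomalies = []
    for i in range(window, len(samples)):
        history = samples[i - window:i]
        mu, sigma = mean(history), stdev(history)
        if sigma > 0 and abs(samples[i] - mu) / sigma > z_threshold:
            anomalies.append(i)
    return anomalies

def close_the_loop(modem_id, samples):
    """On an emerging incident, trigger remediation (e.g. a remote
    reset) instead of waiting for a customer complaint."""
    if detect_anomalies(samples):
        return f"reset:{modem_id}"  # automated remediation step
    return f"ok:{modem_id}"         # nothing to do

# A stable signal with a sudden quality drop at the end:
readings = [30 + 0.2 * (i % 3) for i in range(30)] + [5.0]
print(close_the_loop("modem-42", readings))  # → reset:modem-42
```

In production the detector would run continuously per device, and the remediation path (reset vs customer contact) would branch on the incident type, as Dr Kim describes.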
Around 2015, Dr Kim and his cohort also began to realise that some aspects of cloud, already familiar in IT, would come into play in the network, and that the trend would expand and accelerate as 5G Standalone proliferated. Since 2018, T-Mobile Netherlands has been through an IT transformation, going from less than 10% of its application platforms cloudified to more than 90% in the cloud, although most are not containerised.
He explains, “The lift and shift of legacy network components into the cloud gives a boost from cloudification to some extent. The next step is moving to 5G Standalone and we will be going cloud native with all the automation frameworks there.”
Dr Kim thinks the importance of AI and ML is hyped at the expense of other aspects of automation: “I’ve been a big proponent of AI and ML in network operations but I also always want to put a damper on it because there are a lot of automation frameworks in the cloud-native environment that have nothing to do with AI and ML, which I think are sort of the cherry on the cake,” he says.
“You can combine all kinds of different platforms within the different containers, and you can keep it all in the same computing centre. From an efficiency perspective, it makes a lot of sense. It’s not like the old physical hardware…now you unbundle it, you take the proprietary hardware away and you put commercial off-the-shelf hardware there instead and it should work.”
Dr Kim thinks NFV was an essential step. For one thing, the core network technology has a lifespan of 10 to 15 years, but also, as he points out, “In 2014, if anybody had said we would want to use a cloud-native framework for our applications and network, a lot of people would’ve said, ‘It will never happen – it’s IT stuff, you cannot trust it’. This is because IT had a completely different view of service levels and availability than we had at Network then.
“Nowadays IT systems have to be 24×7 and within the cloud environment, and we started to understand there are other ways of solving availability and reliability, instead of duplicating hardware in multiple redundancies. You can try other, better ways of getting the same result.”
Not everyone is happy about moving telecoms networks to public clouds. Dr Kim comments, “The FBI, NASA and US defense run a lot of mission-critical applications on public clouds. I would say the hyperscalers – AWS, Azure, Google and the rest – have far more experience in and know-how about cybersecurity: their security is far thicker and more mature than many telcos’. We’ve always cared about it, but we need to care even more.”
He adds, “Most of our IT applications are running on AWS with some mission-critical ones on Azure and one or two on private cloud, and they run very well. We have the right service levels; we feel very comfortable about the security around us. They meet our contractual agreements, but in some instances, we run things in our own environment and certainly we keep customers’ data away from the public cloud.”
As he predicts, “In the beginning, we would not have put network functionality in a public cloud domain. We’d have tried to ring-fence that, probably within our own copy of a public cloud in our own environment, but the time may come when even that border is broken down and we trust hyperscalers’ public clouds even more than today.”
He concedes, “Engineers’ view is that ‘trust is good, but control is better’, so they like a top-down approach. This is why we see a lot of architectures with the ‘heavy-handed’ orchestrator on top of the network stack. Not being an engineer, I like to keep top-down control to a bare minimum and rely much more on layer-by-layer autonomy with APIs between them. Microservices can be taken up while working to keep orchestration local rather than having a ‘Big Guy’ orchestrator controlling everything from the top down. Autonomous layers are where we’re moving to over the next 10 to 20 years.”
Dr Kim is a fan of disaggregation, explaining, “The world will be much more dynamic in terms of adding functionality and applications to the cloud landscape that will support 5G Standalone, and then you will not be tied to one or a few suppliers.” He adds, “We say we want more independence from the traditional suppliers, but I think you can never be completely independent because there is a minimum set of applications and functionalities that makes more sense to get from one supplier, not from three or four or five.”
He is hopeful that many more “dynamic new companies will come on the scene to provide applications, piggybacking on hyperscalers’ public cloud infrastructure. We need them to help us get cloud network functionalities to work through new, innovative or cost-efficient standard applications and to help us close the loop across our network layers and telco stack.”
Contradiction in terms
Cloudification – the basis of 5G Standalone – is all about scale, yet as Dr Kim points out, “In the past, when we deployed a new core network, it was a Big Bang launch and all the customers were on it. The network had to work flawlessly, from the very beginning. 5G Standalone is super scalable [but] by design it allows you to start running small workloads, learn from that and develop it as we go along, such as working with different verticals.”
One aspect of 5G SA that has had a lot of airtime is network slicing. Dr Kim has issues with both the economics and the network resource planning aspects, despite his immense respect for one of its chief architects. He notes wryly, “My blog about 5G SA network slicing seems to have resonated with a lot of people but we will still have to use it.”
Dr Kim asks, “What are we trying to solve [with network slicing]? Our mobile networks are super versatile. The impression that a modern cellular network can only deliver ‘one make of black car’ is plain wrong and misleading. For example, I can differentiate speeds and have very sophisticated, rich QoS mechanisms at my disposal (which in my opinion are much more impactful in 5G SA) to achieve what we need to do: I can set up VPN-like connections without slicing.
“There are other ways to solve this. Imagine having thousands and thousands of slices serving different needs, maybe even in real time – in my opinion, you use your network resources less efficiently than if you optimised across the whole network.”
Dr Kim reflects that the ideas for the 3GPP standards “came out at a time when resources were scarce – when our networks could deliver a car in maybe one, two or three colours – and we believed that vertical segments would really be the huge revenue generator for 5G and ‘save our industry’. That might happen, but if you look at revenue share B2B versus revenue share B2C, I don’t think it will change that much. I truly believe we could have done this in other ways – it looks like a cool engineering thing to do.”
A note of caution
Dr Kim says, “The last thing I want to get across about automation is to ask what does it bring? In general, unless you are a big classical telco incumbent, I don’t expect automation to bring a lot of efficiencies, but it will bring cost avoidance, saving on manual effort. We want automation to manage the complexity that we are incurring more and more in a modern telco network which is increasingly heterogeneous in nature. We are operating different generations of mobile networks, home networks, campus networks, fixed broadband networks – and they are all becoming more and more integrated.
Similarly, on the transport side, architecturally, we want fixed and mobile to merge, allowing us to save cost and increase efficiency. Automation itself will not bring huge cost savings, according to the economic models I’ve been looking at. It improves how we deal with complexity and takes away some risk around skills, because if it works, we will need fewer critical, scarce engineering skills.”