
    If I’d had longer, it would have been shorter…


    BT’s Neil McRae on why simpler coding is undermining Moore’s Law and how to fix it

    In a world where low code and no code is the mantra, I was intrigued to hear Neil McRae, BT’s Chief Architect, mention an MIT paper concerning Moore’s Law, applications and silicon in his most recent keynote at Mobile Europe’s Technology and Strategy Conference. I asked him to elaborate.

    Here’s Neil in his own [lightly edited] words:

    I’m a kind of a silicon geek, an electronics geek. When I was kind of four years old, if I wasn’t writing code, I was burning myself with a soldering iron… I am fascinated by how we build integrated circuits; as you know, I’m a big space fan, and that’s where a lot of silicon development started.

    But we’re seeing that it’s getting harder to achieve the things Moore’s Law says, which is that over a period of time we double the number of transistors in chips. When you’re looking at it from a generic point of view, where you’re trying to capture things on a chip that are going to serve a really wide base of customers… it’s getting harder to add value. If you look at most of the code out there that was written in, say, the last five years, it’s written in languages that are not CPU friendly.

    I’m a C programmer by trade; it’s what I grew up on and I still try to write most of what I do in C, which kind of scares the hell out of a lot of people, but I know that I’m going to get a really well-executed application. We have a lot more of what I call simplified languages now that do great, great things, but they cause us to use an immense amount of CPU capacity, and the MIT paper points out two things.
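
    [An aside from me, not Neil: here’s a rough C sketch of the cost he’s describing. The “boxed” version mimics how many simplified languages represent numbers internally, with a heap allocation, a type tag and a pointer chase per value; the names and sizes are mine, purely for illustration.]

        /* Same sum, two representations. Build with e.g. gcc -O2 sum.c */
        #include <stdio.h>
        #include <stdlib.h>
        #include <time.h>

        #define N    1000000   /* values to sum */
        #define REPS 100       /* repeat so the timing is visible */

        /* A tagged, heap-allocated value, roughly how a dynamic
         * runtime boxes a number. */
        typedef struct { int tag; double payload; } Box;

        int main(void) {
            double *flat  = malloc(N * sizeof *flat);   /* one flat array */
            Box   **boxed = malloc(N * sizeof *boxed);  /* one pointer per value */
            for (int i = 0; i < N; i++) {               /* error checks omitted */
                flat[i] = (double)i;
                boxed[i] = malloc(sizeof **boxed);
                boxed[i]->tag = 0;
                boxed[i]->payload = (double)i;
            }

            double s1 = 0.0, s2 = 0.0;
            clock_t t0 = clock();
            for (int r = 0; r < REPS; r++)              /* streams through cache */
                for (int i = 0; i < N; i++) s1 += flat[i];
            clock_t t1 = clock();
            for (int r = 0; r < REPS; r++)              /* pointer chase per element */
                for (int i = 0; i < N; i++) s2 += boxed[i]->payload;
            clock_t t2 = clock();

            printf("flat: %.3fs  boxed: %.3fs  (sums %.0f / %.0f)\n",
                   (double)(t1 - t0) / CLOCKS_PER_SEC,
                   (double)(t2 - t1) / CLOCKS_PER_SEC, s1, s2);
            return 0;
        }

    [The boxed loop typically runs noticeably slower for exactly the same answer; the precise ratio varies by machine.]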

    One, if we optimise the silicon for specific use cases, then Moore’s Law will go on for a long time. On the flip side, if we get better at writing software, Moore’s Law will also go on for a long time. Platforms are built using these less efficient programming languages with people expecting the next chipset to give them a performance boost. The likelihood is that it isn’t going to happen.

    Looking at how you’ve written your code, looking at the algorithms you use and optimising them is going to become much more important in the future. And as someone who has long held this belief, it was really exciting to see this paper talk about it. We see some of this today; we have optimised CPUs going into smartphones, and in Open RAN it’s not all about software.
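
    [Another aside: a minimal sketch of the kind of algorithmic rework Neil means, on a made-up task of spotting a duplicate in an array. Both functions return the same answer; the second turns n-squared comparisons into roughly n log n work.]

        /* Same question, two algorithms. */
        #include <stdio.h>
        #include <stdlib.h>
        #include <string.h>

        /* Naive: compare every pair, O(n^2). */
        static int has_dup_naive(const int *a, size_t n) {
            for (size_t i = 0; i < n; i++)
                for (size_t j = i + 1; j < n; j++)
                    if (a[i] == a[j]) return 1;
            return 0;
        }

        static int cmp_int(const void *p, const void *q) {
            int x = *(const int *)p, y = *(const int *)q;
            return (x > y) - (x < y);
        }

        /* Reworked: sort a copy, then any duplicates sit next to
         * each other, O(n log n) overall. */
        static int has_dup_sorted(const int *a, size_t n) {
            int *copy = malloc(n * sizeof *copy);
            memcpy(copy, a, n * sizeof *copy);
            qsort(copy, n, sizeof *copy, cmp_int);
            int dup = 0;
            for (size_t i = 1; i < n && !dup; i++)
                dup = (copy[i] == copy[i - 1]);
            free(copy);
            return dup;
        }

        int main(void) {
            int data[] = {7, 3, 9, 1, 3, 8};
            size_t n = sizeof data / sizeof *data;
            printf("naive: %d  sorted: %d\n",
                   has_dup_naive(data, n), has_dup_sorted(data, n));
            return 0;
        }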

    We’ve got accelerators that are unique to Open RAN going into some of the bigger radios and digital signal processors, and coherent optical is one of the most CPU-intensive workloads in the network. So we see some of this in telco already, but I think we’re going to see it in many other spaces across the technology landscape. My take, and what the paper points to, is that we’re going to see this more optimised silicon everywhere.

    We have many amazing programmers in the world creating amazing applications, but we’ve abstracted away the hardware and its details so much that the person programming perhaps doesn’t understand the impact of what they’re writing. If you write something badly, the CPU makes up for it with more and more processing.
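
    [One more illustration of what hides behind abstraction, assuming nothing about BT’s code: two loops that compute the same sum over the same matrix, where only the traversal order differs. C lays rows out contiguously in memory, so the column-order loop touches a new cache line on almost every access and the hardware quietly makes up the difference.]

        /* Same sum, two traversal orders. Build with e.g. gcc -O2 cache.c */
        #include <stdio.h>
        #include <time.h>

        #define DIM 4096

        int main(void) {
            static double m[DIM][DIM];      /* static: too big for the stack */
            for (int i = 0; i < DIM; i++)
                for (int j = 0; j < DIM; j++)
                    m[i][j] = 1.0;

            clock_t t0 = clock();
            double row_sum = 0.0;
            for (int i = 0; i < DIM; i++)   /* row order: cache friendly */
                for (int j = 0; j < DIM; j++)
                    row_sum += m[i][j];
            clock_t t1 = clock();

            double col_sum = 0.0;
            for (int j = 0; j < DIM; j++)   /* column order: cache hostile */
                for (int i = 0; i < DIM; i++)
                    col_sum += m[i][j];
            clock_t t2 = clock();

            printf("row order: %.3fs  column order: %.3fs  (sums %.0f / %.0f)\n",
                   (double)(t1 - t0) / CLOCKS_PER_SEC,
                   (double)(t2 - t1) / CLOCKS_PER_SEC, row_sum, col_sum);
            return 0;
        }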

    There are a number of things happening when the CPU is getting hot: you’re using more energy and more cooling, in a world where energy and cooling are becoming ever more important. I’m not sure the way we’ve historically created some of these applications is the future. We need to get smarter about that. The good news is that if we are smarter about it, there is a massive prize in terms of fewer CPU cycles, less energy consumption and many other things. These are things we’ve got in mind at BT too.

    I found this so interesting because, while I don’t know anything about coding, I do know that when you’re writing, it takes a lot longer to write shorter and more precisely. It sounds like the same applies to code.