
    AI and reality checks – why some big numbers don’t add up


    As Sam Altman strives to raise up to $7trn in his bid to reshape the world, cooler heads have some sound advice, including for telecom

    Last Friday’s Wall Street Journal reported that Sam Altman, co-founder and CEO of OpenAI, is in the United Arab Emirates as part of his quest to raise up to $7 trillion. Altman’s company created ChatGPT and now he wants to reshape the chip and the AI industries, freeing them of constraints such as the shortage of silicon that can run large language models (LLMs).

According to The Information [subscription needed], this reshaping and upscaling is needed to run algorithms that could control, for example, users’ smartphones and other devices to carry out automated tasks for individuals and corporations.

Dr Richard Windsor, founder and owner of Radio Free Mobile, is keen to insert some reality into this artificial universe. In a recent blog post he notes, “Mr Altman’s agenda continues to defy gravity and reality”. He suggests Altman’s aims are as “outlandish” as predictions for the internet in 2000 and autonomous driving in 2017.

    The odd trillion between friends

On the subject of raising $6 trillion or so, Windsor points out that the combined capital expenditure of the entire semiconductor and data centre industries is around $275 billion per year. Put another way, the sums Altman is seeking to raise are roughly equivalent to 22 years’ worth of those industries’ capex.

    Then there’s the matter of returns on investment. Those who invest in start-ups expect much higher returns due to the high level of risk. Windsor says a venture capitalist would usually look for a return of 10 to 20 times their early-stage investment.

In which case, Altman would need to be promising returns in the order of $60 trillion, which is “1.5x the entire printed money supply of the world, 25% of the wealth of the whole planet and 10x the market cap of Apple and Microsoft combined,” according to Windsor. In other words, not remotely viable.
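As a rough sanity check, the back-of-envelope arithmetic behind those figures can be reproduced directly. The short Python sketch below is purely illustrative: it uses only the round numbers quoted above (a roughly $6 trillion raise, around $275 billion of annual industry capex and the low end of the 10 to 20 times venture return range), not independent estimates. Dividing the raise by annual capex gives the 22-year figure, and applying even the low 10x multiple gives the $60 trillion return Windsor refers to.

```python
# Back-of-envelope check of the figures quoted above.
# All inputs are the article's round numbers, not independent estimates.

raise_target = 6e12      # ~$6 trillion reportedly being sought
annual_capex = 275e9     # ~$275bn combined semiconductor + data centre capex per year
vc_multiple = 10         # low end of the 10-20x early-stage return range

years_of_capex = raise_target / annual_capex      # ~21.8, i.e. roughly 22 years
required_return = raise_target * vc_multiple      # ~$60 trillion

print(f"Years of industry capex: {years_of_capex:.1f}")
print(f"Implied return at 10x:   ${required_return / 1e12:.0f} trillion")
```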

    Controlling devices

Windsor has a similarly deflating effect on Altman’s ambitions for AI to control devices and automate daily digital lifestyle and corporate tasks. Altman thinks this can be accomplished through a “super assistant” that resides on devices.

For example, AI could choose an individual’s photo of the day and post it on social media channels with a commentary, without the person doing anything.

    Windsor comments, “This would be feasible if the large language models (LLMs) actually understood what it is that they are doing but the reality is that they are simply sophisticated pattern recognition systems and nothing more.

    “Furthermore, these machines have been shown to leak data and giving them unfettered access to personal and corporate systems and accounts is a gold mine of a hack waiting to happen.

    “These machines are unable to draw pictures of rooms without elephants or tell the time properly or reason anything that is outside of what they have been explicitly taught.

    “Consequently, I think the idea that rational humans and corporations are going to be willing to put them in charge of their digital lives and corporate operations does not hold water.”

    The trend will be smaller

    Interestingly, Richard Benjamins, Chief Responsible AI Officer at Telefonica, recently said in his keynote at Mobile Europe’s virtual 5G and Beyond event, “Personally, in my strategic opinion, I don’t think it makes sense [for] one telecommunications company to build a large language model similar to what OpenAI or Meta or Amazon or Microsoft are doing.”

He acknowledged that one option might be for the sector to build its own open-source LLM, but thinks smaller models will win out: “Whereas frankly, the big ones now have large language models that cover any area, I’m pretty sure that the trend will be ‘let’s make them smaller’, so they consume less energy, it’s cheaper to train them, it’s cheaper to maintain them, and ‘let them focus on the telco sector, media sector, legal sector, public administration or whatever’.

“I think those things will happen in the near future…what I would suggest at the moment…is just explore. Try whatever you can to get as much experience as you can. But don’t invest – bet on one thing specifically because in six months, the world may be completely different. There will be a time when things become more stable. Then it’s time to standardise, to make a really strategic decision about how to move forward.”


    LLMs in telecoms

In a new report, Juniper Research predicts that increased use of cellular networks by enterprises will drive investment in AI for applications like smart manufacturing and, yes, autonomous vehicles. Specifically, investment in AI will be needed to automate key network processes.

Such use cases need high throughput, low latency and geographical coverage, while operators look to maximise network efficiency and reduce operational expenditure, Juniper says. It urges telcos to speed up the incorporation of AI into core networks.

    The report reckons optimising network performance and network security will be essential going forward, accounting for more than 50% of overall global operator spend on AI by 2028.

    Lower expectations, lower returns

All of which sounds more grounded and sensible compared with Altman’s boil-the-ocean approach. Windsor points out that to secure the returns early investors will seek, huge productivity gains for individuals and corporations, as outlined above, would be required.

    Windsor concludes, “The problem is that the hype has gotten to such a level that this is now the degree that one needs to go to be seen as new or different as opposed to more of the same”.

Also, as the number of services based on LLMs grows, the price “will fall precipitously”, and once investors fail to achieve the returns they expected, reality will assert itself. But for now, we’re at peak AI.

    Windsor stresses that LLMs – unlike during the autonomous car bubble – can generate good revenues and profits now: “The key here is just that LLMs will generate much less of both than anyone thinks”.

    The machines cannot reason but they can and do hallucinate and “make obvious mistakes at a frightening rate obviating them from most of the use cases currently being touted”.

Windsor ends, “I suspect that one will be able to acquire AI engineers and assets at much better prices over the next 12 – 18 months”.