
    GenAI’s missing link means it cannot reason – and millions of jobs are safe


    Dr Richard Windsor of Radio Free Mobile slices through the hype, highlighting the importance of causal understanding, which LLMs lack

    In his latest blog, Dr Richard Windsor, Founder of Radio Free Mobile (RFM), is not impressed by Meta and OpenAI’s claims that their next models will be able to reason.

    If the claims were true, it would be a huge step towards “super-intelligent machines”. As it is, he reckons the models will merely simulate reasoning, like all the models that have gone before, which is not the same thing at all.

    Causation vs correlation

    For the last six years RFM Research has argued that the main limitation of AI systems is that they are based on deep learning and have no causal understanding (perception of cause and effect).

    Rather, they are all sophisticated pattern recognition systems that can identify correlation but cannot decide whether a relationship is causal or simply correlated.

    This flaw affects all kinds of AI systems, from the simplest neural networks to the biggest large language models (LLMs).

    Despite this, many AI players have claimed that the current crop of LLMs has the power to reason, and now there is much fanfare about their successors having that capacity.

    The Financial Times (subscription needed) reported on the claims for the next generation of AI engines. Dr Windsor thinks Meta’s commentary cited in the article is more realistic than OpenAI’s because it expresses aspiration.

    Lies, damn lies and statistics?

    On the other hand, he thinks OpenAI seems to imply that it can implement reasoning via a purely statistics-based system, which he considers unlikely “as the only way I know of achieving reasoning in computer systems is to use the rules-based software that has been around for years.

    “The reason why a cheap pocket calculator is better at maths than a multi-billion dollar LLM is because it is programmed with the rules of maths whereas the LLM uses statistics and probability to achieve its results.

    “In short, software can reason but it can’t learn whereas deep learning can learn but it can’t reason.”
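    To make the calculator-versus-LLM contrast concrete, here is a minimal, purely illustrative Python sketch (not Dr Windsor’s code, and not a real LLM): a rule-based “calculator” that is always right but cannot learn, next to a toy “statistical guesser” that only repeats the answer it has seen most often and so can be confidently wrong or simply stuck on anything new.

    ```python
    # Illustrative toy contrast: rules vs learned statistics (hypothetical example).
    from collections import Counter

    def calculator(a: int, b: int) -> int:
        # Rule-based: applies the rules of arithmetic, always correct, never learns.
        return a + b

    class StatisticalGuesser:
        # Learns from examples, but has no notion of *why* an answer is right.
        def __init__(self) -> None:
            self.seen: dict[str, Counter] = {}

        def train(self, prompt: str, answer: int) -> None:
            self.seen.setdefault(prompt, Counter())[answer] += 1

        def predict(self, prompt: str) -> int | None:
            counts = self.seen.get(prompt)
            return counts.most_common(1)[0][0] if counts else None

    guesser = StatisticalGuesser()
    guesser.train("2+2", 4)
    guesser.train("2+2", 5)   # noisy training data skews the learned "belief"
    guesser.train("2+2", 5)

    print(calculator(2, 2))        # 4, by rule
    print(guesser.predict("2+2"))  # 5, by majority of what it has seen
    print(guesser.predict("3+3"))  # None: no pattern seen, and no rule to fall back on
    ```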

    LLM + software =

    If we accept this statement as being true, then, Dr Windsor argues, “it stands to reason that if one were to combine software with an LLM, this would represent an effective and efficient method of implementing reasoning in these systems.”

    In case you were wondering, this combination is referred to as neuro-symbolic AI, Dr Windsor explains – “symbolic is a fancy term for software” – and it “was a relatively active area of AI research 4 years ago (see here) but it has been pretty quiet since all the focus moved to LLMs”.
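    As a rough sketch of what that neuro-symbolic pattern can look like in practice, the hypothetical Python below lets a stand-in for the learned model (the placeholder function llm_extract_expression, which is assumed, not a real API) translate free-form text into a formal expression, while a rules-based engine does the actual calculation.

    ```python
    # Hypothetical neuro-symbolic sketch: learned component translates,
    # rules-based component reasons. Illustrative only.
    import ast
    import operator

    OPS = {ast.Add: operator.add, ast.Sub: operator.sub,
           ast.Mult: operator.mul, ast.Div: operator.truediv}

    def symbolic_eval(expression: str) -> float:
        """Rules-based arithmetic: parse the expression and apply the rules of maths."""
        def walk(node: ast.AST) -> float:
            if isinstance(node, ast.BinOp):
                return OPS[type(node.op)](walk(node.left), walk(node.right))
            if isinstance(node, ast.Constant):
                return node.value
            raise ValueError("unsupported expression")
        return walk(ast.parse(expression, mode="eval").body)

    def llm_extract_expression(question: str) -> str:
        # Placeholder for the statistical part: a real system would use an LLM
        # to turn natural language into an expression the symbolic engine accepts.
        return question.lower().replace("what is", "").replace("?", "").strip()

    def answer(question: str) -> float:
        # Neuro-symbolic loop: the model translates, the symbolic engine calculates.
        return symbolic_eval(llm_extract_expression(question))

    print(answer("What is 12*7+3?"))  # 87: the maths comes from rules, not statistics
    ```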

    So, “Unless OpenAI is referring to using software in its LLMs to provide the reasoning, then I am certain that GPT-5, Llama 3 and so on will be as incapable of reasoning as all of their predecessors,” the doctor says.

    He agrees these systems are “very good at simulating reasoning”, but when given real empirical reasoning tests, “they fail and fail convincingly”.

    Not the end of civilisation

    He concludes that “despite the chatter and the hyperbole, I continue to think that we remain as far away from super-intelligent machines and artificial general intelligence as we have ever been”.

    This is important for a number of reasons. Perhaps most importantly for the greatest number of people, it means that “hundreds of millions of jobs are safe for the foreseeable future and that the robots are not coming to kill us yet”.

    “We are in an AI bubble and at some point, everyone will realise what this new branch of AI can realistically do and what is just noise,” Dr Windsor says.

    See also AI and reality checks – why some big numbers don’t add up