At its AX (Analytics Experience) event in Milan, SAS COO and CTO Oliver Schabenberger laid out a differentiation between forms of AI. His reasoning was straightforward, constructive and much less drama-inducing than Elon Musk’s many attacks on AI as the great threat to humanity.
He started, engagingly, by confessing that he was like everybody else these days, an expert on AI. This, he indicated, was because, like everybody else, he has ‘read a book’, ‘can apply intelligence’ and is waiting to download the ‘AI App’ (whatever that might be or mean).
He continued by asserting that AI is not hype. It is already in use in many organisations, with SAS customers using SAS AI and analytics tools. That said, he proceeded to differentiate between AGI and ANI.
Artificial General Intelligence (AGI), for him, is the pursuit of making machines behave, think and be like humans. It is to attempt to realise human intelligence in hardware and software.
AGI is not about applying intelligence to specific problems. Rather it is about creating a machine intelligence capable of doing anything, including learning and developing, much as a person can do. Probably to Mr Musk’s relief, Dr Schabenberger opined that we are far, far from AGI.
To achieve AGI will require a step change that is decades away. That step change will probably require a fusion of multiple sciences – from biology and physiology to chemistry, physics, social sciences, medicine and neuroscience (to mention only a few).
The basic problem is simple. Today we still do not understand how the human brain works. Furthermore, there is no reason to think we are anywhere near to reaching an understanding. AGI can only come from radically new ways of thinking, ones which build on a comprehension of brain processes which simply does not exist today.
The ‘threat’, therefore, of AGI, of machines learning from each other and determining their own futures (possibly at the expense of an inconvenient mankind) is not imminent. The futurists predicting doom and gloom with the eclipse of humanity can fantasise in science or dystopian fiction. But that does not make AGI any more real.
In contrast, ANI is Artificial Narrow Intelligence. ANI is about algorithms, about rules. These rules embody a narrow understanding of a specific problem to which automation can be applied.
The key to ANI is that humans are its gestator. Humans, via their systems, create the huge stores of data upon which ANI depends in order to operate and become useful. Equally, the creation of these algorithms does not come from machines. It comes from people seeking greater understanding of what lies in the vast datasets which exist today.
While ANI can refine its understanding and outputs – machine learning, for example – this is nowhere near AGI. In essence, ANI is not self-sufficient in the ways feared by the dystopians.
A different way of considering ANI is to envisage it as excellent for programmatic-based transformation. This takes morasses of data, finds patterns (according to an algorithm or set of rules) and makes this available for use. Well known examples include medicine and fraud detection.
Hospitals have huge amounts of data collected about patients. But this is scattered across many sources and rarely does anyone look for patterns across multiple patients which might predict, earlier than is possible today, conditions like imminent heart attacks, oncoming sepsis or whatever. AX introduced a Dr Vijlbrief of UMC Utrecht who is working with SAS to catch sepsis in premature babies before it develops into a full-blown assault – by absorbing all the patient data collected and then looking for any patterns which might predict infection before it catches hold.
Similarly, in fraud detection, the art is in applying criteria to massive datasets to identify common characteristics across previous fraud instances. These criteria can then be applied to prevent future fraudulent transactions from completing.
These are just two examples of ANI in use today. By definition they are narrow in scope, which is not to say the consequences are narrow. Sky, the ‘broadcaster’, is using ANI to improve customer retention and win-back rates. This contributes significantly to the relevance of marketing activities, as well as bottom line profitability.
Enterprise Times: what does this mean
Besides establishing a way to differentiate forms of AI – AGI dispensing with mankind (not to be feared in the near or medium term) versus ANI’s more immediate applications – Dr Schabenberger discussed the impact on employment and people. He suggested AI would have an impact on near to 100% of jobs.
But, he argued, when executed well, jobs should be net beneficiaries. If ANI can take over the routine and tiresome, it leaves individuals with much more scope to focus on what rewards them and what creates value. In medicine, it may come to be that ANI will diagnose and prescribe what to do. But patients still need care, provided to people by other people.
Yet, Dr Schabenberger added one caution. Systems, especially ANI, have run far ahead of the ethical and moral infrastructure necessary to ensure AI is productive. One simple instance exemplifies this. Autonomous cars. Who decides whether a car swerves to avoid a child on a bicycle and instead collides with an invalid or retiree?
At present, analysts and developers are the ones ‘encoding’ the moral/ethical values. But these need agreement via some form of social consensus. If this does not happen, there will be – in Dr Schabenberger’s words – “unintended consequences (emerging) from AI algorithms” that have run ahead of people.
This is not AGI. It is imperfectly thought-through ANI, something which he – as SAS’s COO/CTO – worries about, and believes SAS customers should do so as well.