Notes from Nell Watson
The potential threat from rogue AI has been extensively discussed in the media for decades. More recently, luminaries such as Hawking and Musk have described AI safety as the most pressing problem of the 21st century.
Although we are far from producing Artificial General Intelligence (AI with functional intelligence equal to or greater than a human's), lesser intelligences (autonomous systems) are today interacting with us ever more closely in our daily lives.
From Siri to self-driving vehicles and marketing bots, autonomous systems are becoming indispensable tools in daily life. They are already deployed in the world of business, scheduling meetings and facilitating commerce, as well as in potential life-and-death situations on the road.
Any agent that interfaces with legal and contractual affairs needs to be explicitly above board, acting in accordance with generally accepted business ethics, common customs, and best practices. Only once this information layer becomes available can machine assistants be trusted to handle sensitive, nuanced, or potentially high-liability tasks with any autonomy.
To take on serious roles in our society, AI systems need to function according to values that align appropriately with human needs and objectives. Any activity involving human and machine interaction or collaboration will require a range of value-alignment methods.
At present, other than OpenEth's prototype, there is no obvious way to implement ethical rules in a form that machines can understand and apply to govern their operations across a range of situations.
Recent developments in AI, including Bayesian probabilistic learning, offer a glimpse of a new generation of AI that is able to conceptualise in a way previously impossible. This heralds the first generation of AI assistants that can learn about our world, and the people in it, in a manner similar to how human beings learn.
This ability to learn from a few examples, whilst conceptualising discrete 'ideas', means that an era of truly cognitive machines is coming, one much more sophisticated than the intuitive forms of machine intelligence born from deep neural nets. Northwestern University's CogSketch can now solve Raven's Progressive Matrices, an intelligence test of visual, analogical, and relational reasoning, better than the average American.
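To make the idea of learning a concept from a handful of examples concrete, here is a minimal, purely illustrative sketch of Bayesian concept learning in the spirit of Tenenbaum's "number game". The hypothesis space, the uniform prior, and the "size principle" likelihood are assumptions for illustration, not a description of any system mentioned above.

```python
# Hypothetical sketch of Bayesian concept learning from few examples.
# A learner entertains several candidate concepts (hypotheses) and
# weighs them by how specifically they predict the observed examples.

from fractions import Fraction

# Assumed hypothesis space: candidate concepts over the numbers 1..100.
hypotheses = {
    "even numbers":     [n for n in range(1, 101) if n % 2 == 0],
    "powers of two":    [2 ** k for k in range(1, 7)],      # 2..64
    "multiples of ten": [n for n in range(10, 101, 10)],
}

def posterior(examples):
    """Score each hypothesis via the 'size principle': a smaller concept
    that still contains every example is exponentially more likely to
    have generated them than a broad one."""
    scores = {}
    for name, extension in hypotheses.items():
        if all(x in extension for x in examples):
            # Probability of drawing each example uniformly from the concept.
            scores[name] = Fraction(1, len(extension)) ** len(examples)
        else:
            scores[name] = Fraction(0)  # concept ruled out by the data
    total = sum(scores.values())
    return {name: s / total for name, s in scores.items()} if total else scores

# Just three examples (16, 8, 2) already concentrate belief on
# "powers of two", even though "even numbers" also fits them.
print(posterior([16, 8, 2]))
```

The key behaviour is that confidence sharpens after very few observations, which is the contrast the paragraph draws with data-hungry deep neural nets.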
Many of us have experienced our children asking very difficult questions about life, existence, and the various assumptions that in aggregate form modern civilisation. Humanity must prepare itself for the tough task of being asked similar questions by increasingly intelligent machines.