
Ulster University researchers have developed the world’s first biologically motivated computational model that can quantify decision uncertainty and explain its effects on change-of-mind during decision-making. This could potentially be used by developers to create self-aware machines.

At the recent World Economic Forum, at least 20 of the 56 top technology pioneers were companies using some form of artificial intelligence (A.I.) to tackle challenges in fields such as advertising technology, smart cities, clean tech, supply chain, manufacturing, cybersecurity, healthcare, autonomous vehicles, and drones. A.I. systems combine machine learning techniques with powerful computational resources to build predictive models from large, complex datasets. Critically, however, they lack a key component that is essential to human intelligence and effective decision-making: self-awareness.

At the Intelligent Systems Research Centre (ISRC) at Ulster University’s Magee campus, pioneering research on biologically inspired algorithms has been advancing beyond standard A.I. algorithms. In particular, this exciting new work in Computational Neuroscience has shown for the first time that neural network models can be equipped with metacognition, or self-awareness of their own actions and choices. The computer model can not only mimic brain activity observed in humans and some animals, but also replicate change-of-mind and error-correction behaviour, which require “on-the-fly” metacognitive processing.
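For readers curious how decision confidence might be monitored “on the fly”, the sketch below illustrates the general idea with a deliberately simplified two-accumulator race model in Python. It is not the published neural circuit model from the paper; the parameter values, the confidence read-out (the gap between the two accumulators), and the change-of-mind rule are illustrative assumptions only.

import numpy as np

rng = np.random.default_rng(0)

def simulate_trial(coherence=0.05, n_steps=600, dt=1.0,
                   noise=0.35, threshold=12.0):
    """Race between two evidence accumulators for choices A and B.

    Confidence is read out as the gap between the two accumulators;
    a change-of-mind is flagged when the initially winning accumulator
    is overtaken during continued integration under low confidence.
    (Illustrative sketch only, not the authors' published model.)
    """
    x = np.zeros(2)                           # accumulated evidence for A, B
    drift = np.array([coherence, -coherence])
    first_choice = None
    confidence = 0.0

    for t in range(n_steps):
        x += drift * dt + noise * rng.standard_normal(2) * np.sqrt(dt)
        x = np.maximum(x, 0.0)                # activity cannot be negative

        # Uncertainty is high when the two accumulators are close together.
        confidence = abs(x[0] - x[1]) / threshold

        if first_choice is None and x.max() >= threshold:
            first_choice = int(np.argmax(x))  # initial commitment

        # After committing, keep integrating; if the other option
        # overtakes under low confidence, record a change of mind.
        if first_choice is not None:
            current_leader = int(np.argmax(x))
            if current_leader != first_choice and confidence < 0.3:
                return first_choice, current_leader, confidence, True

    final = int(np.argmax(x)) if first_choice is None else first_choice
    return first_choice, final, confidence, False

changes = sum(simulate_trial()[3] for _ in range(200))
print(f"change-of-mind trials: {changes} / 200")

In this toy version, low confidence (a small gap between the competing accumulators) is what permits a reversal of the initial choice, which is the intuition behind linking decision uncertainty to change-of-mind behaviour.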

This research has recently been published in the prestigious journal Nature Communications.

The senior author of the research, Dr. KongFatt Wong-Lin, commented:

“Our research has revealed the plausible brain circuit mechanisms underlying how we calculate decision uncertainty, which could in turn influence or bias our actions, such as change-of-mind. Further, given the wide applications of artificial neural networks in A.I., we are perhaps closer to creating self-aware machines than we previously thought. Real-time monitoring of decision confidence in artificial neural networks could also potentially allow better interpretability of the decisions and actions made by these algorithms, thereby leading to more responsible and trustworthy A.I.”

Mr. Nadim Atiya, lead author of the paper and a PhD researcher at the ISRC, added:

“Our research work could also form the basis for understanding brain disorders such as obsessive-compulsive disorder (OCD) and addiction, in which metacognitive abilities are impaired, linking Computational Neuroscience to Psychiatry, a nascent research area named Computational Psychiatry.”

Dr. Wong-Lin further outlined ongoing work:

“We are now working with cognitive scientists and brain scientists to further develop our computer model, while creating conscious machines that are self-aware of their actions and decisions, consequently making A.I. and machines much more intelligent and interpretable.”

Other contributors to the research are Prof. Girijesh Prasad and Dr. Iñaki Rañó, both from the ISRC. Dr. Rañó is currently at the University of Southern Denmark.