Page content

Within the Intelligent Systems Research Centre we pursue a variety of research areas, several of which are described below.


VISUALISE

VISUALISE is an EU-funded project to develop a refined understanding of retinal function in natural visual environments by examining the unique role that non-standard retinal ganglion cells play in dynamic visual processing.

VISUALISE will combine the efforts of physiologists, computational neuroscientists, neuromorphic electronic engineers, and roboticists, to build novel theoretical and hardware models of biological retinal ganglion cell types for dynamic vision applications such as robotic navigation or pursuit.

The VISUALISE project aims to solve a number of related problems previously ignored in existing retina processing models, namely:

  • Investigating the dynamics of non-standard retinal ganglion cell types under natural scenes.
  • Identifying the nonlinear transformations from natural scenes to non-standard type ganglion cell response and associated computational models, including the influence of latency on encoding.
  • Statistical analysis of retinal ganglion cell population coding response to natural stimuli including measuring how the cells interact.
  • Software and hardware emulation of an event-based bio-inspired artificial retina that captures the dynamics and adaptive nature of both individual neurons and neuronal populations and their precisely-timed and correlated spiking output.
  • Experimental study, application and evaluation of the bio-inspired artificial retina under challenging visual conditions in a robotic predator-prey scenario.
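
The event-based emulation described above can be illustrated with a toy model of a single event-driven pixel: like a dynamic vision sensor, it emits an ON or OFF event whenever the log-intensity at the pixel changes by a fixed threshold since the last event. This is a minimal sketch; the threshold value and interface are assumptions, not the project's design.

```python
import math

def pixel_events(intensities, threshold=0.2):
    """Emit DVS-style ON/OFF events when the log-intensity at one pixel
    changes by more than `threshold` since the last event.

    Returns a list of (sample_index, polarity) tuples, polarity +1/-1.
    Illustrative sketch only; parameter names are assumptions.
    """
    events = []
    ref = math.log(intensities[0])          # reference log-intensity
    for i, value in enumerate(intensities[1:], start=1):
        logv = math.log(value)
        while logv - ref >= threshold:      # brightness increased
            ref += threshold
            events.append((i, +1))
        while ref - logv >= threshold:      # brightness decreased
            ref -= threshold
            events.append((i, -1))
    return events

# A step up then a step down in brightness:
events = pixel_events([1.0, 1.0, 2.0, 2.0, 1.0])
```

Note that a constant input produces no events at all: like a biological retina, the sensor only signals change.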


The retina is an extension of the brain: it forms embryonically from neural tissue and is connected to the brain by the optic nerve. As the only source of visual information to the brain, and a uniquely accessible part of it, the retina is well suited to investigating neural coding.

The VISUALISE consortium will achieve its aims by (1) recording the activities of vertebrate retinal ganglion cells using multi-electrode arrays under dynamic natural stimulation, (2) analysing the functional response properties to expose new principles of spike encoding that bridge the gap between single cell and population information processing, (3) exploiting these principles in multi-scale mathematical models which permit efficient digital circuit implementations for a next generation of real-time event-based vision sensors, and (4) evaluating their effectiveness in a challenging predator-prey high-speed robot scenario.
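
The analysis in step (2) typically begins with reverse-correlation measures such as the spike-triggered average (STA), which estimates a cell's linear filter from its recorded spikes. The sketch below is a minimal one-dimensional version; the project's actual analyses of spatio-temporal natural stimuli are far richer.

```python
def spike_triggered_average(stimulus, spikes, window=3):
    """Estimate a cell's linear filter as the mean stimulus segment
    preceding each spike (the spike-triggered average, STA).

    stimulus -- one stimulus value per time bin
    spikes   -- time-bin indices at which the cell fired
    window   -- number of bins preceding each spike to average over
    """
    segments = [stimulus[t - window:t] for t in spikes if t >= window]
    if not segments:
        return [0.0] * window
    n = len(segments)
    return [sum(seg[i] for seg in segments) / n for i in range(window)]

# The cell below fires two bins after each stimulus pulse, so the STA
# recovers a filter peaked two bins before the spike:
sta = spike_triggered_average([0.0, 1.0, 0.0, 0.0, 1.0, 0.0, 0.0],
                              [3, 6], window=2)
```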

VISUALISE is a three-year project funded under the European Union Seventh Framework Programme (FP7-ICT-2011.9.11) under grant agreement No. 600954 ("VISUALISE"). The project commenced in April 2013.

For further information and contact details, please visit the project website.


Si elegans

Biological neural systems are powerful, robust and highly adaptive computational entities that outperform conventional computers in almost all aspects of sensory-motor integration. Despite dramatic progress in information technology, a large performance gap remains between artificial computational systems and brains in seemingly simple orientation and navigation tasks. In fact, no system exists that can faithfully reproduce the rich behavioural repertoire of the tiny worm Caenorhabditis elegans, which possesses one of the simplest nervous systems in nature: just 302 neurons and about 8,000 connections.

The Si elegans project aims at providing this missing link. We propose the development of a hardware-based computing framework that accurately mimics C. elegans in real time and enables complex and realistic behaviour to emerge through interaction with a rich, dynamic simulation of a natural or laboratory environment. We will replicate the nervous system of C. elegans on a highly parallel, modular, user-programmable, reconfigurable and scalable hardware architecture, virtually embody it for behavioural studies in a realistic virtual environment and provide the resulting computational platform through an open-access web portal to the scientific community for its peer-validation and use.

Several innovative key concepts will ensure accurate mimicry of the C. elegans nervous system architecture and function. Each of the 302 neurons will be represented by an individual field-programmable gate array (FPGA) module, each independently and dynamically programmable with a user-specific, parameterised neuronal response model, either through a user-friendly model submission and configuration facility or by selection from a library of pre-defined and tested neuron models. Pioneering interconnection schemes will allow dense module distribution and parallel, interference-free inter-neuron communication in 3D space. In a closed-loop feedback design, this hardware blueprint of the C. elegans nervous system will control a biophysically correct virtual representation of the nematode body in a virtual behavioural setting.

Instead of limiting its function and S&T impact by imposing only pre-made models, the Si elegans framework will be made available to the worldwide scientific community through an open-access web portal. It will feature an intuitive, user-friendly remote configuration interface for defining an unlimited number of neuron models and information-processing hypotheses for automatic FPGA hardware configuration. This peer-participation concept will not only ensure independent and unbiased functional validation of Si elegans, but will also permit the iterative optimisation of neuron models and an asymptotic approach towards a holistic reproduction and understanding of the complete set of C. elegans behaviours, and their underlying nervous system mechanisms, through a set of reverse-engineering tools.
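
As a software analogue of one such parameterised neuron module, the sketch below implements a leaky integrate-and-fire model whose parameters a user might set per module. Parameter names and values are illustrative assumptions, not Si elegans interfaces or its actual model library.

```python
class LIFNeuron:
    """Minimal leaky integrate-and-fire neuron, standing in for the
    parameterised response model a user might load onto one FPGA module.
    All parameter names here are illustrative, not Si elegans APIs."""

    def __init__(self, tau=10.0, v_rest=0.0, v_thresh=1.0, v_reset=0.0):
        self.tau, self.v_rest = tau, v_rest
        self.v_thresh, self.v_reset = v_thresh, v_reset
        self.v = v_rest                      # membrane potential

    def step(self, input_current, dt=1.0):
        """Advance one time step; return True if the neuron spikes."""
        # Euler integration of dv/dt = (v_rest - v)/tau + I
        self.v += dt * ((self.v_rest - self.v) / self.tau + input_current)
        if self.v >= self.v_thresh:
            self.v = self.v_reset
            return True
        return False

# Constant drive above threshold produces regular spiking:
neuron = LIFNeuron(tau=10.0, v_thresh=1.0)
spikes = [t for t in range(50) if neuron.step(0.15)]
```

On real hardware each module would run such an update rule continuously and in parallel, rather than in a sequential Python loop.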

While Si elegans restricts itself to the emulation of the C. elegans nervous system, the underlying design concepts have universal application. Si elegans will constitute a generalizable framework from which the universal working principles of nervous system function can be induced, and new scientific knowledge on higher brain function and behaviour can be generated. More importantly, it will lay the foundation for exploring and refining new neuromimetic computational concepts and will provide a blueprint for the design of biologically inspired, brain-like parallel processing hardware architectures that are orthogonal to current von Neumann-type machines.

Visit the project website for more information.


RUBICON

This project will create a self-learning robotic ecology, called RUBICON (Robotic UBIquitous COgnitive Network), consisting of a network of sensors, effectors and mobile robot devices.

Enabling robots to operate seamlessly as part of such ecologies is an important challenge for robotics R&D, supporting applications such as ambient assisted living and security.

Current approaches rely heavily on models of the environment and on human configuration and supervision, and they lack the ability to adapt smoothly to evolving situations.

These limitations make these systems hard and costly to deploy and maintain in real world applications, as they must be tailored to the specific environment and constantly updated to suit changes in both the environments and in the applications where they are deployed.

A RUBICON ecology will be able to teach itself about its environment and learn to improve the way it carries out different tasks. The ecology will act as a persistent memory and source of intelligence for all its participants, and it will exploit the mobility and superior sensing capabilities of the robots to verify, and provide feedback on, its own performance.

As the nodes of a RUBICON ecology will mutually support one another’s learning, the ecology will identify, commission and fulfil tasks more effectively and efficiently.

The project builds on many years of experience across a world-leading consortium. It combines robotics, multi-agent systems, novelty detection, dynamic planning, statistical and computational neuroscience methods, efficient component & data abstraction, robot/WSN middleware and three robotic test-beds. Validation will take place using two application scenarios.
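
Among the techniques listed, novelty detection can be illustrated with a minimal online detector that flags sensor readings deviating strongly from everything seen so far. This sketch uses a z-score test over Welford's running statistics; the threshold is an assumed parameter, and RUBICON's actual learning methods are considerably richer.

```python
class NoveltyDetector:
    """Flag sensor readings that deviate strongly from what has been
    seen so far, using Welford's online mean/variance algorithm.
    A minimal sketch, not RUBICON's actual novelty-detection method."""

    def __init__(self, z_threshold=3.0):
        self.n, self.mean, self.m2 = 0, 0.0, 0.0
        self.z_threshold = z_threshold

    def observe(self, x):
        """Return True if x is novel, then fold it into the statistics."""
        novel = False
        if self.n >= 2:
            std = (self.m2 / (self.n - 1)) ** 0.5
            novel = std > 0 and abs(x - self.mean) / std > self.z_threshold
        # Welford's online update of mean and sum of squared deviations
        self.n += 1
        delta = x - self.mean
        self.mean += delta / self.n
        self.m2 += delta * (x - self.mean)
        return novel

detector = NoveltyDetector()
readings = [20.0, 20.5, 19.8, 20.2, 19.9, 35.0]  # last reading is anomalous
flags = [detector.observe(r) for r in readings]
```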


The project will reduce the amount of preparation and pre-programming that robotic and/or wireless sensor network (WSN) solutions require when they are deployed. In addition, RUBICON ecologies will reduce the need to maintain and re-configure already-deployed systems, so that changes in the requirements of such systems can be easily implemented and new components can be easily accommodated.

The relative intelligence and mobility of a robot, when compared to those of a typical wireless sensor node, means that WSN nodes embedded in a RUBICON ecology can learn about their environment and their domain application, through the ‘training’ that is provided by the robot. This means that the quality of service which is offered by WSNs can be significantly improved, without the need for extensive human involvement.
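
As a toy illustration of this robot-provided 'training', the sketch below has a sensor node learn a nearest-centroid labelling of its readings from ground truth supplied by a passing robot. All names, labels and the temperature-like readings are hypothetical, not RUBICON components.

```python
class TrainableSensorNode:
    """A WSN node that learns to label its readings from ground truth
    supplied by a passing robot (a nearest-centroid classifier).
    Hypothetical sketch; RUBICON's middleware and learning differ."""

    def __init__(self):
        self.sums = {}    # label -> running sum of training readings
        self.counts = {}  # label -> number of training examples

    def train(self, reading, label):
        """The robot provides the true label for a reading it verified."""
        self.sums[label] = self.sums.get(label, 0.0) + reading
        self.counts[label] = self.counts.get(label, 0) + 1

    def classify(self, reading):
        """Assign the label whose mean training reading is closest."""
        centroids = {lbl: self.sums[lbl] / self.counts[lbl]
                     for lbl in self.sums}
        return min(centroids, key=lambda lbl: abs(reading - centroids[lbl]))

# A robot passing through labels a few readings for the node:
node = TrainableSensorNode()
for reading, label in [(21.0, "room_empty"), (20.5, "room_empty"),
                       (24.0, "room_occupied"), (24.5, "room_occupied")]:
    node.train(reading, label)
```

After training, the node can label new readings on its own without further human or robot involvement.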

Robot Identification

Principal Investigator: Professor Ulrich Nehmzow
Funding Body: Leverhulme Trust

The purpose of the "Robot Identification" project is to automate and formalise the process of generating mobile robot control code, so that "standard" behaviours will no longer have to be programmed, but will be obtained through automatic processes.

There are three main components to the project:

  1. We will formalise and standardise the robot code development process by using system identification techniques (NARMAX modelling) to express the perception-action coupling mathematically. Currently, robot control code generation has little theoretical underpinning, and is largely based on the programmer's expertise and intuition.
  2. We will develop methods to analyse and validate the resulting sensor-motor couplings, using control theory, sensitivity analysis and statistical techniques to assess safety-critical issues. Currently, there are few established, formalised procedures for the assessment and analysis of robot control code, not least because there is no unified representation of code that allows the development of standardised analysis tools.
  3. We will develop model visualisation tools. As the complexity of robot control code increases, it becomes increasingly hard to "understand" the workings of a robot program. Even relatively simple robot control code is very hard to interpret qualitatively, "reading" thousands of lines of code is impossible, and standardised procedures for visualising the workings of robot code are required to address this issue.

    The fact that the sensor-motor mappings generated in this project have a uniform mathematical form (nonlinear polynomials or wavelets) greatly simplifies code analysis and visualisation.
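
The system-identification idea behind component (1) can be illustrated with a toy example: fitting a polynomial mapping from a sensor value to a motor command by least squares. Real NARMAX models also include lagged inputs, outputs and noise terms; this sketch shows only the polynomial regression step.

```python
def fit_polynomial_mapping(sensor, motor, degree=2):
    """Fit motor = sum_k c_k * sensor**k by least squares, a toy
    stand-in for NARMAX identification of a sensor-motor coupling.
    (Real NARMAX terms also include lagged outputs and noise terms.)"""
    # Design matrix of polynomial terms 1, s, s^2, ...
    X = [[s ** k for k in range(degree + 1)] for s in sensor]
    n = degree + 1
    # Normal equations: (X^T X) c = X^T y
    A = [[sum(r[i] * r[j] for r in X) for j in range(n)] for i in range(n)]
    b = [sum(r[i] * y for r, y in zip(X, motor)) for i in range(n)]
    for i in range(n):                      # forward elimination, pivoting
        p = max(range(i, n), key=lambda r: abs(A[r][i]))
        A[i], A[p] = A[p], A[i]
        b[i], b[p] = b[p], b[i]
        for r in range(i + 1, n):
            f = A[r][i] / A[i][i]
            for c in range(i, n):
                A[r][c] -= f * A[i][c]
            b[r] -= f * b[i]
    coeffs = [0.0] * n
    for i in reversed(range(n)):            # back substitution
        coeffs[i] = (b[i] - sum(A[i][j] * coeffs[j]
                                for j in range(i + 1, n))) / A[i][i]
    return coeffs

# Recover a known coupling motor = 0.5 + 2*s - 0.1*s^2 from clean data:
sensor = [0.0, 1.0, 2.0, 3.0, 4.0]
motor = [0.5 + 2 * s - 0.1 * s ** 2 for s in sensor]
coeffs = fit_polynomial_mapping(sensor, motor)
```

Because every mapping fitted this way has the same polynomial form, the analysis and visualisation tools described in components (2) and (3) can operate on a single uniform representation.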