Current projects in the Intelligent Systems Research Centre
At the ISRC we use world-class facilities and ideas to make our work pioneering.
Within the Intelligent Systems Research Centre, a variety of research areas are being studied.
Northern Ireland Functional Brain Mapping (NIFBM) Facility
The NIFBM houses the only brain imaging system anywhere in Ireland, and one of only nine in the whole of the UK, to use the recently developed brain imaging modality of magnetoencephalography (MEG) to measure brain activity. It is one of only 170 active MEG labs worldwide.
This facility is a joint investment of £5.3M from Invest Northern Ireland (INI) and Ulster University, and is located at the Intelligent Systems Research Centre (ISRC) on the Magee campus. It is equipped with the latest whole-head 306-channel Elekta Neuromag TRIUX MEG system.
This unique infrastructure is available to clinicians and researchers from within and outside Ulster University. Please don't hesitate to contact us if you have any questions or are interested in visiting or accessing the NIFBM facilities and support.
- MEG provides a direct measure of electrical activity in the brain.
- MEG has a very high temporal resolution in the order of milliseconds (ms).
- MEG also has an excellent spatial resolution and is capable of localizing sources with an accuracy of millimetres (mm).
- It is a non-invasive technique which does not require any contrast injection.
- It is comfortable for subjects/patients: unlike MRI, MEG does not produce any noise during scanning.
In clinical applications, MEG is used for the pre-surgical evaluation of brain tumours, arteriovenous malformations (AVMs) and epileptic foci.
In research, MEG is known to produce better localized responses in the brain, as a result of a better signal-to-noise ratio and a more straightforward biophysical formulation compared to EEG. It is widely used to study cognitive processes.
Although less explored, MEG can also be used for the development of brain-computer interfaces (BCIs), as an analogue to EEG applications. In this context, it can be used to inform EEG about effective biomarkers and to learn more about brain dynamics using more active stimulation tasks.
Our Magnetoencephalography (MEG) laboratory is equipped with the latest whole-head 306-channel Elekta Neuromag TRIUX MEG system. MEG is a modern non-invasive neurophysiological technique for measuring the magnetic fields generated by neuronal activity inside the brain.
The MEG activity is acquired by an array of superconducting quantum interference devices (SQUIDs) placed close to the scalp, capable of measuring magnetic fields on the femtotesla (fT) scale. Since the signal from the brain (~10⁻¹³ T) is very weak compared to the ambient magnetic noise in an urban environment (~10⁻⁷ T), MEG needs special shielding from external magnetic signals. For this purpose, our MEG system is housed in a magnetically shielded room (MSR) to reduce noise from the surrounding environment.
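To put these magnitudes in perspective, a quick back-of-the-envelope calculation (an illustrative sketch only; the field values are the approximate figures quoted above) shows how much attenuation the MSR must provide:

```python
import math

# Approximate field magnitudes from the text (assumptions for illustration):
brain_signal_T = 1e-13   # cortical MEG signal, on the order of 100 fT
urban_noise_T = 1e-7     # ambient magnetic noise in an urban environment

# Ambient noise is roughly a million times stronger than the brain signal.
ratio = urban_noise_T / brain_signal_T
print(f"noise/signal ratio: {ratio:.0e}")

# Attenuation needed just to bring ambient noise down to signal level,
# in decibels (field quantities use 20*log10).
attenuation_db = 20 * math.log10(ratio)
print(f"required attenuation: {attenuation_db:.0f} dB")
```

This six-orders-of-magnitude gap is why MEG cannot be recorded in an ordinary room.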
We offer several techniques for brain stimulation and analysis that allow us to obtain better results and guarantee a good service for both research and clinical applications. Our MEG services can be booked using our NIFBM Booking System, which will soon be made available here.
Our lab is equipped with an MEG-compatible BrainCap of 128 high-quality Ag/AgCl EEG electrodes from Brain Products GmbH. The electrodes are fixed to the cap and are extremely flat, which makes the BrainCap very comfortable for the subject (e.g. avoiding excessive pressure on the scalp, even in sleep studies during which the subject lies on the electrodes).
All EEG electrodes are buttoned directly into the cap (total height 3.5 mm) or can be attached to the skin with washers. This facilitates concurrent MEG-EEG recording, allowing effective measurement of brain activity.
Earphones for auditory stimulation
Auditory stimulation is provided using earphones specifically designed for MEG applications.
The stimuli are delivered using stimulation programs installed on the STIM computer (STIM2) outside the MSR, and are applied with high precision, with a delay of less than 20 milliseconds.
Visual stimulation is provided from the STIM computer through a Panasonic PT-DS12KE projector (placed outside the MSR), projected onto an MEG-compatible Elekta screen located inside the MSR. This allows a clear presentation of the different visual stimulus task scenarios, while keeping the image presentation delay shorter than 50 milliseconds.
Non-harmful electrical stimulation is provided with small foam pads to instantaneously stimulate the median nerve at the left/right wrist. This stimulation causes the thumb to twitch and generates a trigger that is recorded simultaneously with the MEG signal.
Motor or response pads
We also have MEG-compatible motor/response pads. Volunteers/patients are asked to finger-tap the motor pad when performing Magnetic Evoked Field (MEF) tests. These pads can also be used as response buttons for cognitive tasks.
OUR NEURO-IMAGING SERVICES
We offer high quality neuro-imaging for both clinical and brain research purposes.
We adhere to the highest standards of clinical practice. The main highlights of our practice are:
- MEG SQUID sensors are regularly checked for signal quality and are tuned and calibrated using a phantom on a weekly basis. A quick signal quality check is also performed before every recording session.
- Before recording commences, MEG staff provide a briefing about the MEG characteristics, the requirements for performing the mental tasks, and what patients/subjects should do and avoid during a particular study task.
- Patients/volunteers are thoroughly screened to check that they are suitable for MEG recording (e.g. screening for possible metallic implants, braces, surgical aneurysm clips, etc., as well as for medical conditions that might prevent them from performing certain mental tasks while in the MEG scanner).
- After successful screening, subject preparation is carried out on a special chair or an MEG-compatible bed. For concurrent MEG-EEG recording, subjects wear a 128-channel EEG BrainCap and undergo the same preparation as for a normal EEG recording, which may take up to 1 hour. This step is skipped if the study only requires MEG recording, as explained next.
- As part of the preparation for MEG recording, 4 or 5 head position indicator (HPI) coils are attached to the subject's head using Durapore tape. These coils allow head movement to be tracked during scanning. The HPI coil positions and the whole head are then digitised using a stylus pen in order to obtain a head shape for possible co-registration with the subject's anatomical brain image. MEG preparation and digitisation take approximately 15-30 minutes.
- Finally, the subjects are taken into the MSR, where they sit in the MEG chair or lie on the MEG bed and their head is placed inside the MEG helmet, as close as possible to the top of the helmet surface. They are asked to keep still and relax while performing the study tasks during the recording.
Ongoing research projects:
- Connectivity Studies in Mild Cognitive Impairment
- Single-trial visual decoding for MEG based brain-computer interface (BCI)
- A BCI Operated Hand Exoskeleton based Neurorehabilitation System for Movement Restoration in Paralysis
- MEG based neuronal characterisation of tinnitus
- EpiFASSTT: Epigenetic effects on children's psychosocial development in a randomised trial of Folic Acid Supplementation in Second and Third Trimester
- MONETA: Extraction and modelling of Alzheimer’s disease data for patient stratification: a novel integrative, multiplex network approach
- Clinical research for pre-surgical evaluation of epileptic foci
Upcoming research projects:
- Development of Biomarkers for the Stratification of Patients with Alzheimer's disease (AD)
- Development of Advanced Analysis Software for MEG Imaging
Professor Girijesh Prasad
Professor of Intelligent Systems,
Director, Northern Ireland Functional Brain Mapping Facility,
MS218, Intelligent Systems Research Centre,
School of Computing and Intelligent Systems,
Faculty of Computing & Engineering, Ulster University, Magee Campus,
Derry~Londonderry BT48 7JL, N. Ireland, UK.
T: +44 - (0)28 71 - 675645, 675409
MS229, Northern Ireland Functional Brain Mapping Facility
Intelligent Systems Research Centre,
Ulster University, Magee Campus,
Derry~Londonderry BT48 7JL, N. Ireland, UK.
Mitochondrial G Protein signaling in astrocytes: a new player in the tripartite synapse
Collaborators: Intelligent Systems Research Centre (CNET group) at Ulster University; NeuroCentre Magendie, AVENIR Group "EndoCannabinoids and NeuroAdaptation" at the University of Bordeaux, France; the Laboratory for Neuron-Glia Circuitry at the RIKEN Brain Science Institute, Japan; and the Department of Neuroscience at the University of Minnesota, USA
Astrocytes, classically considered as simply supportive cells for neurons, are emerging as relevant elements in brain information processing through their ability to regulate synaptic activity. Indeed, the tripartite synapse formed by pre- and post-synaptic neurons and surrounding astrocyte structures has been proposed as a functional unit of brain processes. These novel and important functions of astrocytes are under control of G protein signaling-dependent processes, which trigger astrocyte Ca2+ signals eventually leading to the release of gliotransmitters and other mediators regulating synaptic functions.
Recent evidence indicates that the roles of mitochondria in the brain may go beyond the mere needs of energy supply for cell survival and maintenance, being possibly involved in the regulation of synaptic functions. It is conceivable that mitochondrial G protein signaling participates in these processes. Various subtypes of G protein-coupled receptors (GPCRs) and the associated signaling molecular elements are present within mitochondria, suggesting the existence of mitochondrial G (mtG) protein signaling pathways. Mitochondrial type-1 cannabinoid receptors (mtCB1) are an example of GPCRs regulating mtG protein signaling in brain astrocytes. Thus, astroglial mtG signaling potentially plays an important role in the regulation of the tripartite synapse and hence brain function. However, no studies have addressed this issue so far. The present project proposes to investigate the consequences of the activation of astroglial mtG signaling pathways in brain physiology, identifying the underlying mechanisms at the cellular, network, behavioral and theoretical modeling levels. We propose to generate and develop novel pharmacogenetic tools (DREADDs specifically expressed by astroglial mitochondria, mtDREADDs) that will allow the specific control of astroglial mtG protein activity. We will experimentally and theoretically analyse the consequences of activation of different mtG proteins (via mtDREADDs or mtCB1 activation) on neuronal, synaptic, and network activity as well as brain functions in living animals. The expected results will reveal novel processes of cellular signaling in the CNS, and will identify new regulatory mechanisms mediated by astroglial cells in brain function.
SPANNER: Self-repairing Hardware Paradigms based on Astrocyte-neuron Models
Collaborators: Intelligent Systems Research Centre (CNET group) at Ulster University and the Department of Electronic Engineering at the University of York
Living organisms are complex systems, and yet they possess an extremely high degree of reliability. Failure mechanisms in nature are often local and their repair is also undertaken at this local level. In engineering, however, we have traditionally approached the problem of unreliability at the system or sub-system level. That is, we have incorporated redundancy by replicating entire systems or sub-systems, in the hope that at least one would still function faultlessly when the others fail. It has been suggested recently that interaction between neurons and astrocytes may hold the key to repair in large networks of neurons. We could therefore justifiably ask whether it might not be more effective, efficient and less costly to draw inspiration from nature, seeking to learn how it deals with the complexity vs. unreliability issue in such a remarkable way. This project is an inter-disciplinary collaboration that arose naturally from the combined expertise of the Intelligent Systems Group (ISG) at the University of York (UY) and the Intelligent Systems Research Centre (ISRC) at Ulster University (UU). The fundamental astrocyte-neuron computer model for self-repair proposed in this project was proven in an earlier EPSRC eFutures funded project (EFXD12011), where it was demonstrated that the co-existence of astrocytes with spiking neurons in a network can yield a fault diagnostic and repair capability at the cellular level, which addresses current hardware reliability challenges [1-6]. This project will demonstrate that the Self-rePairing spiking Neural NEtwoRk (SPANNER) is capable of diagnosing faults and subsequently performing repairs beyond existing levels, with the repair capability showcased in hardware using real-time robotic applications.
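As an illustration of the kind of spiking neuron such networks are built from, here is a generic leaky integrate-and-fire (LIF) sketch with arbitrary parameters; it is not the project's actual astrocyte-neuron model:

```python
# Minimal leaky integrate-and-fire (LIF) neuron: an illustrative building
# block of spiking neural networks. All parameters are arbitrary assumptions.
def lif_run(input_current, v_rest=0.0, v_thresh=1.0, tau=20.0, dt=1.0):
    """Simulate one LIF neuron; return the membrane trace and spike times."""
    v, trace, spikes = v_rest, [], []
    for step, i_in in enumerate(input_current):
        v += dt / tau * (-(v - v_rest)) + i_in * dt  # leak toward rest + input
        if v >= v_thresh:                            # threshold crossed
            spikes.append(step)
            v = v_rest                               # reset after the spike
        trace.append(v)
    return trace, spikes

# A constant drive makes the neuron fire periodically.
trace, spikes = lif_run([0.1] * 100)
print(f"{len(spikes)} spikes, first at step {spikes[0]}")
```

In a self-repairing network, many such units are connected by plastic synapses, and astrocyte-inspired signals modulate those synapses when a unit becomes faulty.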
Self-Repairing Neural Controllers for Autonomous Chemical Identification
Collaborators: Intelligent Systems Research Centre (CNET group) at Ulster University and the Department of Electronic Engineering at the University of York
This project will demonstrate a fault-tolerant autonomous robotic system that is able to continually detect, in real time, changes in the air environment and construct a hazard map of potential threats. At the same time, the system will have an autonomous control and navigation system implemented using a state-of-the-art approach based on self-repairing neural networks. These networks go well beyond traditional paradigms by including astrocyte cells, which have recently been modelled to capture the repair capability exhibited in the human brain. This will afford resilience to various types of potential failure within the controller; resilience of the controller is initially proposed as a way of demonstrating capability. Human operators will be able to provide real-time feedback on the performance of the system, allowing machine learning to improve the overall performance of detection and navigation through a reinforcement-based approach to learning. This proposal will deliver a proof-of-principle demonstrator of a fault-tolerant autonomous robotic system, capable of mapping hazardous chemical environments and identifying key hazards of interest. Human users will be able to see real-time feedback of the hazard map and the items identified, as well as perform limited control of the unit to complement the unit's autonomy. The work is based on previous research by the investigators on self-repairing neural networks, real-time anomaly detection and robotic deployment.
BIONICS: BIO-iNspired surveillanCe System
Automated artificial vision technology is becoming more widely adopted for monitoring and surveillance applications to reduce the need for physical resources and human intervention. These systems suffer from weaknesses in artificial vision technologies resulting in imprecise learning and false or missed detections. Biological visual systems are vastly superior in terms of performance for real-time and low-power applications when compared with conventional artificial vision technologies. To enable automated surveillance technology to cope with the challenges of dynamic visual conditions, existing artificial vision technology must be improved. Biologically inspired artificial vision algorithms have already demonstrated their effectiveness in dealing with these challenges.
BIONICS is an Invest NI Proof of Concept project building on the retinal ganglion cell models developed in the VISUALISE project. BIONICS aims to take the VISUALISE ganglion cell model technology towards a marketable product: a biologically inspired hardware image encoder and feature detector for security cameras.
Computational Approaches to real-time energy trading
Collaborators: Intelligent Systems Research Centre (Cognitive Robotics group) at Ulster University and ClickEnergy
The ability to collect, manage, analyse and report on massive volumes of market data is vital to long-term market success. This project will focus on the development of computational intelligence algorithms to analyse and interpret the wide variety of data that have a direct impact on trading decisions. The project is funded by Innovate UK under the Knowledge Transfer Partnerships scheme.
FLAME: Forensic image anaLyzer frAMEwork
It is well known that various terrorist and criminal organisations communicate undetected over the Internet via digital images, using advanced steganography techniques. Secret messages, encrypted and encoded in various ways, can be sent over the internet or posted on a website. Encoding messages in digital images, such as pornographic pictures, is the least detectable way for terrorists and criminals to communicate. It is extremely difficult for investigators to track such steganography across the internet because, by its very nature, it is difficult to detect. The art of discovering the existence of steganographic data or secret messages in digital media is called steganalysis.
FLAME (Forensic image anaLyzer frAMEwork) detects the presence of hidden messages in images sent over the internet or incorporated into a website page. The FLAME core technology consists of machine learning based steganalysis algorithms and an image tracking algorithm. FLAME will therefore be at the core of a system capable of locating, collecting, and analysing images placed or exchanged over the Internet by terrorists and cyber criminals. FLAME is funded by Ulster Research Impact Award fund.
Key technical & innovative features:
FLAME consists of four components: a crawler module, a decision module, a steganalysis module and a database. The crawler module is fed with a series of seed URLs (website addresses) to begin crawling. Based on these seed URLs, subsequent URLs are discovered by parsing the contents of the starting pages. The crawler attempts to locate image files at these URLs and passes them to the decision module.
The decision module decides whether an image has already been encountered by calculating the cryptographic hash of the image (SHA-1) and looking it up in the database. If the hash is found, the image is discarded from the temporary store; if not, the image is stored in the database.
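The duplicate check at the heart of the decision module can be sketched as follows. This is an illustrative sketch with hypothetical names, and an in-memory set stands in for the FLAME database:

```python
import hashlib

# In-memory stand-in for the FLAME database of previously seen image hashes.
seen_hashes = set()

def is_new_image(image_bytes: bytes) -> bool:
    """Return True and record the image if its SHA-1 hash has not been seen."""
    digest = hashlib.sha1(image_bytes).hexdigest()
    if digest in seen_hashes:
        return False            # already encountered: discard from temp store
    seen_hashes.add(digest)     # new image: store in the database
    return True

img = b"\x89PNG...fake image bytes"
print(is_new_image(img))   # first encounter: True
print(is_new_image(img))   # duplicate: False
```

Hashing the bytes rather than the URL means the same image found at different addresses is still recognised as a duplicate.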
The steganalysis module obtains images from the database and tests them for steganographic content using open-source steganalysis tools such as OutGuess, novel intelligent machine learning algorithms, and statistical tests. If an image is found to contain steganographic content, it is tested further to extract the hidden content if possible. If not, the image is marked as analysed.
The images (both analysed and tested for hidden content extraction) are preserved until all results have been consolidated. Each of the four components can work independently of each other in order to allow for separation of concerns, as well as allowing for the possibility of running them on separate machines.
FLAME early interest and endorsement:
A white paper based on the FLAME technology was submitted to the U.S. Army CyberQuest 2016 competition (http://intelligencecommunitynews.com/army-announces-cyber-quest-2017/) and FLAME was selected for the second stage of the competition. This initial acceptance, and the US military interest it represents, shows the potential of our technology.
Legal Innovation Centre
Collaborators: Allen & Overy, Baker McKenzie, HookTangaza, Caselines, Clio, Invest NI
The Legal Innovation Centre will create a bridge between law and computer science, with the aim of fostering innovation in legal service provision and advancing educational provision in legal technology. See https://www.ulster.ac.uk/legalinnovation for more information.
Royal Academy of Engineering - Ingenious Award
Collaborators: STEM Aware, Bespoke Communications
This public engagement award is to publicise the need for cyber security specialists, in addition to creating a cost-effective model for training degree-qualified engineers in the art of inspiring students to follow an engineering career.
RAEng/Leverhulme Trust Senior Fellowship
This Fellowship concentrates on reusing the existing WiFi access point (AP) deployment in homes for location-oriented activity recognition, without the need for additional static wireless infrastructure or wearable sensors. It also integrates device-free passive localisation techniques.
HSC Commissioned Research - Dementia Care
The aim of this feasibility study is to investigate the effects of individual specific reminiscence, facilitated using bespoke software, on people with dementia and their carers.
Knowledge Transfer Partnership
Collaborators: Sentel Ltd, Belfast
This KTP aims to incorporate intelligent machine learning techniques within Sentel's call monitoring platform, enabling identification of call fraud in real time.
Fluid Software Interfaces (InterTrade Ireland)
Collaborators: Verify Technologies, Limerick
This Fusion project aims to create an automated customer-portal 'configuration engine' to replicate the process of selecting a customer's product features.
Management Information Systems (InterTrade Ireland)
Collaborators: Reprographic Systems Ltd, Dublin
This Fusion project will design and develop a bespoke, innovative, and integrated Management Information Systems (MIS) to facilitate change and enhanced efficiency.
Invest NI Proof of Concept
Collaborators: BBC, Vicomtech
This Proof of Concept (PoC) project, titled “Broadcast Language Identification & Subtitling System (BLISS)”, investigates language identification, captioning and subtitling within the entertainment industry.
Predicting short-term wholesale prices on the Irish Single Electricity Market with Artificial Neural Networks (InterTrade Ireland)
Collaborators: Ark Energy Ltd, Longford
This Fusion project is to develop forecasting algorithms in order to predict short-term (72 hours ahead) wholesale prices on the Irish Single Electricity Market so that customers can make more informed trading decisions.
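The general shape of such a forecasting pipeline can be sketched as follows. This is a toy feed-forward network trained on synthetic data; the lookback window, horizon, architecture and training details are illustrative assumptions, not the project's actual algorithms:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic hourly "price" series with a daily cycle plus noise,
# standing in for real Single Electricity Market data.
t = np.arange(2000)
prices = 50 + 10 * np.sin(2 * np.pi * t / 24) + rng.normal(0, 1, t.size)

# Map the last 24 hourly prices to the price 72 hours ahead.
LOOKBACK, HORIZON = 24, 72
X = np.stack([prices[i:i + LOOKBACK]
              for i in range(len(prices) - LOOKBACK - HORIZON)])
y = prices[LOOKBACK + HORIZON:]

# Normalise inputs and targets.
X = (X - X.mean()) / X.std()
ym, ys = y.mean(), y.std()
yn = (y - ym) / ys

# One hidden layer, trained by full-batch gradient descent on squared error.
W1 = rng.normal(0, 0.1, (LOOKBACK, 16)); b1 = np.zeros(16)
W2 = rng.normal(0, 0.1, 16);             b2 = 0.0

def forward(X):
    h = np.tanh(X @ W1 + b1)
    return h, h @ W2 + b2

lr = 0.05
for _ in range(300):
    h, pred = forward(X)
    err = pred - yn
    grad_W2 = h.T @ err / len(X)
    grad_b2 = err.mean()
    dh = np.outer(err, W2) * (1 - h**2)   # backprop through tanh
    grad_W1 = X.T @ dh / len(X)
    grad_b1 = dh.mean(axis=0)
    W2 -= lr * grad_W2; b2 -= lr * grad_b2
    W1 -= lr * grad_W1; b1 -= lr * grad_b1

_, pred = forward(X)
rmse = ys * np.sqrt(np.mean((pred - yn) ** 2))  # back in price units
print(f"training RMSE: {rmse:.2f} (price units)")
```

A real system would of course use held-out evaluation, richer market features and a production-grade framework; the point here is only the windowed input/target construction and the 72-hour horizon.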
A BCI Operated Hand Exoskeleton based Neurorehabilitation System for Movement Restoration in Paralysis
This project is funded by the UK India Education and Research Initiative (UKIERI) phase II and the Department of Science and Technology (DST), Government of India, under the DST-UKIERI Thematic Partnership program (DST-2013-14/126). This international collaborative project has three main components: brain-computer interface (BCI), exoskeleton, and rehabilitation, led respectively by Professor Girijesh Prasad of the Neural Systems and Neuro-technology Research Team, Professor Ashish Dutta of the Indian Institute of Technology Kanpur (IITK), India, and Professor Suzanne McDonough of the Institute of Nursing and Health Research, under the co-ordination of the PI, Professor Prasad. The main project objectives are as follows:
- Develop a lightweight three-finger exoskeleton with embedded sensors, capable of replicating human motion for physical practice. It will be controlled by users’ EMG and EEG signals in assist-as-needed mode;
- Develop a novel brain-computer interface (BCI) that uses EMG and EEG for controlling the exoskeleton and provides visual neurofeedback to ensure focused physical and MI practice;
- Conduct pilot trials to evaluate the effectiveness of the exoskeleton, along with the BCI, in movement restoration.
It is known that much enhanced upper limb recovery can be gained if stroke sufferers with limb impairments perform intensive active physical practice (PP) in conjunction with motor imagery (MI) practice (or mental practice) of activities of daily living.
Although PP can be performed with the help of a therapist, this is expensive and limited in availability, and dependence on the therapist may lead to passive practice. To this end, it is proposed to develop a lightweight neuro-rehabilitation system for people with stroke that facilitates intensive active PP as well as MI practice, with the help of a robotic exoskeleton and neuro-feedback from a novel non-invasive brain-computer interface (BCI). It will consist of a three-finger exoskeleton that can be worn by the subject and will be controlled through the user's electromyography (EMG)- and electroencephalography (EEG)-based BCI commands in assist-as-needed mode.
Additionally, visual neuro-feedback from the BCI will help ensure highly focused performance of PP as well as MI practice. The exoskeleton will be superior to existing ones, as it will be able to replicate natural human finger motion with more degrees of freedom and is aimed directly at restoring critical hand functions for grasping and manipulating objects. The system will undergo pilot trials on a set of healthy individuals as well as people with impairments.