Context and Rationale
Novelty detection is the process of identifying unforeseen anomalies that deviate from the normal behaviour of data. It is particularly important in real-world applications involving big data acquired from safety-critical systems: novel conditions occur rarely and knowledge about a novelty is extremely limited or entirely unavailable, whereas a very large number of data samples of the normal condition are usually available.
This project will develop novelty detection algorithms that exploit the abundant normal data to train a reliable model, which can then be used to classify new data as normal or abnormal.
Existing computational novelty detection techniques can broadly be classified into five general categories, depending mainly on the assumptions made about the nature of the training data: (i) probabilistic; (ii) distance-based; (iii) reconstruction-based; (iv) domain-based; and (v) information-theoretic techniques. Their corresponding limitations are: (i) little control over inherent variability when the training set is small; (ii) inability to cope efficiently with high-dimensional data; (iii) sensitivity to pre-defined parameters; (iv) difficulty in choosing an appropriate kernel function to control the size of the boundary enclosing the normal data; and (v) difficulty in associating a novelty score with a test point.
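To make the general idea concrete, the following is a minimal, hypothetical sketch of the simplest probabilistic technique (category (i)): fit a Gaussian model to normal-only training data and flag test points whose z-score exceeds a threshold. This illustrative example is not part of the project's proposed methods; the threshold of 3 standard deviations is an assumed convention.

```python
import statistics

def fit_gaussian(normal_data):
    # Estimate mean and standard deviation from normal-only training data.
    mu = statistics.fmean(normal_data)
    sigma = statistics.stdev(normal_data)
    return mu, sigma

def novelty_score(x, mu, sigma):
    # Absolute z-score: distance from the normal mean, in standard deviations.
    return abs(x - mu) / sigma

def is_novel(x, mu, sigma, threshold=3.0):
    # Flag points lying more than `threshold` standard deviations from the mean.
    # The threshold is an assumed convention, not a principled choice.
    return novelty_score(x, mu, sigma) > threshold

# Illustrative usage: train on hypothetical normal sensor readings, score new ones.
normal = [9.8, 10.1, 9.9, 10.0, 10.2, 9.7, 10.3, 10.0]
mu, sigma = fit_gaussian(normal)
print(is_novel(10.1, mu, sigma))  # close to the normal data -> False
print(is_novel(25.0, mu, sigma))  # far from the normal data -> True
```

Even this toy example exhibits limitation (i) above: with only eight training samples, the estimates of the mean and standard deviation are unreliable, so the novelty boundary is poorly controlled.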
To address these limitations of the state-of-the-art techniques, this project will focus on interdisciplinary fundamental research, such as employing level set methods and bio-inspired computational theories, to propose novel hybrid approaches for novelty surveillance on time-varying data, e.g. in capital markets, healthcare, autonomous vehicles and other areas.
Relevance of the study
Artificial intelligence has spurred an era of data analytics that has the potential to revolutionise the way we work and live, and many industries are realising that the data they collect have substantial value that can be leveraged to improve products, processes, services and productivity. The project will access real-world datasets provided by industrial partners of the Cognitive Analytics Research Laboratory (CARL) or by research collaborators of the CARL team. CARL has its centre of operations in the Intelligent Systems Research Centre but is an Ulster University-wide initiative focused on exploiting our track record of research excellence in neuro-inspired cognitive analytics, machine learning and computational intelligence.
The successful candidate should have an excellent mathematical foundation and will work within CARL and collaborate with multiple partners in academia and industry to thoroughly validate new algorithms and to create impactful technologies that can address problems experienced by industry today.
If the University receives a large number of applicants for the project, the following desirable criteria may be applied to shortlist applicants for interview.
Vice Chancellors Research Scholarships (VCRS)
The scholarships will cover tuition fees at the Home rate and a maintenance award of £14,777 per annum for three years (subject to satisfactory academic performance). Applications are invited from UK, European Union and overseas students. EU applicants will only be eligible for the fees component of the studentship (no maintenance award is provided), and non-EU nationals must be "settled" in the UK.
"As Senior Engineering Manager of Analytics at Seagate Technology I utilise the learning from my PhD every day."
Adrian Johnston - PhD in Informatics