Agentic AI for Financial Crime Detection: Adversarial Co-Evolution Methods

Summary

Financial crime detection systems struggle against adaptive criminals who evolve evasion tactics. Current compliance systems are reactive, detecting known patterns but remaining vulnerable to novel schemes.

This PhD project develops autonomous AI agents that model the strategic co-evolution of financial crime schemes and detection systems, enabling proactive threat intelligence.

You will develop multi-agent systems where "fraudster agents" and "detector agents" engage in adversarial co-evolution through reinforcement learning.

Fraudster agents learn to evade detection; detector agents adapt to novel tactics.

This adversarial process generates synthetic financial crime scenarios spanning money laundering, sanctions evasion, and mule networks, enabling stress-testing of compliance systems against threats that do not yet exist in operational data.
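To make the idea concrete, the sketch below is a deliberately minimal, hypothetical illustration rather than project code: the structuring scenario, the payoff values, and the epsilon-greedy bandit learners are simplified stand-ins for the full multi-agent reinforcement learning setting, showing only how a "fraudster" and a "detector" can adapt to one another over repeated rounds.

```python
# Deliberately minimal and hypothetical: a "fraudster" chooses how finely to
# split a £10,000 transfer, a "detector" chooses a per-transfer alert threshold,
# and each adapts to the other with a simple epsilon-greedy bandit learner.
import numpy as np

rng = np.random.default_rng(0)

SPLIT_OPTIONS = np.array([1, 2, 5, 10, 20])       # fraudster: number of transfers
THRESHOLD_OPTIONS = np.linspace(500, 10_000, 20)  # detector: alert threshold (£)
TOTAL_AMOUNT = 10_000.0
EPS, ALPHA = 0.1, 0.05                            # exploration rate, learning rate

fraud_q = np.zeros(len(SPLIT_OPTIONS))            # fraudster's value estimates
det_q = np.zeros(len(THRESHOLD_OPTIONS))          # detector's value estimates

def play_round(split_idx, thr_idx):
    """Return (fraudster_reward, detector_reward) for one simulated scheme."""
    per_transfer = TOTAL_AMOUNT / SPLIT_OPTIONS[split_idx]
    flagged = per_transfer >= THRESHOLD_OPTIONS[thr_idx]
    # Fraudster wants funds moved unflagged; finer splitting carries a friction cost.
    fraud_reward = (0.0 if flagged else 1.0) - 0.02 * SPLIT_OPTIONS[split_idx]
    # Detector wants to flag the scheme, but very low thresholds carry an
    # alert-volume penalty standing in for false positives on legitimate traffic.
    det_reward = (1.0 if flagged else 0.0) - 0.5 * (THRESHOLD_OPTIONS[thr_idx] < 1_000)
    return fraud_reward, det_reward

for t in range(10_000):
    # Epsilon-greedy action choice for each side.
    s = rng.integers(len(SPLIT_OPTIONS)) if rng.random() < EPS else int(fraud_q.argmax())
    d = rng.integers(len(THRESHOLD_OPTIONS)) if rng.random() < EPS else int(det_q.argmax())
    fr, dr = play_round(s, d)
    # Constant step size, since each side faces a non-stationary opponent.
    fraud_q[s] += ALPHA * (fr - fraud_q[s])
    det_q[d] += ALPHA * (dr - det_q[d])

print("fraudster currently prefers splitting into", SPLIT_OPTIONS[fraud_q.argmax()], "transfers")
print("detector currently prefers a threshold of about £", round(float(THRESHOLD_OPTIONS[det_q.argmax()])))
```

Even in this toy setting the two sides chase each other: splitting more finely slips under the threshold, and lowering the threshold catches it again at the cost of more alerts. That arms-race dynamic is what the project studies with far richer agent architectures, action spaces, and learning algorithms.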

Working with Pytilia, a RegTech compliance technology provider, you will complete three annual placements (9 months total) gaining exposure to operational detection platforms and access to compliance experts for validation studies. Pytilia covers placement expenses and provides system access for stress-testing.

Training spans multi-agent reinforcement learning, evolutionary computation, adversarial machine learning, game-theoretic modeling, and financial crime compliance.

You will design agent architectures, implement co-evolutionary algorithms, and develop rigorous evaluation frameworks measuring adversarial robustness.
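As a hedged illustration of one possible robustness measure (not the project's evaluation framework), the sketch below scores a toy detection rule by its worst-case recall across a hypothetical pool of structuring strategies, rather than by its average performance; the detector rule, the strategy pool, and the worst-case statistic are all assumptions made for the example.

```python
# Hypothetical sketch: score a detection rule by its worst-case recall across a
# pool of evasion strategies, rather than by its average on historical data.
import numpy as np

rng = np.random.default_rng(1)

def detector(amounts, threshold=1_000.0):
    """Toy rule: flag a scheme if any single transfer meets the threshold."""
    return np.any(amounts >= threshold, axis=1)

def strategy_pool(n_strategies=50, n_schemes=200):
    """Each strategy splits £10,000 across a different number of transfers."""
    pools = []
    for k in rng.integers(1, 25, size=n_strategies):
        weights = rng.uniform(0.8, 1.2, size=(n_schemes, k))
        pools.append(10_000 * weights / weights.sum(axis=1, keepdims=True))
    return pools

# All simulated schemes are illicit, so the flag rate per strategy is the recall.
recalls = [detector(schemes).mean() for schemes in strategy_pool()]
print(f"average recall: {np.mean(recalls):.2f}   worst-case recall: {np.min(recalls):.2f}")
```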

Outputs include an agent-based simulation toolkit, stress-testing methods, and academic papers in top AI and finance venues.

We welcome applicants with backgrounds in computer science, artificial intelligence, or computational modeling.

Skills required of the applicant:

Essential:

  • Strong programming skills in Python (NumPy, pandas, PyTorch or TensorFlow)
  • Machine learning fundamentals and algorithmic thinking
  • Ability to design and implement computational experiments
  • Clear technical writing for documentation and academic publication

Desirable:

  • Experience with reinforcement learning (RL frameworks: Stable Baselines3, RLlib, OpenAI Gym)
  • Multi-agent systems or game-theoretic modeling
  • Evolutionary algorithms or genetic programming
  • Financial compliance, regulatory technology, or financial crime prevention domain knowledge
  • Adversarial machine learning or AI security

Personal attributes:

  • Strong interest in adversarial AI, game theory, and security applications
  • Comfortable with interdisciplinary work bridging AI and finance
  • Intellectual curiosity about strategic agent behavior and co-evolution
  • Ability to engage effectively with industry stakeholders
  • Comfortable with proof-of-concept research where negative results are valuable

Essential criteria

Applicants should hold, or expect to obtain, a First or Upper Second Class Honours Degree in a subject relevant to the proposed area of study.

We may also consider applications from those who hold equivalent qualifications, for example, a Lower Second Class Honours Degree plus a Master’s Degree with Distinction.

In exceptional circumstances, the University may consider a portfolio of evidence from applicants who have appropriate professional experience which is equivalent to the learning outcomes of an Honours degree in lieu of academic qualifications.

Applications must also include:

  • A comprehensive and articulate personal statement
  • A research proposal of 2,000 words detailing the aims, objectives, milestones and methodology of the project

Desirable Criteria

If the University receives a large number of applicants for the project, the following desirable criteria may be applied to shortlist applicants for interview.

  • First Class Honours (1st) Degree
  • Master's Degree with an average mark of 70% or above

Equal Opportunities

The University is an equal opportunities employer and welcomes applicants from all sections of the community, particularly from those with disabilities.

Appointment will be made on merit.

Funding and eligibility

This project is funded by:

  • Department for the Economy (DfE)

Our fully funded PhD scholarships will cover tuition fees and provide a maintenance allowance of approximately £21,000 per annum for three years* (subject to satisfactory academic performance). A Research Training Support Grant (RTSG) of £900 per annum is also available.

These scholarships, funded via the Department for the Economy (DfE), are open to applicants worldwide, regardless of residency or domicile.

Applicants who already hold a doctoral degree or who have been registered on a programme of research leading to the award of a doctoral degree on a full-time basis for more than one year (or part-time equivalent) are NOT eligible to apply for an award.

*Part-time PhD scholarships may be available to home candidates, based on 0.5 of the full-time rate, and will require a six-year registration period.

Due consideration should be given to financing your studies.

Recommended reading

1. Wooldridge, M. (2009). *An Introduction to Multiagent Systems* (2nd ed.). Chichester: John Wiley & Sons.

2. Goodfellow, I., Pouget-Abadie, J., Mirza, M., et al. (2014). "Generative Adversarial Nets." *Advances in Neural Information Processing Systems (NIPS)*, 27, 2672-2680.

3. Biggio, B. & Roli, F. (2018). "Wild Patterns: Ten Years After the Rise of Adversarial Machine Learning." *Pattern Recognition*, 84, 317-331.

4. Tambe, M. (2011). *Security and Game Theory: Algorithms, Deployed Systems, Lessons Learned*. Cambridge: Cambridge University Press.

5. Bolton, R.J. & Hand, D.J. (2002). "Statistical Fraud Detection: A Review." *Statistical Science*, 17(3), 235-255.

6. Financial Action Task Force (FATF) (2024). *Money Laundering and Terrorist Financing Typologies Report*. Paris: FATF/OECD.


Key dates

Submission deadline
Friday 27 February 2026, 4:00 pm

Interview date
TBC

Preferred student start date
14 September 2026

Applying

Apply Online  

Contact supervisor

Professor Barry Quinn

Other supervisors