Project details: https://www.findaphd.com/phds/programme/ucl-s-department-of-computer-science-offers-fully-funded-home-studentships-starting-september-2025/?p6777
How to apply: https://www.ucl.ac.uk/prospective-students/graduate/research-degrees/computer-science-4-year-programme-mphil-phd
See original post here: https://www.linkedin.com/posts/amir-patel_robotics-ai-sensorfusion-activity-7291932871211077633-x4_m?utm_source=share&utm_medium=member_desktop
About the Programme
UCL’s Department of Computer Science offers fully-funded Home studentships, starting September 2025.
The deadline for applications is 21 February 2025.
For information on the projects, email the project advisors or cs.phdadmissions@ucl.ac.uk.
Project 1:
Title: Explainable, Knowledge-driven AI and ML for Systems Security
Advisor: Dr Fabio Pierazzi f.pierazzi@ucl.ac.uk
The rapid advancement of AI and ML has opened new opportunities for improving systems security by automating the detection and mitigation of cyber threats. However, adversarial actors actively adapt their strategies to exploit vulnerabilities in AI systems, leading to phenomena such as concept drift, where models lose effectiveness as incoming data evolves. AI-driven defences must also contend with complex, dynamic environments and often operate under constrained resources. This double-edged nature of AI in security presents both risks and opportunities for enhancing defences against malware, network intrusions, and other threats.
This project aims to develop robust, explainable, and knowledge-driven AI/ML techniques for systems security. The research will explore methods to reduce spurious correlations in data, ensuring that models learn meaningful patterns rather than noise. While the primary application domains may include malware detection and network security, these methodologies may be extended to broader domains.
Strong expertise in computer science, cybersecurity, AI/ML, and Python is essential.
Project 2:
Title: Formal Verification for Trustworthy Human-Robot Collaboration
Advisor: Dr Pian Yu pian.yu@ucl.ac.uk
Recent advancements in technology have significantly expanded the scope of robotics applications across various domains such as industrial automation, autonomous driving, and social robotics. In these applications, interactions between robots and human users/agents are integral, constituting essential components of the overall system. Despite notable progress, existing methods struggle to provide assurances of safety or trustworthiness in human-robot collaboration due to the inherent unpredictability of human behaviour and increasingly uncertain environments.
This PhD project seeks to integrate methodologies from AI and formal verification to develop computationally efficient tools for designing trustworthy human-robot collaborative systems. It will investigate methods such as adaptive conformal prediction to efficiently quantify uncertainties in perception and prediction. It will explore multi-modal interaction and visual perception to enhance understanding between humans and robots. Finally, the project will develop efficient algorithms that combine probabilistic verification and synthesis for reliability assessment, performance evaluation, and decision-making support.
Project 3:
Title: Chasing the Cheetah: Pioneering Multi-Modal Sensor Fusion in the Wild
Advisor: Dr Amir Patel amir.patel@ucl.ac.uk
Cheetahs epitomize speed and power in the animal kingdom, making them a captivating model to deepen our understanding of biomechanics, ecology, neuroscience, and evolutionary biology. However, field-based studies of fast-moving wildlife present significant challenges. Traditional approaches, such as IMU-GPS collars and RGB cameras, often fail to provide complete body motion or accurate 3D reconstruction. Moreover, gathering high-fidelity biomechanical data in uncontrolled environments has long been restricted by technical and logistical constraints, limiting our ability to study these remarkable animals in a truly naturalistic setting.
This PhD project will develop and evaluate a novel multi-modal motion capture framework for wildlife, integrating event cameras, audio signals, mmWave radar, and lidar, all of which are largely unexplored in this context. To overcome operational challenges in the field, the project will explore the use of drones or robotic platforms for sensor deployment. We aim to create high-fidelity, non-invasive datasets, thereby paving the way for more comprehensive insights into cheetah biomechanics and broadening our ability to study other wildlife species under realistic ecological conditions.
Project 4:
Title: Enhancing Mobility and Independence for People with Visual Impairments through Tailored Multi-Modal Interaction in Highly Automated Vehicles Leveraging Machine Learning
Advisor: Dr Mark Colley m.colley@ucl.ac.uk
Over 270 million people worldwide have vision impairments. Highly Automated Vehicles (HAVs) promise to improve transportation for people who are blind or visually impaired (BVIPs), offering them a significant increase in autonomy. HAVs are therefore crucial for greater equality in transportation.
It is essential to understand the specific information needs of BVIPs in HAVs and to develop solutions that offer appropriate assistance. BVIPs rely on non-visual cues to build situation awareness. For those with partial vision, visual cues remain important, underscoring the need for interaction concepts tailored to individual visual acuity.
This project leverages multi-modal interaction design, evaluating auditory, haptic, and combined modalities to address the unique needs of BVIPs in HAVs.
We will collaborate with the Global Disability Innovation Hub.
The PhD candidate will have a background in computer science or a related field.