Edge AI Researcher
Job description
We are seeking a motivated and curious Edge AI Researcher to join the Centre for Responsible AI and contribute to the objectives of the national EPSRC Edge AI Hub.
Edge Artificial Intelligence is transforming the way intelligent decisions are made across sectors such as healthcare, mobility, industry, and smart environments. As Edge AI systems become more autonomous and pervasive, they introduce new challenges around safety, fairness, transparency, energy efficiency, and responsible deployment.
The EPSRC-funded National Edge AI Hub (https://edgeaihub.co.uk/) addresses these challenges. The Hub brings together leading academic and industry partners to develop responsible AI methodologies that can operate effectively under resource constraints, uncertainty, and real-world variability.
The Centre for Responsible AI (https://www.hull.ac.uk/research/centres/centre-for-responsible-ai) at the University of Hull, led by Professor Dhaval Thakker, contributes centrally to the Hub's mission through research on explainability, fairness, safety, and human-centred AI system development. The Centre will host this full-time Edge AI Researcher, who will work within a vibrant, interdisciplinary environment informed by the University's long track record in AI, data science, and intelligent systems.
The post holder will undertake research into the design, development, and evaluation of trustworthy, explainable, and fair AI techniques suitable for resource-constrained, real-time, and distributed environments. This post offers an excellent opportunity to contribute to a national consortium addressing cutting-edge scientific challenges while working across academic and industry partners. The successful candidate will benefit from opportunities for professional development, collaboration, and publication.
Key responsibilities:
- Conduct research on explainability, fairness, safety, and trustworthiness in Edge AI systems.
- Develop and evaluate techniques suitable for resource-constrained, embedded or distributed AI settings.
- Work with Edge AI Hub industry partners to understand real-world challenges and co-develop solutions.
- Gather, prepare, analyse and interpret research data using appropriate scientific and computational methods.
- Conduct literature reviews and synthesise state-of-the-art findings in relevant subdomains.
- Contribute to Edge AI prototypes, models, evaluation methods, and experimental studies.
- Assist in the drafting of academic papers, technical reports, and project deliverables.
- Present research findings at internal meetings, consortium events, workshops, and conferences.
- Collaborate effectively with colleagues across the Centre for Responsible AI and Hub partners.
- Participate actively in training, professional development, and the wider research community.
Requirements
You will have:
- A Master's degree or be working towards a PhD in Computer Science or a related discipline with significant AI/ML focus.
- Hands-on experience applying techniques in Explainable AI, AI Safety, or AI Fairness, evidenced by project work or academic publications.
- Experience in developing AI/ML systems using Python or similar tools.
- Strong analytical, problem-solving, and communication skills.
- A collaborative mindset and commitment to responsible, ethical innovation.
It would be an advantage if you also have:
- Experience with Edge AI frameworks (TensorFlow Lite, PyTorch Mobile, ONNX Runtime, TinyML).
- Familiarity with distributed AI systems, IoT platforms, or embedded ML workflows.
- Experience working on interdisciplinary or multi-partner research projects.
Benefits & conditions
£33,002.00