A robust and accurate facial 3D reconstruction method from images acquired by mobile devices at home for facial growth monitoring
31 October 2023
Diseases affecting facial growth require highly accurate facial 3D scans to diagnose, monitor and plan treatment of the condition. Currently, the most widely used technique involves professional equipment, such as the Vectra system, in a clinical setting: healthcare professionals simultaneously capture multiple overlapping images of the person's face from several calibrated cameras with different viewpoints. These images are merged into a single 3D shape using digital stereo-photogrammetry, which is considered the gold standard for structure analysis in facial anthropometry, with sub-millimetre accuracy. Recently, 3D reconstruction using facial images captured by smartphone cameras has shown potential as an alternative with similar accuracy [9-12]. However, these studies were mostly carried out in clinical settings with healthcare professionals taking the images, or they relied on active depth sensors, a technology only available on high-end devices. This project aims to remove the requirement for patients to visit the clinic by developing a computer vision system that obtains highly accurate and dense facial 3D reconstructions from images acquired with end-user mobile devices in real-world conditions, such as one's smartphone at home, through a purpose-made mobile application.
In Computer Vision, 3D reconstruction of the face from images acquired with uncalibrated cameras is a well-studied problem. Highly accurate 3D models can be obtained from multiple images with optimisation-based methods relying on motion analysis and geometric constraints [3,4]. Recently, deep learning approaches that jointly estimate geometry and lighting have emerged [5-7]. While they achieve impressive results and allow reconstruction from images captured by laypersons, their applicability to anatomical measurement has not been validated. Moreover, these methods do not leverage the progress made in integrating multi-view geometry constraints into deep learning-based Structure-from-Motion [1,2].
In the proposed research, we will build upon these recent works to produce an accurate facial 3D reconstruction method combining the well-established strengths of traditional geometry-based 3D reconstruction with the versatility of deep learning-based facial reconstruction. Another important aspect of the envisaged system will be its robustness to the challenging conditions that arise when images are acquired by laypersons at home: blurry images, poor illumination, an incorrectly positioned camera and partial occlusion of the face, which can be minimised by guiding the user but not fully eliminated. Deep learning methods in Computer Vision typically perform poorly in such conditions [8]. Embedding 3D geometry constraints will allow the system to detect parts of an image that are invalid, much as erroneous correspondences between images are filtered out in Structure-from-Motion pipelines. The project will aim to reach the sub-millimetre accuracy required for measuring anatomical structures for facial growth monitoring while being as robust as possible to the real-world conditions of home acquisition.
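To illustrate the kind of geometric filtering referred to above, the sketch below scores point correspondences between two views by their Sampson distance to the epipolar constraint and rejects matches inconsistent with the cameras' relative geometry. This is a minimal, self-contained illustration only: the fundamental matrix, point coordinates and function names are hypothetical examples, not part of the proposed system.

```python
import numpy as np

def sampson_distance(F, x1, x2):
    """First-order geometric error of correspondences (x1, x2) under
    the epipolar constraint x2^T F x1 = 0. Points are Nx3 homogeneous."""
    Fx1 = x1 @ F.T    # epipolar lines of x1 in image 2
    Ftx2 = x2 @ F     # epipolar lines of x2 in image 1
    num = np.sum(x2 * Fx1, axis=1) ** 2
    den = Fx1[:, 0]**2 + Fx1[:, 1]**2 + Ftx2[:, 0]**2 + Ftx2[:, 1]**2
    return num / den

def filter_correspondences(F, pts1, pts2, thresh=1.0):
    """Boolean mask of matches consistent with F
    (Sampson distance below thresh, in squared pixels)."""
    ones = np.ones((len(pts1), 1))
    x1 = np.hstack([np.asarray(pts1, float), ones])
    x2 = np.hstack([np.asarray(pts2, float), ones])
    return sampson_distance(F, x1, x2) < thresh

# Toy setup: pure horizontal camera translation with identity intrinsics,
# whose fundamental matrix forces matched points to share the same row.
F = np.array([[0., 0., 0.],
              [0., 0., -1.],
              [0., 1., 0.]])
pts1 = np.array([[0., 0.], [1., 2.], [3., 1.]])
pts2 = pts1 + np.array([0.5, 0.])  # two valid matches...
pts2[2, 1] += 5.0                  # ...and one erroneous correspondence
mask = filter_correspondences(F, pts1, pts2, thresh=0.5)
print(mask)  # → [ True  True False]
```

In a full pipeline the fundamental matrix would itself be estimated robustly (e.g. with RANSAC) rather than known in advance, but the same residual test is what separates valid image regions from invalid ones.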
References
[1] Wei X, et al. DeepSFM: Structure from Motion via Deep Bundle Adjustment. Proc. of the Eur. Conf. on Computer Vision (ECCV), 2020.
[2] Wang J, et al. Deep Two-View Structure-from-Motion Revisited. Proc. of the IEEE/CVF Conf. on Computer Vision and Pattern Recognition (CVPR), 2021.
[3] Agrawal S, et al. High Accuracy Face Geometry Capture using a Smartphone Video. Proc. of the IEEE/CVF Winter Conf. on Applications of Computer Vision (WACV), 2020.
[4] Booth J, et al. 3D Reconstruction of "In-the-Wild" Faces in Images and Videos. IEEE Trans. on Pattern Analysis and Machine Intelligence, 40(11), 2018.
[5] Wen Y, et al. Self-Supervised 3D Face Reconstruction via Conditional Estimation. Proc. of the IEEE/CVF Int. Conf. on Computer Vision (ICCV), 2021.
[6] Dib A, et al. Towards High Fidelity Monocular Face Reconstruction with Rich Reflectance using Self-supervised Learning and Ray Tracing. Proc. of the IEEE/CVF Int. Conf. on Computer Vision (ICCV), 2021.
[7] Li T, et al. Topologically Consistent Multi-View Face Inference Using Volumetric Sampling. Proc. of the IEEE/CVF Int. Conf. on Computer Vision (ICCV), 2021.
[8] Drenkow N, et al. Robustness in Deep Learning for Computer Vision: Mind the Gap? arXiv preprint, 2021.
[9] Nightingale RC, et al. A Method for Economical Smartphone-Based Clinical 3D Facial Scanning. Journal of Prosthodontics, 2020;29(9):818-825.
[10] Mai H and Lee D. Accuracy of Mobile Device-Compatible 3D Scanners for Facial Digitization: Systematic Review and Meta-Analysis. Journal of Medical Internet Research, 2020.
[11] Gallardo Y, et al. Evaluation of the 3D error of 2 face-scanning systems: An in vitro analysis. Journal of Prosthetic Dentistry, 2021.
[12] Salazar-Gamarra R, et al. Monoscopic photogrammetry to obtain 3D models by a mobile device: a method for making facial prostheses. J Otolaryngol Head Neck Surg, 2016;45(1):33.
How to apply
- Email Dr Ludovic Magerand to:
  - send a copy of your CV
  - discuss your potential application and any practicalities (e.g. a suitable start date).
- After discussion with Dr Ludovic Magerand, formal applications can be made via our direct application system.
- For exceptional candidates, we can consider applying for competition funding or may be able to obtain direct funding.
- Both the May and September 2023 intakes are suitable for this project.
The China Scholarship Council (CSC) provides opportunities for Chinese students to undertake a PhD programme in any research field at the School of Life Sciences and the School of Science and Engineering. Successful applicants will receive support to enter the CSC competition scheme.