Real-time shape tracking of facial landmarks

H Kim, H Kim, E Hwang - Multimedia Tools and Applications, 2020 - Springer
Abstract
Detection of facial landmarks and accurate tracking of their shape are essential in real-time applications such as virtual makeup, where users can see the makeup’s effect by moving their face in diverse directions. Typical face tracking techniques detect facial landmarks and track them using a point tracker such as the Kanade-Lucas-Tomasi (KLT) point tracker. Typically, 5 or 64 points are used for tracking a face. Even though these points are enough to track the approximate locations of facial landmarks, they are not sufficient to track the exact shape of facial landmarks. In this paper, we propose a method that can track the exact shape of facial landmarks in real-time by combining a deep learning technique and a point tracker. We detect facial landmarks accurately using SegNet, which performs semantic segmentation based on deep learning. Edge points of detected landmarks are tracked using the KLT point tracker. In spite of its popularity, the KLT point tracker suffers from the point loss problem. We solve this problem by executing SegNet periodically to recalculate the shape of facial landmarks. That is, by combining the two techniques, we can avoid the computational overhead of SegNet and the point loss problem of the KLT point tracker, which leads to accurate real-time shape tracking. We performed several experiments to evaluate the performance of our method and report some of the results herein.
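Below is a minimal sketch of the combined tracking loop described in the abstract, assuming OpenCV's pyramidal Lucas-Kanade tracker (cv2.calcOpticalFlowPyrLK) as the KLT stage and a hypothetical segnet_predict() callable standing in for the SegNet segmentation model. The refresh interval and the edge-point extraction helper are illustrative assumptions, not the authors' implementation.

```python
import cv2
import numpy as np

REFRESH_INTERVAL = 30  # frames between SegNet re-detections (illustrative value, not from the paper)

def landmark_edge_points(label_map):
    """Extract edge points of the segmented landmark regions (hypothetical helper)."""
    edges = cv2.Canny((label_map > 0).astype(np.uint8) * 255, 50, 150)
    ys, xs = np.nonzero(edges)
    return np.stack([xs, ys], axis=1).astype(np.float32).reshape(-1, 1, 2)

def track_shapes(frames, segnet_predict):
    """Combine periodic SegNet segmentation with KLT optical-flow tracking of edge points."""
    prev_gray, points = None, None
    lk_params = dict(winSize=(21, 21), maxLevel=3,
                     criteria=(cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 30, 0.01))
    for i, frame in enumerate(frames):
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        if i % REFRESH_INTERVAL == 0 or points is None or len(points) == 0:
            # Periodic SegNet pass recovers the full landmark shape and
            # replaces any points lost by the KLT tracker.
            label_map = segnet_predict(frame)  # assumed segmentation callable
            points = landmark_edge_points(label_map)
        else:
            # KLT (pyramidal Lucas-Kanade) tracking between SegNet refreshes.
            points, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, gray, points, None, **lk_params)
            points = points[status.ravel() == 1].reshape(-1, 1, 2)
        prev_gray = gray
        yield points  # current edge points of the landmark shapes
```

In this sketch the segmentation pass is the expensive step, so the choice of refresh interval trades landmark-shape accuracy against per-frame latency, which mirrors the paper's motivation for running SegNet only periodically while the lightweight KLT tracker handles the intermediate frames.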