Automated Facial Animation Using Marker Point for Motion Extraction
DOI: https://doi.org/10.38106/LMRJ.2024.6.4-07

Keywords: Tracker, Mahalanobis Distance, Hough Transform, Covariance, Algorithm

Abstract
This research presents an automated technique for generating 3D facial expressions extracted from real-life video of face motion. Facial expressions are extracted from a real human face using the Hough transform algorithm to obtain the x and y coordinates of each marker, a covariance matrix for detecting the face marker points, and the Mahalanobis distance to measure the displacement of each marker point between frames. The tracking technique uses markers placed at key positions over the facial muscles; by locating each marker in every frame of a pre-recorded face video and applying the distance algorithm, the movement of each facial muscle is detected and measured. The markers are detected and tracked by the system using color segmentation: the color of each marker point is detected, and its location and displacement are tracked across frames. The original and translated positions of each marker point are recorded as vector values in a text file. These tracked values are then imported into a 3D animation package such as MAYA and applied to a pre-rigged 3D model of a human face, which is rigged with joints to emulate the behavior of the facial muscles.
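A minimal sketch of the two tracking steps the abstract describes — color-segmentation marker detection and Mahalanobis-distance matching of markers between frames — might look like the following. This is an illustrative assumption, not the authors' implementation; the function names, color tolerance, and synthetic data are invented for the example.

```python
import numpy as np

def marker_centroid(frame, target_rgb, tol=30):
    """Color segmentation: return the centroid (x, y) of all pixels whose
    RGB value is within `tol` of `target_rgb`, or None if no pixel matches.
    `tol` is an assumed per-channel tolerance, not a value from the paper."""
    diff = np.abs(frame.astype(int) - np.asarray(target_rgb)).max(axis=-1)
    ys, xs = np.nonzero(diff <= tol)
    if xs.size == 0:
        return None  # marker not visible in this frame
    return np.array([xs.mean(), ys.mean()])

def match_markers(prev_pts, curr_pts):
    """Pair each marker from the previous frame with its nearest marker in
    the current frame, using the Mahalanobis distance computed under the
    covariance of all observed marker positions."""
    pts = np.vstack([prev_pts, curr_pts])
    cov_inv = np.linalg.inv(np.cov(pts.T))
    matches = []
    for i, p in enumerate(prev_pts):
        d = curr_pts - p                                  # offsets to every candidate
        dists = np.sqrt(np.einsum('ij,jk,ik->i', d, cov_inv, d))
        j = int(np.argmin(dists))
        matches.append((i, j, float(dists[j])))           # (prev index, curr index, distance)
    return matches
```

The matched index pairs give each marker's displacement between frames, which could then be written out as the per-frame vector values the abstract says are stored in a text file.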

License
Copyright (c) 2025 Mehran Syed, Zeeshan Bhatti, Azar Akbar Memon, Zia Ahmed Shaikh, Ahmed Muhammad Sheikh, Nisar Ahmed Memon

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.
Copyright: Open-access journal copyright lies with the authors and is protected under the CC BY-NC-ND 4.0 licence (https://creativecommons.org/licenses/by-nc-nd/4.0/).