Automated Facial Animation Using Marker Point for Motion Extraction

Authors

  • Mehran Syed A.H.S Bukhari Postgraduate Institute of Information and Communication Technology, University of Sindh, Pakistan
  • Zeeshan Bhatti A.H.S Bukhari Postgraduate Institute of Information and Communication Technology, University of Sindh, Pakistan
  • Azar Akbar Memon A.H.S Bukhari Postgraduate Institute of Information and Communication Technology, University of Sindh, Pakistan
  • Zia Ahmed Shaikh A.H.S Bukhari Postgraduate Institute of Information and Communication Technology, University of Sindh, Pakistan
  • Ahmed Muhammad Sheikh Cleveland State University, USA
  • Nisar Ahmed Memon A.H.S Bukhari Postgraduate Institute of Information and Communication Technology, University of Sindh, Pakistan

DOI:

https://doi.org/10.38106/LMRJ.2024.6.4-07

Keywords:

Tracker, Mahalanobis Distance, Hough Transform, Covariance, Algorithm

Abstract

In this research work, an automated 3D facial expression generation technique is presented, in which expressions are extracted from real-life video of face motion. Facial expressions are extracted from a real human face using the Hough transform algorithm to obtain the x- and y-coordinate values, a covariance matrix to detect the face marker points, and the Mahalanobis distance to calculate the distance of each marker point across frames. The point-tracking technique uses markers placed at key positions over the facial muscles; by obtaining each marker's position in every frame of a pre-recorded face video and applying the distance algorithm, the movement of each facial muscle is detected and measured. The facial muscles are marked with specific tracking markers that the system detects and tracks. Tracking is performed using color segmentation: the color of each marker point is detected, and its location and displacement are tracked. The original and translated position values of each marker point are recorded as vector values in a text file. These tracked values are then transferred to 3D animation software such as MAYA and applied to a pre-rigged 3D model of a human face. The 3D face is rigged using joints to emulate facial muscle behavior.
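The covariance and Mahalanobis distance steps described in the abstract can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation: the function names, the sample marker positions, and the 2D (x, y) representation are assumptions introduced here for clarity.

```python
import math

def track_stats(positions):
    """Mean and sample covariance matrix of a list of (x, y) marker positions."""
    n = len(positions)
    mx = sum(p[0] for p in positions) / n
    my = sum(p[1] for p in positions) / n
    sxx = sum((p[0] - mx) ** 2 for p in positions) / (n - 1)
    syy = sum((p[1] - my) ** 2 for p in positions) / (n - 1)
    sxy = sum((p[0] - mx) * (p[1] - my) for p in positions) / (n - 1)
    return (mx, my), ((sxx, sxy), (sxy, syy))

def mahalanobis_2d(point, mean, cov):
    """Mahalanobis distance of a 2D point from `mean` under covariance `cov`."""
    dx, dy = point[0] - mean[0], point[1] - mean[1]
    (a, b), (c, d) = cov
    det = a * d - b * c  # assumed non-singular covariance
    # Inverse of the 2x2 covariance matrix, written out explicitly.
    inv = ((d / det, -b / det), (-c / det, a / det))
    return math.sqrt(dx * (inv[0][0] * dx + inv[0][1] * dy) +
                     dy * (inv[1][0] * dx + inv[1][1] * dy))

# Hypothetical (x, y) positions of one marker across four video frames.
positions = [(10.0, 20.0), (11.0, 21.5), (12.0, 20.5), (13.0, 22.0)]
mean, cov = track_stats(positions)
# Distance of a newly detected candidate point from the marker's track.
d = mahalanobis_2d((13.0, 22.0), mean, cov)
```

In a tracker of this kind, a candidate point detected by color segmentation in the next frame would be accepted as the same marker when its Mahalanobis distance to the marker's existing track falls below a threshold.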

Published

2024-12-31

How to Cite

Syed, M., Bhatti, Z., Memon, A. A., Shaikh, Z. A., Sheikh, A. M., & Memon, N. A. (2024). Automated Facial Animation Using Marker Point for Motion Extraction. LIAQUAT MEDICAL RESEARCH JOURNAL, 6(4). https://doi.org/10.38106/LMRJ.2024.6.4-07