Markerless Facial Animation Capture and Real-Time Viseme Detection

This was an R&D project exploring the viability of open-source markerless facial animation, combining webcam-based facial landmark recognition with machine-learning viseme detection that parses audio and maps it onto pre-created blendshapes in Maya or Unity.

GitHub Repository


  • Uses OpenCV and dlib in conjunction with OpenFace for 68-point facial landmark detection, detecting Action Units (AUs) based on the FACS model.
  • Uses ZeroMQ to stream Action Unit (AU) intensities to a Unity3D/Unreal Engine 4 project for real-time demonstration, or via a PyMEL pipeline to Maya for keyframe recording (TODO).
  • Uses RNNoise to denoise microphone input and feeds it to VisemeNet to extract the relevant visemes, which are then applied to a Daz3D Genesis 3 model in Maya via the PyMEL pipeline.
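To illustrate the landmark-to-AU step, the sketch below estimates a jaw-drop (AU26-like) intensity from 68-point landmarks. The landmark indices follow the standard dlib/iBUG 68-point layout, but the function names and the normalization constant are hypothetical, not taken from the project; in practice the landmarks would come from a `dlib.shape_predictor` run on each webcam frame.

```python
def euclidean(p, q):
    """Distance between two (x, y) landmark points."""
    return ((p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2) ** 0.5

def jaw_drop_intensity(landmarks):
    """Rough AU26 (jaw drop) intensity in [0, 1].

    landmarks: list of 68 (x, y) tuples in iBUG ordering.
    Point 62 is the middle of the upper inner lip, 66 the middle of
    the lower inner lip; points 36 and 45 are the outer eye corners,
    used to normalize for face scale and camera distance.
    """
    mouth_gap = euclidean(landmarks[62], landmarks[66])
    face_scale = euclidean(landmarks[36], landmarks[45])
    if face_scale == 0:
        return 0.0
    ratio = mouth_gap / face_scale
    # 0.45 is an assumed ratio for a fully open jaw (calibration-dependent).
    return min(ratio / 0.45, 1.0)
```

The eye-corner normalization keeps the intensity stable as the subject moves toward or away from the camera, which matters for a markerless setup with no fixed head rig.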
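The ZeroMQ bridge to the game engine could be sketched as a PUB socket broadcasting one JSON frame of AU intensities per captured video frame. The endpoint, topic string, and message schema below are assumptions for illustration; the project's actual wire format may differ.

```python
import json
import zmq

ENDPOINT = "tcp://127.0.0.1:5556"  # hypothetical port

def encode_au_frame(au_intensities):
    """Serialize a {AU name: intensity} dict as a UTF-8 JSON payload."""
    return json.dumps(au_intensities, sort_keys=True).encode("utf-8")

def publish_au_stream():
    """Bind a PUB socket and send one AU frame under a topic prefix."""
    ctx = zmq.Context.instance()
    pub = ctx.socket(zmq.PUB)
    pub.bind(ENDPOINT)
    # The Unity/UE4 side would hold a SUB socket connected to ENDPOINT
    # and subscribed to the b"AU" topic, applying each frame's weights
    # to the character's blendshapes on the render thread.
    frame = {"AU01": 0.2, "AU12": 0.8, "AU26": 0.5}
    pub.send_multipart([b"AU", encode_au_frame(frame)])
    pub.close()
    ctx.term()
```

PUB/SUB fits this pipeline because dropped frames are harmless for a live preview: the engine only ever needs the most recent AU values, not a reliable history.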
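On the Maya side, each detected viseme has to be turned into blendshape weights before PyMEL can key them. The sketch below shows one plausible shape for that mapping; the viseme labels and blendshape target names are hypothetical examples, not the project's actual Genesis 3 rig, and in Maya the resulting weights would be set through PyMEL on the character's blendShape node.

```python
# Hypothetical mapping from viseme label to (blendshape target, full weight).
VISEME_TO_BLENDSHAPE = {
    "AA": ("vAA", 1.0),
    "OH": ("vOH", 1.0),
    "EE": ("vEE", 0.9),
    "FV": ("vFV", 0.8),
    "sil": (None, 0.0),  # silence: relax all targets
}

def blendshape_weights(viseme, confidence):
    """Return {target: weight} for one detected viseme.

    confidence in [0, 1] scales the target's full weight; every other
    target in the mapping is driven back to 0 so visemes don't stack.
    """
    weights = {t: 0.0 for t, _ in VISEME_TO_BLENDSHAPE.values() if t}
    target, full = VISEME_TO_BLENDSHAPE.get(viseme, (None, 0.0))
    if target is not None:
        weights[target] = full * max(0.0, min(confidence, 1.0))
    return weights
```

Zeroing the non-active targets every frame keeps the mouth from accumulating weight when the viseme detector switches labels mid-word; a production rig would likely cross-fade between frames instead of snapping.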
  • Responsibilities: All Aspects
  • Software Used: OpenCV, Unity, Xcode, C++, Python, ZeroMQ, DAZ