It's amazing work!
I found that LiveLink Face data driven by a MetaHuman doesn't sync or calibrate properly.
Below is the captured expression from a video of a smiling person; MediaPipe tracked it quite accurately.
Here is the MetaHuman's response to the MeFaMo-transferred data.
Is there any parameter or step I should check or improve?
In your demo video, your smile is quite well synced:
https://www.reddit.com/r/unrealengine/comments/r8wbe3/my_livelink_facetracking_without_an_apple_device/
Does a freshly exported MetaHuman need the blueprint modifications from the link you mentioned below?
https://docs.unrealengine.com/4.27/en-US/AnimatingObjects/SkeletalMeshAnimation/FacialRecordingiPhone/
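For what it's worth, one common first step when transferred expressions look offset is a neutral-pose calibration: capture the blendshape values while the face is at rest, then subtract and rescale each live value so the neutral pose maps to zero before sending it over LiveLink. A minimal sketch of that idea (the function, dictionary layout, and rescaling formula are my own illustration, not MeFaMo's actual API):

```python
def calibrate(live: dict, neutral: dict) -> dict:
    """Rescale each blendshape so the captured neutral pose reads as 0.0.

    Remaps each value from [neutral, 1.0] to [0.0, 1.0] and clamps it
    to the valid blendshape range.
    """
    out = {}
    for name, value in live.items():
        base = neutral.get(name, 0.0)
        denom = max(1.0 - base, 1e-6)  # avoid division by zero
        out[name] = min(max((value - base) / denom, 0.0), 1.0)
    return out

# Hypothetical values: the resting face already shows a slight smile,
# so uncalibrated data would make the MetaHuman smirk constantly.
neutral = {"mouthSmileLeft": 0.15, "jawOpen": 0.05}
live = {"mouthSmileLeft": 0.70, "jawOpen": 0.05}
print(calibrate(live, neutral))
# mouthSmileLeft rescales to ~0.647; jawOpen, equal to its neutral, maps to 0.0
```

If the project exposes a calibration keypress or neutral-capture step, running it while holding a relaxed expression may be all that's needed.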