Are you actually using a mic for this, or is it just based on how your mouth looks?
What camera are you using?
Also, about those bars: they are the open/closed or min/max states of that movement. To calibrate them, put them both at the ends, then (for example) close one eye and move the bar to where the white bar is now; do the same with the eye fully open.
When I get the mouth to move, I usually have to move it in a VERY exaggerated way and somewhat more slowly than normal human speech.
I find myself putting my face closer to the camera, trying to get more mouth movement.
I'm using a Logitech C920 set to:
Image Quality: RightLight
Default brightness/contrast/color intensity (it auto adjusts).
Anti-flicker: NTSC - 60 Hz
Resolution: Max (1080p)
I had to use the zoom function within my Logitech software, close it, then open FaceRig so it takes control of the camera driver while zoomed correctly on my face.
So the red bar is min and the green bar is max, correct? I don't have a white bar for most options; there's a black line that moves with my face. Some options have a white area with diagonal black lines in it and four bars: two move the grey min/max areas, and the other two move the white hash-lined area.
I'm trying to use the Lava Baron with the Tribal skin, if that matters. The default raccoon seems to SORT OF move his mouth when I do, but it isn't very pronounced.
Thanks for trying to help. :)
The avatar's lips are driven by the video image and/or by the audio signal from the microphone. I suggest testing them individually. First, I would test the microphone path:
1. Toggle lipsync ON and face track OFF
2. The avatar should be in an idle pose and its lips should move based on the audio signal from your microphone. Are the lips moving? If yes, go to step 5. If not, let's investigate your audio device configuration.
3. Go to General Options -> Devices tab on the menu on the right. Check whether your microphone is selected in the Audio Recording Devices dropdown; if it's not, select it. If your microphone does not appear in the dropdown list, we should discuss that issue separately.
4. Go to General Options -> Sound Options tab. There is a green bar and a black bar.
The green bar represents the sound level in your microphone; it should move to the right as you talk and stay to the left when you are silent.
The black bar represents the sound level above which the audio signal moves the lips; in other words, it is a noise threshold. When the audio signal (green bar) exceeds the noise threshold (black bar), you are considered to be talking and the lips should move (see the sketch after this list).
If the green bar is not moving and stays at the right side, please use the Auto Calibrate button. While calibrating, you should stay silent (not talk), because we measure the noise level in your environment.
5. If you've reached this step, your avatar should be moving its lips. If the lips seem out of sync with the sound, you can configure a sound delay in the General Options -> Sound Options tab.
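To make the threshold logic from step 4 concrete, here is a minimal Python sketch. The function names and numbers are hypothetical, for illustration only; this is not FaceRig's actual code:

    # Hypothetical illustration of the noise-threshold gate described in step 4.
    # signal_level corresponds to the green bar, noise_threshold to the black bar.

    def lips_should_move(signal_level: float, noise_threshold: float) -> bool:
        """Audio drives the lips only while the signal exceeds the threshold."""
        return signal_level > noise_threshold

    def auto_calibrate(silent_samples: list[float], margin: float = 1.2) -> float:
        """Estimate a noise threshold from levels recorded while the user is silent."""
        return max(silent_samples) * margin

    # Example: ambient noise peaks around 0.08, so the threshold lands at 0.096.
    threshold = auto_calibrate([0.05, 0.08, 0.06])
    print(lips_should_move(0.40, threshold))  # True  -> talking, lips move
    print(lips_should_move(0.07, threshold))  # False -> silence, lips idle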
Ok, let's now configure the video image tracker on its own (without microphone-driven lipsync).
NOTE: It's a long discussion; we will make a detailed guide on this topic, but until then I will respond here :).
1. Toggle lipsync OFF and face track ON
2. Go to Advanced Face Tracking Calibration -> Expression Units Tab on the menu on the right.
3. Start moving your facial features: mouth, eyebrows, eyes... For each element in the list, there is a black bar that moves as you gesture; it represents the tracked value for that feature. There are two types of elements in the Expression Units list: those with two bars (red and green), such as Jaw Drop, and those with two areas and four bars, such as Nose Up/Down or Eyebrows In/Out.
The red bar represents the lower limit of the tracked value, at which the animation on the avatar is at its lowest point; the green bar likewise represents the upper limit. Take Jaw Drop as an example: when your mouth is closed, the black bar sits below the red bar, so the tracked value is below the lower bound. When you open your mouth, the black bar moves to the right and should eventually reach the green bar. If it doesn't, configure this unit by moving the green bar to the left until it matches the maximum value you actually reach on this expression (see the sketch below).
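If it helps to picture it, a two-bar unit behaves like a clamped linear remap of the tracked value onto an animation weight. Here is a minimal Python sketch of that idea (hypothetical names and numbers, not FaceRig's actual code):

    def expression_weight(tracked: float, red: float, green: float) -> float:
        """Map a raw tracked value onto a 0..1 animation weight.

        Below the red bar the expression stays at its lowest point (0.0);
        at or above the green bar it is fully triggered (1.0);
        in between it ramps linearly.
        """
        if green <= red:
            raise ValueError("the green (max) bar must sit above the red (min) bar")
        t = (tracked - red) / (green - red)
        return max(0.0, min(1.0, t))

    # Example: Jaw Drop calibrated with red = 0.2 and green = 0.7.
    print(expression_weight(0.10, 0.2, 0.7))  # 0.0 -> mouth closed
    print(expression_weight(0.45, 0.2, 0.7))  # 0.5 -> mouth half open
    print(expression_weight(0.90, 0.2, 0.7))  # 1.0 -> mouth fully open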
The elements with the hashed areas and four bars work the same way, but with a neutral dead zone in which the tracked value does not change the animation. They are effectively two of the basic elements described above combined into one.
Let's look at EyeBrow Left Interior. Drag the bars a bit so you can see something (they are a little crowded, I know; we will change this soon :) ). You can see two hashed areas, one bounded by two red bars and one bounded by two green bars. When the black bar is outside both (in the middle), this expression unit stays neutral. When it enters the red zone, it drives the animation from neutral toward its lowest point; likewise for the green zone, toward its highest point. As above, if the black bar does not reach the limits, calibrate the areas by moving the bars so that the tracked value covers the whole interval (see the sketch below).
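The four-bar units can be pictured as the same remap applied in two directions around the dead zone. Again a hypothetical Python sketch, assuming the bars are laid out as red_lo < red_hi < green_lo < green_hi:

    def dead_zone_weight(tracked: float,
                         red_lo: float, red_hi: float,
                         green_lo: float, green_hi: float) -> float:
        """Map a tracked value onto a -1..+1 weight around a neutral dead zone.

        Inside (red_hi, green_lo) the unit stays neutral (0.0).
        Entering the red zone drives the animation toward its lowest point (-1);
        entering the green zone drives it toward its highest point (+1).
        """
        if tracked <= red_hi:
            t = (red_hi - tracked) / (red_hi - red_lo)
            return -min(1.0, max(0.0, t))
        if tracked >= green_lo:
            t = (tracked - green_lo) / (green_hi - green_lo)
            return min(1.0, max(0.0, t))
        return 0.0  # inside the dead zone: neutral

    # Example: an eyebrow tracked value passing through the three zones.
    print(dead_zone_weight(0.15, 0.1, 0.3, 0.6, 0.9))  # -0.75 -> toward lowest
    print(dead_zone_weight(0.45, 0.1, 0.3, 0.6, 0.9))  #  0.0  -> neutral
    print(dead_zone_weight(0.80, 0.1, 0.3, 0.6, 0.9))  # ~0.67 -> toward highest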
These are the basic guidelines for configuring the face tracker and the audio lipsync. I hope they help until we publish a written or video guide on this topic. :)
The mouth movements do work much better with some sensitivity/boost adjustments for my microphone.
Most of the time I was trying to get more lip sync movement without a mic, but luckily I don't need to do that.
I will be messing with the expression units to aim for the most lifelike response to my movements with both face tracking and lip sync on, though.
This software is amazing, you're doing great so far. I look forward to updates.