Mass Effect’s Dialogue is Rather Massive
If you’ve watched the making-of documentary for the film Final Fantasy: The Spirits Within, you’ll have noticed the painstaking approach Square Pictures took to matching the spoken dialogue with the facial movements. For the most part, Square’s animators synced the lip movements by hand, something I was particularly impressed with. Unfortunately, the end result was far from perfect: the facial animations seemed “stiff,” undercutting the otherwise realistic-looking CGI around them. Perhaps the CGI itself was far ahead of its time, but the phonetic interpretation technology certainly was not.
Here we are six years later, and technology finally seems to be catching up. BioWare is taking advantage of new phonetic interpretation technology in its upcoming RPG, Mass Effect, which delivers some pretty darned convincing facial expressions by video game standards; certainly close to what Square’s manual method achieved six years ago. Since BioWare’s technology is virtually automatic, more development time and resources can be devoted elsewhere. The Edmonton Sun posted an interesting article yesterday discussing Mass Effect’s use of cutting-edge phonetic interpretation technology and how immensely useful it has been to the game’s development:
This is rarely a problem when video games are translated into other languages, for the same reason it’s relatively easy to dub a cartoon into another tongue: Most game characters’ mouths flap like gibbering hand puppets, so they could be saying anything from “Have a fantastic day!” to “Die, filthy pig-dog!” and you’d never know the difference.
Which could have been a headache for the game’s developers, given that Mass Effect will be released simultaneously in English, French, Spanish, Italian and German when it goes on sale worldwide in late November. Take 30,000 lines of spoken dialogue, multiply it by five languages and you have an animation team’s worst nightmare.
Fortunately, the game’s custom-built underlying software actually “listens” to each voice actor’s recorded dialogue and shapes the on-screen character’s mouth movements to sync up with what’s being said, no matter what language they’re speaking.
How true. Even Source’s phonetic matching animation is a bit crude by today’s standards. In fact, before the original Half-Life, games didn’t even attempt a “flapping jaw” mechanism to simulate a realistic conversation. You’d have this nicely drawn character model telepathically initiating a conversation with you. Things got even worse when game developers couldn’t afford multi-language voice recording and fell back on subtitles instead. In those stone-age days of game dialogue, half the time you couldn’t tell whether the subtitles matched a voice in your head, the guy walking around in the distance, or the guy standing behind you flailing his arms. Ahh, the good ol’ days of FPS subtitle hell.
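For the curious, the core idea behind this kind of language-independent lip sync is usually phoneme-to-viseme mapping: a speech analyzer emits timed phonemes from the recorded audio, and each phoneme maps to a mouth shape (a “viseme”) regardless of what language the actor is speaking. Here’s a minimal illustrative sketch of that idea in Python; the names and the tiny lookup table are my own hypothetical examples, not BioWare’s actual pipeline:

```python
# Hypothetical sketch of phoneme-driven lip sync. Real systems use roughly
# 40 phonemes collapsed into around a dozen visemes; this table is a toy.
PHONEME_TO_VISEME = {
    "AA": "open",       # as in "father"
    "IY": "wide",       # as in "see"
    "UW": "round",      # as in "too"
    "M":  "closed",     # lips pressed together
    "B":  "closed",
    "P":  "closed",
    "F":  "teeth-lip",  # as in "fan"
    "V":  "teeth-lip",
}

def phonemes_to_keyframes(timed_phonemes):
    """Turn (phoneme, start_time_seconds) pairs into viseme keyframes,
    collapsing consecutive identical mouth shapes into one keyframe."""
    keyframes = []
    for phoneme, start in timed_phonemes:
        viseme = PHONEME_TO_VISEME.get(phoneme, "neutral")
        if not keyframes or keyframes[-1][1] != viseme:
            keyframes.append((start, viseme))
    return keyframes

# Example: the word "map" analyzed as M, AA, P with timestamps.
print(phonemes_to_keyframes([("M", 0.00), ("AA", 0.08), ("P", 0.20)]))
# -> [(0.0, 'closed'), (0.08, 'open'), (0.2, 'closed')]
```

Because the mapping operates on sounds rather than words, the same table drives the English, French, or German recording of a line, which is presumably why an approach like this scales across five simultaneous localizations.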
Given the limited resources available, I find it extremely cool that game developers are finding more automated ways of bringing us realism. Sure, Mass Effect’s facial animation will seem a little off at first. But given the sheer amount of dialogue your purchase will net you, I think you’ll find yourself forgiving ME for trading a little quality for quantity. I guarantee that by your 50th hour invested in Mass Effect, you’ll have forgotten what you were picky about to begin with.
Via Edmonton Sun