Steve Haynes
The growth of the international
film market has seen a consistent rise in volume and sophistication of
'foreign language versions'. Steve Haynes tells Richard Buskin about dubbing
and Rugrats
AFTER A BRIEF STINT as a 35mm cinema projectionist, Steve Haynes joined
the University of York's Educational Technology Department as a junior
technician during the late 1960s and experienced his own form of heaven.
A radio ham and keen musician who built his own guitar and amplifier,
Haynes loved an environment that afforded him the opportunity to dabble
with all forms of recording equipment. Soon, however, he felt drawn towards
the twin worlds of film and TV, and this interest led to a job as a trainee
assistant sound recordist at Yorkshire Television.
In the early 1970s practically everything was still shot on film, and
so Haynes initially worked on the road as a member of the film crew, before
satisfying his desire for more creativity by moving into postproduction
dubbing. Now immersed in both music and technology, he was at last doing
what he really wanted, and eventually became YTV's head of postproduction
sound. Haynes worked on a wide range of the network's biggest productions,
from A Touch of Frost to The Darling Buds of May, and became a devotee
of the AMS Neve AudioFile and the Logic console, and in the early 1990s
YTV purchased the first Logic II. In 1997, lured by the new technology
as well as the prospect of film work, he returned to London and a job
with DB Post Production. Three years later he became director of sound
at Lip Sync, where he is a re-recording mixer.
Established as a production house 14 years ago, Lip Sync specialises
in film and television trailers, adverts, and TV production and postproduction.
However, it was Steve Haynes who accepted the responsibility of setting
up the company's fully-fledged sound facility in London's Soho district,
comprising two large studios each equipped with a 32-channel DFC, THX
monitors and Dolby EX Surround, two smaller studios, two prep rooms, and
a central apparatus room. Half a dozen AudioFiles are at the heart of
a comprehensive array of gear.
In addition to high-end TV work, Haynes also enjoys mixing foreign-language
movie dubs, something which he first did at DB Post. This came about because
many of the recordings made in overseas territories were not only sub-standard,
but also disparate due to differences in culture and approach.
'You must have seen those English-language films dubbed in Italian,'
says Haynes. 'Everything's horribly close-miked and it just sounds like
it's been dubbed. It doesn't sound like it's part of the film.'
Consequently, in an effort to ensure that foreign audiences enjoy a largely
similar movie experience, the filmmakers have taken charge of the situation
by supervising the dubbing and centralising the mix. This is where Lip
Sync fits in, Steve Haynes having recently worked on the Hindi, Telugu
and Tamil versions of Gladiator, and the Japanese, Latin American Spanish
and Brazilian Portuguese versions of How the Grinch Stole Christmas, as well as the
Flemish, Dutch, Danish, Spanish and German versions of Rugrats in Paris.
Q Does this more controlled
process of dubbing and mixing mean that foreign audiences are no longer
getting what they're accustomed to?
Well, no matter the language, I think directors have a particular idea
as to the way in which they want audiences to experience a film. You know,
when you're doing a foreign version, the idea is to emulate exactly the
original sound. For instance, in Gladiator, when he shouts in the Colosseum,
you get that lovely slap echo off the walls. You've got to have that same
sound. He's got to be distant and with just the right echo. The only difference
is, whatever he says is in another language.
Q More use of subtitles would
certainly cut down on your work...
There aren't many films with subtitles that I've enjoyed. Your eyes dwell
on the subtitles and you miss a lot of what's in the visual.
Q Well, at least you're hearing
the true voices.
Yes, but if the foreign version is done well, then the difference should
be minimal. I supervised the Hindi version of Gladiator, and the casting
director had done a fantastic job choosing the voices. I mean, the guy
who did Ollie Reed's voice really did sound like he smoked 50 cigarettes
a day -- It matched exactly, and it was like Ollie Reed speaking Hindi.
It was really, really good. There again, for Russell Crowe's character,
the actor had that same dark brown kind of voice, and it was a joy to
watch.
Q While the studios want to
ensure standardisation of the mix, they obviously don't have the time
or resources to take care of all this post work...
Oh God, no. I mean, Gladiator went out in at least 22 different languages,
and that's an enormous amount of work. I think The Grinch went out in
30 languages. It's big business, and it would just clog up the studios.
Q While you did the Indian
language versions of Gladiator, who took care of the others?
They were all spread among the various studios.
Q But wasn't that defeating
the original objective of having a central place to do the mix?
Well, all of the studios are pretty close together. They're nearly all
based in Soho--they're certainly all based in London--and we all work
in a pretty similar way. If, for instance, you've got a postproduction
supervisor coming from a firm like UIP--which is handling a Nickelodeon
film like Rugrats--then he will be able to go around from studio to studio.
After all, if you did it all in one studio you'd be doing it for weeks.
It's almost a week per film, and if you have, say, 20 different languages,
it would take too long to get them all done. You know, who would you put
last? These things tend to get released in America first, then in the
UK, then in Europe, then in Eastern Europe, and so on, but not over the
course of 20 weeks.
Q So from the film companies'
perspective, what they are interested in is not so much everything being
done on identical gear and by the same people, but that there is a standard
level of facility and ability.
Yes, and we're all mates anyway. We all know one another, and I think
the technical standards are similar.
Q What are the specific problems
and solutions of different language versioning? Do different languages
present different problems?
Yes, they do. Some languages are quite similar to English, but others
are quite different and they will take much longer to say, "Yes, he's
over there." They'll jabber on for hours.
Q Hasn't care been taken during
the translation and recording processes to match mouth movements?
Yes, but obviously not all actors are in shot all of the time, so you
can often start a sentence when you're looking at the back of someone's
head, and then cut 'round to his face and end the sentence when his mouth
stops moving. There are lots of little tricks like that which you can
use to make it work; you can sync it up at the front or at the end, and
you can put a little cut where someone takes a breath.
Q But where the language is
more elaborate, isn't that taken into consideration during the recording?
Yes, they do try, and often they do have a slightly different slant on
the story. I mean, Japanese can certainly take a long time to say the
equivalent thing...
Q Does lip syncing almost go
out of the window at that point?
No, we try not to. Our main consideration is always to give the cinema-goer
the impression that it's been done in their language.
Q Do you speed up the dialogue
to fit it in?
Well, occasionally I have done this, but not by a lot. Obviously, you
don't want people talking like lunatics. You've got to be able to listen
to it. When we do the Dolby we have a native speaker present, and this
person can point out if we've cut a syllable or something.
Q How do you set up the desk?
Is there a general approach for this type of work?
Yes. The M&E--the music and effects--generally comes in on a DA-88--or
sometimes on an Akai MO--and the voices come in the same way, and what
we usually do is put them into the AudioFile so that we can manipulate
them. It's so quick--if
you think, "I wonder if that'll look better a frame later," you can do
it in a trice. So I usually set up the desk where one stem will be the
final mix. I'll also usually have the print master up in the AudioFile,
because while we do run the film when we're mastering it to the Dolby,
what we do is lift the film up onto a VMOD. This saves a lot of time beating
up and down the film; you can go back 15 seconds in half a second.
So, we run that, and we also have the English print master sound in the
AudioFile, which I send to another set of tracks, and then I'll send the
dialogue--with reverb and anything else that we've added--to yet another
set of tracks. This means that you can then record a clean set of dialogues.
You're obviously recording a final mix, but also the idea is that I can
switch between the stems on the desk very quickly--it's just a little
matrix of buttons--and I can compare what was done in English to the way
we've got it. You know, "Was his voice really that loud there?" or "Did
it have that amount of echo on it?" You can just switch quickly between
them. Obviously we run the film in English before we do anything else--we
sit and watch it and take notes, and so on--but you can't always remember
every syllable. So, it's nice to switch the stem that you're monitoring.
Being that the delivery medium is nearly always a DA-88, we will lay
off onto a 6-track for the SR digital. Another DA-88 will have the clean
dialogues, and then usually Tracks 7 and 8 will carry the Lt/Rt mix,
which is the 6-track mix folded down through the DS4E, giving
you left total and right total--effectively Dolby Pro Logic. That is used
to make the stereo variable area track--the optical track--which is on
every single film as well as the Dolby digital. The idea here is that
if the Dolby digital gives out, then the CP65 monitoring device in the
cinema will automatically switch over to the SR sound--the optical sound--and
hopefully most of the audience won't notice any difference. It'll be 4-track
as opposed to 6-track, but at least it won't go completely silent, and
then if the digits sort themselves out it'll come back on. To be honest,
it is pretty good--the digis are pretty reliable.
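The Lt/Rt fold-down described above can be sketched numerically. This is a simplified illustration of the conventional +/-3 dB matrix-encode coefficients only, not the DS4E's actual processing: real Dolby encoders also apply a 90-degree phase shift and band-limiting to the surround feed, both omitted here.

```python
import math

def fold_down_lt_rt(left, centre, right, surround):
    """Simplified Lt/Rt matrix encode of one sample from an L/C/R/S mix.

    Uses the basic -3 dB coefficient matrix; a real encoder additionally
    phase-shifts and band-limits the surround channel.
    """
    g = math.sqrt(0.5)                     # -3 dB, roughly 0.707
    lt = left + g * centre - g * surround  # surround out of phase on Lt
    rt = right + g * centre + g * surround # surround in phase on Rt
    return lt, rt
```

A centre-only sample comes out identical on Lt and Rt, while a surround-only sample comes out in antiphase--which is exactly the relationship a Pro Logic decoder later uses to steer those signals back to the centre and surround speakers.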
Q What do you use for monitoring?
JBLs in a cinema setup; eight surround speakers because of the EX, left-centre-right,
and a massive great sub. The subs have got about 1 kW pumping into them
because of the size of the rooms... You know, the rooms are 5.1, but they
are Dolby stereo surround EX, so there are left-centre-rights, and then
the surround is stereo--left surround and right surround--and that goes
into the EX decoder, which produces four channels: left-back and left-surround,
and right-back and right-surround. The decoder works in various different
modes depending on what's on the surround signal. I mean, if there's a
mono signal--in other words, if it's completely in phase--then it will
activate the two rear speakers and come out from directly behind you,
but if it's a stereo signal and there's lots of phase inconsistency, it
will come out of the two surround speakers at either side of you. It's
a good system. It works well, and it's a good way of basically getting
eight channels out of 5.1.
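The steering behaviour Haynes describes--in-phase surround content sent behind the listener, decorrelated content left at the sides--can be illustrated with a toy correlation measure. This is a hypothetical sketch of the principle only, not Dolby's EX decoder algorithm, which steers with continuous gains rather than a hard threshold.

```python
import math

def surround_correlation(ls, rs):
    """Normalised cross-correlation of a block of left/right surround samples."""
    num = sum(a * b for a, b in zip(ls, rs))
    den = math.sqrt(sum(a * a for a in ls) * sum(b * b for b in rs))
    return num / den if den else 0.0

def steer_ex(ls, rs, threshold=0.7):
    """Crude steering decision: highly correlated (effectively mono, in-phase)
    surround content goes to the back speakers; decorrelated or out-of-phase
    content stays at the side surrounds."""
    return "back" if surround_correlation(ls, rs) > threshold else "sides"
```

Feeding the same block of samples to both surround inputs steers to the back speakers; inverting one of them, or using unrelated signals, keeps the sound at the sides.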
Q According to the different
foreign language versions, do you treat the various components differently?
Well, whatever the M&E is supplied as, that's just carried on. It
goes straight through. Obviously the dialogues aren't often panned around,
and so it's rare that they'll go to the rear, although funnily enough
they do in Rugrats in Paris. There is a shout where Chucky runs towards
the camera, and starts in the centre speaker and ends up in the rears.
However, usually they're just in the centre speaker.
Q While you're invariably trying
to duplicate the original, do you ever listen to the original and think,
'I don't really like what they've done'?
Oh yes. Quite often. I quite often sit there and think, 'Ooh, I wouldn't
have done that.' But that doesn't mean to say that we don't do it. We
still emulate what's on the print master.