Jaguar XK150 Recording

Last Sunday (27th April) was ‘National Drive It Day’ in the UK, when everyone who owns a classic car tries to take it out for a drive. Luckily, I had access to this beautiful 1960 Jaguar XK150 DHC and decided it was the perfect opportunity to do some recording.

Jaguar XK150 DHC

For this recording, I wanted to try out some positions for the exhaust and engine and see what results I could get. I got hold of some DPA 4061s (these are the low-sensitivity version, so they can handle high-SPL sources) and was thinking about how I was going to attach them to the car when I came across these little gadgets:

DPA Magnet Mount

The picture above shows a ‘DPA Magnet Mount’, which grips the cable of the lavalier and allows you to attach it to magnetic surfaces. I was pretty sceptical about how well they would work when attached to the back of a car travelling at 60mph, but my reservations were unfounded and they worked astonishingly well.

I started with the exhaust mic, and decided that the best thing to do was to attach it to the rear chrome bumper. I didn’t want to place it directly above the exhaust, in case it was in the path of fumes (I’d just got these mics so was still a little precious over them), so I situated it within the curve of the overrider shown below.

Close-up of exhaust mic placement.

The cable entered the car through the boot, which I was quite anxious about doing. The boot had a thick rubber sealing tube around the inside edge to provide a flush fit with the body of the car; it looked like it would be enough to stop the thin mic cable from being cut, but I wanted to be absolutely sure so I found some thicker foam and wrapped the cable in that. I then pushed the boot gently over the foam until it closed to see what kind of indentation it made on the foam. Luckily, there was little to no sign of extreme pressure, so I was happy that the mic cable would remain undamaged.

The boot of the car has an opening which allows you to access it from the back seat. The cable was passed through this opening and trailed through to the passenger seat so I could plug it into the recorder.

Cable passed through the boot compartment

The placement of the engine mic was more forced than decided. I ran the engine to let it get hot and had a feel for where the mic would be safest. The engine block itself was obviously roasting, so I had to avoid that area as much as I could. I found a metal block in the corner of the engine bay which would be perfect for the mic, and there was nothing in the vicinity getting too hot, so I decided to give it a go. I initially placed the mic on top of the block but, due to the omni-directional nature of the mic, this placement wasn’t ideal: it was too close to the underside of the bonnet and the top of the block, giving little direct sound from the engine. To rectify this, I kept the magnetic mount on the top of the block but hung the mic down further into the engine bay to get more direct sound from the engine.

Engine mic hanging over the edge

Engine bay with mic placement to the right

Engine mic placement

With both mics in place, it was time to record. It took a while to get the levels right as I wasn’t anticipating quite how loud the car would be at these kinds of distances. My rig for this recording session consisted of a Sound Devices Mix Pre (lent to me by good friend and recording buddy Matt Meachem) as a front end, which was routed into a Fostex FR2-LE. The volume pots on the Mix Pre had to be set very low to prevent clipping, and the record level pot on the FR2-LE was also lower than expected. However, once I set the levels right, I had a pretty smooth recording run.

Recording rig in the passenger seat

I recorded two 45-minute drives on the 27th and two one-hour drives over the bank holiday weekend (4th and 5th May). I tried different mic positions across all the drives, and I'm going to listen to and compare them all to ascertain the best-sounding positions, or a potential location for another mic. I’m planning on doing a full recording to distribute via Arrowhead Audio and will use these as test recordings, which will sit in my personal library.

Unfortunately, one of my Rycote Lavalier Windjammers blew off (the one attached to the exhaust) mid-drive on Monday 5th May, which is a shame. I’m going to have to think of a way around that and listen to the recordings to see if it’s possible to tell where and why it blew off so it doesn’t happen again. In the meantime, I’ll be ordering some new ones, or if anyone knows of any alternatives that may hold a bit better, I’d love to hear about them.

Here are some clips of the recording taken on the first day (27th April): one is solely the engine mic, one is solely the exhaust mic, and the third is a mix of both.

Using the Oculus Rift’s head tracking to aid immersive audio.

This blog post has been drafted for a while now, but with the recent news that Facebook has bought Oculus for $2bn, it seemed like the perfect time to finish it off and publish it.

For those who are unfamiliar with the Oculus Rift, it is a headset which uses a single screen to display two images; each image is directed to one eye using two lenses, creating a 3D effect. It uses head tracking, so that if you look left while wearing the headset, the effect is replicated in the interactive media; the parameters within a game or piece of media are often mapped to the tilt and pan of the camera, using the orientation of your head to dictate the view. At this point, the Oculus Rift hardware is solely visual; however, if this sort of technology can be used to create an immersive visual experience for the user, then surely similar principles can be applied to provide fully immersive audio?

The Oculus Rift

Head tracking for audio works in a very similar way to the visual tracking outlined above. When you listen to something using conventional headphones, no matter how you move your head, the sound remains the same; this is because the soundfield is fixed relative to your head – if you look to the left, the auditory field follows. However, when head tracking is introduced, the sound sources are fixed within a 3D space, so turning your head changes what you hear.

an example of head-tracking

In order to understand how this effect might work, it’s important to understand how human hearing localises sound sources. There are two main cues we use to localise a sound’s origin: Inter-aural Time Difference (ITD) and Inter-aural Level Difference (ILD). ITD is a measure of how much earlier a sound arrives at one ear than the other; for example, if a sound arrives at the left ear first, we assume the sound originates from the left. ILD is similar, but identifies the difference in amplitude between the ears; if a sound is louder in the right ear, it is assumed that the sound originates from the right. The brain uses these differences in time and amplitude to pinpoint the origin of a sound, and the processes occur so quickly that they don’t even need to be thought about. We also naturally move our heads to narrow down a sound’s source: if the sound hits the right ear first, and is louder there, its source should lie to the right; moving the head allows the listener to determine the precise location, because it changes the time and level information at each ear.

Currently, when experiencing audio in interactive media, the soundfield is static and not linked to our head movements as it would be in a natural environment; it is instead mapped to the input of a controller. This pulls us out of the experience rather than immersing us further, because the soundfield is mapped to an unnatural stimulus (the controller inputs) rather than the natural stimulus this change is attributed to: the movements of our heads. If the movements of our heads are represented by changes in the media’s soundfield, it will be perceived as more realistic, thereby aiding immersion.
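As a rough numerical illustration of ITD, here's a small sketch using Woodworth's spherical-head approximation (not mentioned in the post itself; the head radius and speed of sound below are assumed average values):

```python
import math

SPEED_OF_SOUND = 343.0  # m/s, assumed value for air at room temperature
HEAD_RADIUS = 0.0875    # m, an assumed average head radius

def itd_seconds(azimuth_deg):
    """Approximate Inter-aural Time Difference for a source at the given
    azimuth (0 = straight ahead, 90 = directly to one side), using
    Woodworth's spherical-head model: ITD = (r / c) * (theta + sin(theta))."""
    theta = math.radians(azimuth_deg)
    return (HEAD_RADIUS / SPEED_OF_SOUND) * (theta + math.sin(theta))

# A source directly to one side arrives roughly 0.66 ms earlier at the
# nearer ear - a tiny difference, yet enough for the brain to localise it.
print(round(itd_seconds(90) * 1000, 2), "ms")
```

Even at its maximum, the delay between the ears is well under a millisecond, which gives a sense of just how fine-grained the brain's localisation machinery is.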

Using this theory, soundscapes could have their orientation parameters mapped to the tilt/pan of the character’s view, which in turn is controlled by the tilt and pan of the user’s head, allowing a truly immersive experience. There are, however, limitations to the development of audio head tracking, the main one being that the consumer could be listening to the audio through any number of means: headphones, stereo or 5.1, to name just a few. When using headphones, modelling of the Head Related Transfer Function (HRTF) and emulating binaural systems need to be considered, and would probably aid the immersion. The downfall of this is the variety of parameters involved in HRTF models – everyone’s HRTF is different, and choosing a model for each individual would not be efficient or possible for the consumer. This suggests a universal model would have to be decided upon, which wouldn’t work perfectly for everyone.

Using the Oculus Rift’s head tracking for audio is essential to the future success of immersive media systems. Spatial audio needs to be taken into account for the next iteration of the Rift, not least by the developers producing content for it; it is truly the next step in building upon the immersion that the Oculus Rift currently offers. Hopefully Facebook’s acquisition of Oculus will see a new iteration of the hardware which takes spatial audio into account.

For further information, this YouTube video showing a spatial audio engine in Unity 3D should be the first port of call; it perfectly demonstrates the benefits spatial audio could provide to the Oculus Rift:
https://www.youtube.com/watch?v=2dzsVjn8hNc

—————
Figure 1 – http://www.popularmechanics.com/technology/digital/gaming/eyeballs-on-with-the-oculus-rift-14977992
Figure 2 – http://www.sony.net/Products/vpt/tech/

Audio Interfaces suitable for 5.1 Surround Sound Monitoring

My Digi 003R is on its last legs, so I’ve searched around for some information on audio interfaces that are suitable for 5.1 surround monitoring. I couldn’t find a definitive source where all the information was contained on one page, so I thought I’d make one for anyone interested (although it’s quite subjective!).

My criteria for a surround sound interface are:
- It needs to be able to group the outputs to gain/attenuate the whole 5.1 system from the front panel volume knob.
- At least 96k/24-Bit sample rate/bit depth
- Good D/A conversion (on a see-saw with price)
- Cost effective
- Well built, rugged unit
- At least 2 mic pres for SFX/Dial recording
- Aesthetically pleasing (although, this is the least important requirement)
- Internal DSP isn’t needed and probably wouldn’t be used, so it is not desirable

Avid Omni – £2000 (£3800 w/ HD Native Card)
Pros – Great Features, Can group outputs
Cons - Very expensive, Requires a PTHD card, Low resolution meters
Thoughts –
I really like the look of the Omni, and I’ve heard it sounds pretty good too. It has a wealth of features and fits in with the PT ecosystem nicely. The only thing that puts me off is the price. I don’t own PTHD (I own PT10 + CPTK), so to get the base requirements to use the Omni, we’re looking at just under £4k which is far too much, and only brings features I don’t need to the table (Low Latency input + HD Software, most of which the CPTK provides).

Universal Audio Apollo Duo – £1600
Pros – 4 mic pres, Thunderbolt, Excellent supporting software, High resolution meters
Cons - Built in DSP pushing the cost up, Can’t group outputs through supporting software
Thoughts – One of my major gripes with the 003 is not being able to group my outputs. This means that my volume knob only controls outputs 1-2, with the other outputs (3-6) not being affected by it. Whilst the Apollo features supporting software, it seems like it doesn’t allow the ability to group the outputs, which unfortunately for me is a deal breaker. The unit does look absolutely gorgeous though, and I’ve heard excellent things about the sound of it. It packs a UAD DSP card in there too, so you can run your favourite plugs externally without taxing your system.

RME Fireface UFX – £1600
Pros – 4 mic pres, Supporting software, Rock solid construction, Very high resolution meters, Can Group Outputs
Cons – Built in DSP pushing the cost up, Uninspired Design
Thoughts – I’ve heard excellent things about RME, but never come into contact with any of their products personally. Apparently RME make super rugged interfaces that rarely go wrong, but their utilitarian design principles have always put me off. I originally included this in my list, rather than the Fireface 800/UCX (priced at £980 and £800 respectively), because I had the impression that they didn’t utilise the TotalMix software that the UFX does. This is untrue, so the cheaper counterparts without DSP are definitely under consideration now. However, there have been varying reports of each unit’s pros and cons in regards to ADC, DAC and mic pres, so I’m currently unsure as to which is better or worse.

Apogee Ensemble – £1400
Pros – Able to group outputs, Cheap and readily available on the used market
Cons – No longer in production or supported by Apogee
Thoughts – Apparently excellent on the D/A stage; however, there are also some reports of it sounding ‘thin’. It contains 8 fantastic mic pres, which is nice but would largely go unused for my purposes. A positive is that they’re quite abundant on the used market, with one coming up every few weeks, which is something to look out for. The feature count seems to be slim, especially compared to the Omni/Apollo, but my main focus is 5.1 monitoring, so this isn’t a problem. I quite like the idea of this unit; it seems to be a good balance between cost and feature set. (Since writing this blog post, I have found out that Apogee no longer supports the Ensemble and it is no longer in production.)

Metric Halo Mobile IO 2882 – £1250
Pros – 4 mic pres, Supporting software, Able to group outputs
Cons – 10-year-old unit
Thoughts – I don’t think I’ve ever heard a bad word about Metric Halo gear; although this unit is over 10 years old, it’s still held in high regard today. I’m really interested in it based on MH’s fantastic reputation and attention to detail. It’s also priced nicely for the feature set and oozes quality. Definitely one to keep my eye on.

MBox 3 Pro – £600
Pros - Able to group outputs, Inexpensive, Apparently excellent sound quality
Cons –
Thoughts – After just coming from a low-end Digi (or ‘Avid’ now) interface, this option doesn’t fill me with joy. The feature set, however, is absolutely spot on: a couple of mic pres, a group output section – and it obviously fits within the Pro Tools ecosystem perfectly. My only problem is past experience; I think I want something with a bit more quality than an Mbox (although the converters and pres have been upgraded dramatically from what I can gather).

Echo Audiofire 8 – £460
Pros - Very Inexpensive for the feature set
Cons – Cheap-looking build quality, No group outputs, No metering or visual displays
Thoughts –
Mixed reports on driver stability and sound quality are clouding my opinion of this unit. I’ve heard some excellent things about it, and some not so excellent things. The general consensus is that it’s worse than something like the RME Fireface 800, which is double the price – but it’s not twice as bad. Something to consider if the budget gets dented.

Focusrite Saffire Pro 40 – £400
Pros – Able to group outputs, Excellent supporting software, Very inexpensive
Cons – Too many mic pres (although price isn’t affected, so not really a problem)
Thoughts – This unit seems too cheap for its feature set and, whilst I really like Focusrite gear, I can’t believe that £400 is going to give me the best DAC I can get. Cramming in 8 mic pres for this price seems unbelievable, and is clearly great value for what you get, but the quality of the components worries me.

Conclusion
After spending a lot of time researching these boxes, I’m starting to get a feel for their strengths and weaknesses and have a better picture of which ones I’m drawn to.

The Focusrite Saffire Pro 40, Echo Audiofire 8 and Avid Mbox Pro 3 are all but discarded from my selection. Whilst they offer an attractive feature set for their prices, I really want to upgrade rather than just replace my 003R – and I don’t think these will do that.

The Avid Omni, Universal Audio Apollo Duo and the RME UFX are the complete opposite. They’re a bit too pricey for me and have feature sets far beyond my needs. However, I’m sure they would provide the upgrade I’m seeking. The UA Apollo has to be discounted for its lack of group output functionality, which seems criminal considering it has supporting software in place. The RME UFX seems pretty perfect, but has too much functionality I wouldn’t use, so I can’t justify the price tag, especially when the Fireface 800/UCX offer what I want for less. The Avid Omni is just massively too expensive when factored in with an HD Native card. If it were a standalone unit for around £1500, I’d be interested, but as it stands, it’s just too much.

The Apogee Ensemble, Metric Halo 2882 and RME 800/UCX are my current favourites. The Apogee is apparently excellent and has a strong presence on the second-hand market, meaning it’ll be cheaper. The Metric Halo 2882 gets amazing reviews from everyone – in fact, I don’t think I read a negative one at all. The RME 800/UCX both have great feature sets and prices – but some more research needs to be done into their differences.

Prepping your Film/TV Show for the Audio Team

This is a blog post aimed at Directors and Video Editors (or both if you’re super talented!) on how to prepare your film/TV show’s audio to be ready for the Audio Department. It’s a basic list of what you’ll need to get ready, and most importantly, why it’s useful to do.

1. Organize your Session

Organizing your session is extremely important when delivering your work to Audio. The best way to organize the audio you’ve collated in your session is by these three types: Dialogue/Sync, FX and Music.
These groups are the bread and butter of the audio dept., and you should have separate audio tracks for each of them in your FCP/Avid timeline. For example, A1/A2 would contain solely Dial/Sync, A3/A4 would contain FX and A5/A6 would contain Music. This will save the Audio Dept. a LOT of time and they will be extremely grateful for it.

2. Mono/Stereo Audio

The difference between mono and stereo audio is often misinterpreted but quite straightforward in reality. The common misconception is that mono = 1 channel, stereo = 2 channels – right? Well, almost.

A mono channel is a single channel of audio, and a stereo channel is two channels of audio playing simultaneously. But what if both the channels in a stereo track are playing the same thing? This is called dual-mono and is undesirable. Having the same information on two channels is pointless, so if it’s mono, it’s best to leave it that way, as a single channel. If it’s dual-mono, then blow away either leg/channel (left or right) so you’re left with a single mono channel.

Deciding what is ‘true’ stereo (a stereo channel which contains different signals on the two legs), and what is dual-mono (a stereo channel which contains the same audio on both tracks) is an important task, so if you’re not sure, it’s best to leave it stereo. Blowing away the right/left leg of an actual stereo track could be disastrous.

A quick and easy method to check if both legs are the same (dual-mono) is to reverse the phase of one of them. If you’re using FCP, you can download this AU plug-in for free (http://www.sonalksis.com/freeg.htm) which will flip the phase for you (phase is represented by the symbol ø). If both legs are the same (dual-mono), they will cancel each other out and you will hear absolutely nothing (complete silence); if they’re different (true stereo), you will still hear sound, but it may seem different – often described as sounding ‘hollow’ or ‘phasey’. (If you do this, remember to disengage the plug-in before continuing!)

Remember. Stereo = Good. Dual mono = Bad.
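For anyone comfortable with a bit of scripting, the null test described above can also be sketched in a few lines of Python – a minimal illustration assuming the two legs are already loaded as sample arrays (the function name and tolerance are my own):

```python
import numpy as np

def is_dual_mono(left, right, tolerance=1e-6):
    """Phase-flip null test: invert one leg and sum it with the other.
    If the result is (near) silence, both legs carry the same signal,
    i.e. the track is dual-mono rather than true stereo."""
    left = np.asarray(left, dtype=np.float64)
    right = np.asarray(right, dtype=np.float64)
    residual = left + (-right)  # flip the phase of one leg, then sum
    return bool(np.max(np.abs(residual)) < tolerance)

t = np.linspace(0.0, 1.0, 48000)
mono = np.sin(2 * np.pi * 440 * t)

# Identical legs cancel to silence -> dual-mono.
print(is_dual_mono(mono, mono))                           # True
# Different content on each leg leaves a residual -> true stereo.
print(is_dual_mono(mono, np.sin(2 * np.pi * 880 * t)))    # False
```

This is exactly what the phase-flip plug-in does by ear: perfect cancellation means the second leg carries no extra information and can safely be blown away.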

3. Don’t get too attached!

When tracklaying temp SFX/Music, it’s extremely easy to get so attached to them that, when they’re changed in the audio process, the first reaction is often one of dislike. This causes a sensation of the audio not sounding ‘right’ or ‘correct’: because of the time spent hearing the temp FX/Music in the offline edit, anything else sounds completely wrong. It’s important to remember this when tracklaying temp FX and Music and to stay objective when listening to the sound re-design. If there’s a sound that really has to be in your project, then let the audio team know – chances are it’ll go in, but there are times when the quality is not good enough so an alternative must be used.

4. Audio EDL / OMF

This is potentially the most important point. If you don’t know how to export these in your Video Editor program, then stop reading this post and go and learn as soon as possible!

EDL stands for Edit Decision List and shows the original TC (timecode) in/out points and the current timeline position TC in/out points of audio/video. When exporting for the Audio Dept., make sure you send them an EDL containing only audio information, as video fades/effects aren’t useful and just clutter up the important stuff.

If you’re sending a pre-final cut of your project, then this is super important – any changes that are made need to be referenced using EDLs. Two EDLs will suffice (a pre- and a post-edit), however, another useful thing to provide is a change-list EDL. This provides the TCs of the original location of the picture/audio followed by the new position of the picture/audio, and gives the audio team the ability to conform their pre-edit session to the post-edit with relative ease.
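As an illustration of what an audio-only EDL contains, here's a minimal sketch that pulls audio events out of CMX3600-style text. The field layout shown is a common convention (event number, reel, track, transition, then source and record in/out timecodes), but real EDLs vary between editors, so treat this as a rough guide rather than a full parser:

```python
from typing import NamedTuple

class EdlEvent(NamedTuple):
    event: str
    reel: str
    track: str
    source_in: str
    source_out: str
    record_in: str
    record_out: str

def parse_audio_events(edl_text):
    """Collect audio-only events from CMX3600-style EDL text, e.g.:
       001  TAPE01  A  C  01:02:03:04 01:02:05:00 10:00:00:00 10:00:01:21
    The last four fields are source in/out and record (timeline) in/out."""
    events = []
    for line in edl_text.splitlines():
        fields = line.split()
        if len(fields) >= 8 and fields[0].isdigit():
            track = fields[2].upper()
            # Keep audio-only tracks (A, A2, AA...), skip V and AA/V events.
            if track.startswith("A") and "V" not in track:
                events.append(EdlEvent(fields[0], fields[1], fields[2],
                                       *fields[-4:]))
    return events

sample = """TITLE: EXAMPLE REEL
001  TAPE01 A  C  01:02:03:04 01:02:05:00 10:00:00:00 10:00:01:21
002  TAPE01 V  C  01:00:00:00 01:00:01:00 10:00:01:21 10:00:02:21"""
print(parse_audio_events(sample))  # only event 001, the audio event
```

This also shows why a video-cluttered EDL slows the audio team down: every video-only event is noise they have to filter out before they can see where the audio actually sits.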

OMFs/AAFs are a way of exporting your session on a 1:1 basis. All of your regions/clips remain intact, along with their positional information on the timeline, automation data and fades (depending on the settings used). This will most likely be the first port of call for the audio team: it’s a quick way to see how complete the audio is, what needs doing to it and what your rough idea for the audio aesthetic is.

5. BITC

BITC (Burnt In Time Code) is audio’s best friend. It’s a supremely useful tool when spotting SFX to specific timecodes. It also gives the ability to double-check the frame rate of the video against the session frame rate. Nothing gives you peace of mind more than seeing the BITC on the picture and your Timecode Window running exactly in sync. It’s also extremely handy when swapping notes between the audio team and yourself, the director/editor. Saying “Can you do ‘x’ to the SFX at 10:24:04:02” is a much more efficient way of communicating than “Can you change the SFX where ‘lead character a’ opens the door in scene 4. You know, the bit where he goes into the house. No, further back, further back – no, too far…” etc.

Make sure your picture has BITC.
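To show why a BITC note is so precise, here's a small sketch converting an HH:MM:SS:FF timecode to an absolute frame count and back. It assumes non-drop-frame timecode and defaults to 25 fps (common for UK TV); the function names are my own:

```python
def timecode_to_frames(tc, fps=25):
    """Convert a non-drop-frame HH:MM:SS:FF timecode string to an
    absolute frame count from 00:00:00:00."""
    hours, minutes, seconds, frames = (int(part) for part in tc.split(":"))
    return ((hours * 60 + minutes) * 60 + seconds) * fps + frames

def frames_to_timecode(total, fps=25):
    """Inverse: format an absolute frame count back as HH:MM:SS:FF."""
    return "{:02d}:{:02d}:{:02d}:{:02d}".format(
        total // (fps * 3600),       # hours
        (total // (fps * 60)) % 60,  # minutes
        (total // fps) % 60,         # seconds
        total % fps,                 # frames
    )

# The SFX note from the example above, pinned to an exact frame at 25 fps:
print(timecode_to_frames("10:24:04:02"))   # 936102
print(frames_to_timecode(936102))          # 10:24:04:02
```

A timecode like 10:24:04:02 identifies one single frame out of nearly a million, which is exactly the precision you want when spotting an effect.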

6. Audio Rushes

Make sure that you send all the audio rushes, not just the ones which have been used or the ones which are deemed useful. There are so many hidden gems in apparently ‘useless’ rushes that never get to be used because they’re not sent to the audio team. Extra rushes are absolutely riddled with alt takes/room tone/accidental sound fx and are just begging to be used.

This applies even more so if you’re sending your project to someone who is editing dialogue. They really can work magic with the dialogue when given just a few alternative takes for lines, so make sure you include them all.

If you follow these steps, you’ll ensure that your audio team has more time to concentrate on making truly great audio, rather than spending the time fixing or finding bits that are vital to the process. Not only that, but a torrent of gratitude will be showered upon you for making their jobs easier and for sending well documented, easy to decipher material.

3D or not 3D?

I’m going to start off by saying that I used to hate 3D. I’d purposefully avoid it every time I had the choice between seeing a film in 3D or 2D. This wasn’t a choice I made because it was controversial or I wanted to be awkward, but because 3D simply didn’t work.

Before yesterday, my 3D experiences had been plagued with problems which included: crosstalk between the two images, eye strain, headaches, shiny metallic looking surfaces and being unable to focus on certain objects (generally closer ones). These problems all contributed to one thing – pulling me out of the film rather than immersing me further. It was safe to say that at this point, I would never consider adopting 3D. But as they say, never say never.

I was lucky enough to be invited to the Dolby Christmas screening of Puss in Boots 3D at the Empire Theatre in Leicester Square, London. I was expecting another experience full of the problems I listed earlier, but wanted to attend anyway because it was a pretty special event. To my surprise, none of these problems occurred during the WHOLE of the film. I was completely immersed and thoroughly enjoyed the 3D aspect of it. For the first time, I thought that the 3D helped immerse me in the film.

I was wondering how my experiences with 3D could vary so drastically, and was lucky enough to have a chat with an ex-Dolby Film Sound Consultant who explained some pitfalls of 3D. The main and most common problem is calibration. Whilst it’s quite easy to calibrate the two (or four) projectors to somewhere close to where they should be, it takes a long time to calibrate them perfectly. In fact, for this Dolby screening, two Dolby employees spent two days setting up and calibrating both the audio and video before the screening (which clearly made a lot of difference).

Another difference is the type of 3D used. I’m not too knowledgeable on the different types of 3D; however, I’m of the understanding that my bad experiences have all been with the RealD system, which uses polarized 3D glasses, and my good experience was with Infitec (Interference Filter Technology) glasses. I’m not sure exactly how these systems differ, but considering the RealD glasses are considered disposable whilst the Infitec/Dolby glasses aren’t, I’m under the assumption that the Infitec technology is the better of the two.

I believe that the main reason for my disappointing experiences has been calibration rather than the tech used, so it’s a total shame that I’ve had to wait this long to see 3D done properly. If cinemas spent some extra time calibrating their rooms to produce picture and audio in the way they were intended to be viewed/heard, it would be a much more enjoyable experience and would certainly encourage me to see more 3D films. In fact, I’ve got a bit of a gripe with cinemas on the audio side of things as well, which I’ll run through in my next blog post.

I’d love to hear your views on 3D, whether you like it or not and whether you agree/disagree with me, so please leave a comment!

Helicopter Recording

So a few months back I wanted to try out my Rode Blimp and NTG-3, so I went to my local (small) airfield. I wasn’t really expecting to find much, but what I did come back with was a load of helicopter recordings. I think someone was having a lesson, as they kept flying over the airfield, which led to some great fly-by recordings.

I also almost caught a plane taking off right over me, but a member of the public started chatting to me about what I was doing and whether any planes had taken off that day (I think he was a plane spotter). Luckily I managed to stop him talking just as the plane started its taxi towards me; not so luckily, the guy got back in his car and slammed his door just as the plane was overhead. At this point, there was nothing I could do but hope for a chance at another plane, which unfortunately never came.

Here’s a quick sample of a helicopter flying past me overhead:

Motorway Recording

After finally fitting a stereo pair of NT5s into my blimp, I decided it needed to be tested, so I drove to the local motorway bridge that runs over the M40 to record some car passes from above.

I ended up sitting on the top of the bridge with the X-Y pair pointing down towards the road, which meant that the left-facing microphone was directed towards the outgoing traffic and the right-facing microphone was directed towards the oncoming traffic. I left the recording running for around 45 minutes so I could get a decent amount of audio. I also managed to get some car horns, as some people who drove by saw the deadcat and naturally had to cause a ruckus! This worked to my advantage, though, and gave me some interesting sounds.

Here is a short sample of the recording:

I went out to get some general driving atmos of a motorway, and although it wasn’t my intention, I ended up getting some cool sounding horns as well (unfortunately, not in the preview clip!)
