Computers have always let me down; I've always been a few steps behind. It goes way back, especially with audio. Chris and I still sometimes use Cakewalk 7 to edit (well, he's just got a Mac, so that's finally changed), but we would sit there cutting up audio destructively, manually stitching phrases, sometimes even syllables, together. Once it was cut, you couldn't go back! Limited tracks, limited computer power, and so on.
The History of My Orchestral Sounds
I’ve wanted to write orchestral music properly since I was a kid. I used to use a BOSS MIDI box on the Clavinova. That’s where I got some experience, at least with composition, but it didn’t sound realistic.
Each Chalcedony album marks an upgrade in orchestral synthesising and music production in general. Chapter One actually used the BOSS box, which played on one computer, and was recorded upstairs into WAV on another. The computers had a different concept of time, so every few minutes we had to cut it and shift it back into beat. It was a nightmare.
Chapter Two upgraded to SF2 soundfonts, which ran in Cakewalk. They were easier to work with, and some were slightly more realistic. I went more crazy and blended more synth and organ sounds in that one, to mask the fake orchestras.
Chapter Three finally used VST plugins: the Edirol Orchestra (which many of us have dabbled with). Only certain sounds were decent, but it was still a massive upgrade. The computer couldn't process more than a track at once in real time, so everything had to be WAV'd individually before anything was done to it. Expression didn't exist, so volume had to be faded manually, which wasn't realistic, but was better than nothing. Certain instruments were avoided because they didn't sound good enough, altering the entire flavour of the sound.
Towards the end of that, I got hold of the Garritan Steinway and the Garritan Instant Orchestra as a starter. Most of these sounds were good, but I’d already orchestrated 90 percent of the album.
Chapter Four was recorded at the same time, and is basically the other half of the third album, so some things have been replaced using the Instant Orchestra.
But this was all rock music. You get away with more. I needed something that could be as good as I could afford and achieve, so I got the EastWest Hollywood Orchestra, and spent a lot of money on a new computer that was hopefully powerful enough to run it.
The Current Situation
It runs, and it works – finally – but still with some restrictions. It can't handle too many instruments at once. The computer is glitchy and we haven't yet found a cure, and the new Yamaha P155 piano doesn't seem to like the aspect of Time either. Everything I play is put into the future, so there's basically a negative lag of about 150 milliseconds. Again, there seems to be no cure or explanation. We've tried three different MIDI cables, several computers, and even a different Yamaha piano, and the results are only worse. In fact, one particular computer has a lag that gets worse over time (but the other way). If you wait an hour, you get over a minute's delay when hitting the keys!
Then there’s the latency of the actual note hit, further confusing the formula. I have trained myself to hit the note before I want the sound to come out in some cases. Of course, it depends on the instrument. Each one behaves a different way.
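Hitting notes early to cancel a fixed lag is really just applying a constant time offset to the recorded events. Here's a toy sketch of the idea, assuming a made-up event format of `(time_ms, note)` pairs and the roughly 150 ms figure mentioned above; the real fix would live in the MIDI driver or DAW, not in a script:

```python
# Toy sketch: cancel a fixed recording latency by shifting every
# recorded MIDI event by a signed offset. The event format (time_ms,
# MIDI note number) and the -150 ms value are illustrative assumptions.

def shift_events(events, offset_ms):
    """Shift each (time_ms, note) event by offset_ms, clamping at zero."""
    return [(max(0, t + offset_ms), note) for t, note in events]

recorded = [(200, 60), (650, 64), (1100, 67)]  # C, E, G hits in ms
corrected = shift_events(recorded, -150)        # pull everything 150 ms earlier
print(corrected)  # → [(50, 60), (500, 64), (950, 67)]
```

Each instrument behaving differently would just mean a different `offset_ms` per channel.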
The Granary Setup
My setup for The Granary was Cubase 9, running the EastWest Hollywood Orchestra, the Garritan Instant Orchestra, and the Garritan Steinway. As time went on, I used more and more of the Hollywood Orchestra, and moved away from the Instant one, which in comparison isn't actually very good, but it has good enough choirs and a harp.
There were a few Hollywood channels with main strings, and a brass-and-string combo, plus a couple I'd changed as a custom setup – a French horn, I think, and a lush cello. The sound of The Granary was mainly about the cello and the piano.
I used the reverb that came with the orchestra there and then. I didn't risk processing loads of MIDI after a song was done, as I'm still not at a level where computers can be trusted with that, so each channel, again, is WAV'd, and then moved to the bottom of the project as the final bit.
A backup is made with a number, and then I do something that I would be shot for, that you should never do: I delete all the MIDI data. I don't want to see it again; I'm done with it. Move onto the next bit, let it go. Keep the WAVs – make sure they're OK before deleting, and there's always a backup somewhere. The only thing that's NOT WAV'd along the way is the piano, which is WAV'd as one film-length track at the very end.
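The "backup with a number, then delete" habit can be sketched as a small helper that copies the project file to the next free numbered backup before any destructive edit. This is a hypothetical illustration – the file names and folder layout are made up, and the author presumably does this by hand in the OS:

```python
# Hypothetical sketch: numbered project backups before destructive edits.
# The .cpr extension and folder names are illustrative assumptions.
import shutil
from pathlib import Path

def backup_project(project: Path, backup_dir: Path) -> Path:
    """Copy `project` to the next free numbered backup and return its path."""
    backup_dir.mkdir(parents=True, exist_ok=True)
    n = 1
    while (backup_dir / f"{project.stem}_backup_{n}{project.suffix}").exists():
        n += 1
    dest = backup_dir / f"{project.stem}_backup_{n}{project.suffix}"
    shutil.copy2(project, dest)  # copy2 preserves timestamps
    return dest

# e.g. backup_project(Path("granary.cpr"), Path("backups"))
# → backups/granary_backup_1.cpr, then _2, _3, ... on later calls
```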
The entire film is on the timeline with a video screen playing it. I can mute the sound if needed, or put it on to get the feel and timing with words and actions.
What I Was Going For in This Film
As before, it was mainly about the piano, with maybe the cello in second place. I would usually just hit record, watch the scene and play something. If it seemed half-decent, I'd study what I did, and work around that. Sometimes – a lot of the time in the ambient pieces – I'd keep the first improvised take and leave it pure. The orchestra, if any, would be built around that. So a lot of The Granary soundtrack is one-take-wonder improv.
I wanted it to be quite ambient, and have a richness to it. I wanted it to be haunting with a hint of horror. The piano would be louder than the orchestra, so that it was in-your-face like you’re in the room with it. On the DCP I did a trick where I put a bit of the music into the surround mix, so that it would wrap around the audience more.
Order of Writing
I started with the introduction, the opening sequence, because I'd already come up with the idea on another piano. I recorded it piano only, then edited the film to the music. Later, I put that into the final timeline, although I had to stretch it because of a problem! I managed to get it there though.
After that, the process is quite different. I pick a random point in the film. I poke around and try ideas until I find the scene I feel like doing. I then work on that sequence and finish it, until it's WAV'd and the MIDI is gone. I then pick another random point in the film, far away from the previous one.
This gives me a feel for distributing equal energy across the timeline. Instead of working from the beginning, I prefer to do it all over the place and then fill the gaps until I think there's enough music. That way, if I change any technique or add a new instrument, it gives the film more variety throughout. It stops the first stuff all being crap, perhaps, and the second half being good. Or the other way round: if I get tired or run out of ideas, it stops the second half being crapper.
There is another, more important reason why I don’t score films (this and The Bastard Sword) in chronological order, and that is the development of themes.
I strongly believe in bringing back the themes and melody thing, which I feel is being taken away from modern cinema. So many films don’t really have melodies anymore. You don’t leave a film humming the song like you did in the old days, so I try to do the opposite. I want to make a film have so many themes and melodies, that there is literally no escape. The themes are constantly rearranged and manipulated throughout, sometimes even fused together, so that the aura of the film is distinctive. This is why The Granary has many themes and melodies. The Bastard Sword has even more.
I would skip to, say, one of the climax scenes. Let's take the scene of Heather reading the note. One shot, big music, a big theme. I'd compose that theme there and then, for the first time, and go to town on it. Make it as big as possible. This means that now I know what Heather's theme is.
So then, whenever I see Heather, or want to put a hint of her aura in the film before that moment, I know what the theme is. I can tone it down, make it subtle, or just play a bit of it. Fuse it together with something else. Hopefully this means that subconsciously, when the audience get to the climax of that theme towards the end, they’ll get the full hit of what the theme itself was building up towards.
It works both ways, naturally. I still sometimes come up with a theme at the beginning, and then beef it up later; it just depends on the mood. I will generally complete between one and nine minutes of music a day. Sometimes none, sometimes more if it's ambient piano stuff.
I see themes as Transformers. In Transformers, you'd take a group, say the Dinobots. They're all dinosaur-themed, with similar palettes. The toys had a metallic finish and transparency to them. You knew it was a Dinobot. But if you had all of them, you could (at least, I think you could with these ones) put them all together to build one massive robot. That's how I visualise it. It's all part of a bigger picture, and when you see the bigger picture, you realise that guy was the arm, that guy was the leg, and so on.
Mixing and Mastering
Unlike the rock music, not much of this is actually needed. The process of making orchestra music is much faster. I don’t have to mic up drum kits, or get people to play guitars. I can just sit and get it down. Most of all, I don’t have to write lyrics, which are the hardest thing of all!
The mixing is usually subtle. I tweak the WAV'd tracks as I listen to them throughout the entire process, so it's usually all in place to begin with. I might just bring up the volume of a bit here and there. I would rarely, maybe never, automate the tracks. It wasn't needed, because I have control of each WAV file independently, and it was performed with the required expression in the first place.
When the film is done, I look at the relative volume of the different pieces and make sure they are pretty much consistent with each other, so it doesn't sound like someone's turned the volume up or down between songs.
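That consistency check boils down to comparing the average level of each piece and nudging the quieter one up (or the louder one down) to match. A rough sketch, using plain RMS in dBFS on float samples – real loudness matching would use LUFS metering, and the sample values below are made up:

```python
# Rough sketch: compare the RMS level of two pieces in dBFS and compute
# the gain needed to match them. Real tools meter LUFS; RMS shows the idea.
import math

def rms_db(samples):
    """RMS level of float samples (-1.0..1.0) in dBFS."""
    rms = math.sqrt(sum(s * s for s in samples) / len(samples))
    return 20 * math.log10(rms)

def gain_to_match(piece, reference):
    """dB gain to apply to `piece` so its RMS matches `reference`."""
    return rms_db(reference) - rms_db(piece)

quiet_piece = [0.05] * 1000   # illustrative steady signals
loud_piece = [0.5] * 1000
print(gain_to_match(quiet_piece, loud_piece))  # → 20.0 dB (0.05 → 0.5 is ×10)
```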
I'd then master it – or whatever the cheap excuse for mastering is called. Either by exporting the entire project and then re-importing it for the master, or by actually doing it there and then before the export.
The technique I've used on the last few projects, such as Plagan, Deltastealth and other fun projects, is to have everything routed to a master channel before it goes to the master bus. The master channel has the compression and EQ on it, and the master bus has a limiter at -0.1 dB or so. This means that no matter how you adjust the master channel's volume, it will never clip. It's then just a question of EQ, very light compression (this is orchestral stuff!) and adjusting that master channel to decide how many dynamics I want. If I have it too high, it'll all be limited heavily. If I have it too low, I waste headroom for dynamics. Loudness wars can't be won in orchestral music, but it's still good for finding problems using clipping. Sometimes a frequency you barely even notice can be the culprit for your songs sounding too quiet.
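The gain-staging idea above can be sketched in a few lines: a master-channel gain feeding a brick-wall limiter with a -0.1 dBFS ceiling, so the output can never clip however hot the channel is pushed. This is a minimal illustration with a hard clamp; a real limiter reduces gain smoothly with attack and release times, and the EQ/compression stage is omitted:

```python
# Minimal sketch of the master-channel → limited-master-bus chain.
# Hard clamping stands in for a real brick-wall limiter.
import math

CEILING_DB = -0.1
CEILING = 10 ** (CEILING_DB / 20)  # ≈ 0.9886 linear

def master_chain(samples, master_gain_db):
    """Apply master-channel gain, then limit at the -0.1 dBFS ceiling."""
    gain = 10 ** (master_gain_db / 20)
    # Master channel: EQ and light compression would sit here; we just
    # apply gain, which sets how hard the limiter works.
    boosted = [s * gain for s in samples]
    # Master bus: clamp so nothing ever exceeds the ceiling (no clipping).
    return [max(-CEILING, min(CEILING, s)) for s in boosted]
```

Pushing `master_gain_db` up squashes everything against the ceiling (less dynamics); keeping it low preserves dynamics but wastes headroom, exactly the trade-off described above.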
The EQ will usually be very subtle. No narrow cuts, just a smooth dip in the vocal area. I can't remember where it was; somewhere between 1.25k and 5k maybe. Sometimes a little more bass, and sometimes a little more high end, to make those strings sound airy. That seems to be the sound I hear on all the great composers' scores. It also helps it fit into the film alongside the sound mix.
This WAV file will then go into Premiere or back into Cubase, where it WILL be automated against dialogue and sound. The original mix is what you hear on the OST album, but it is then changed to suit the film. Otherwise, dialogue might be drowned out, and so on.
But all that sound mixing for the film in general is another story!
Thank you for reading!