Paul Henry Smith

Twenty years ago I ran into Marvin Minsky at the MIT AI lab, where I was working on an expert system to generate counterpoint.  Dr. Minsky had a MIDI cable in his pocket and wondered if I needed it for anything.  I had no idea what he was talking about.  But he had heard I was a musician, and thought I’d have some use for a cable to connect digital instruments together.  I explained that the work I was doing with Jonathan Amsterdam was solely on the computer (a Symbolics Lisp Machine), and he said I should go over to the Media Lab, find Mike Hawley, and check out the MIDI-controlled Bösendorfer piano he was working with … which I immediately did!

I wanted to find out if musical instrument technology was good enough to be able to make music on a high level … or, to be clear, at a level I’m satisfied with. My initial efforts on that piano convinced me that this is possible, and that it’s just a matter of putting in considerable time and effort to make it work. I hoped, of course, that spending thirty hours getting a reasonably musical, but stilted performance of a three-minute Mozart sonata was not the norm.  I hoped this was a steep learning curve, much like practicing for thousands of hours on an acoustic instrument, and that eventually, I could produce highly expressive music with digital instruments in much less time.

At the Media Lab we also had an early digital orchestra system: a lot of hardware-based sound samples, a MIDI keyboard and a couple of computers.  Mike Hawley had made stunning software to edit musical performances.  Really, it was very powerful, beautiful and easy to use.  Just one example: each note on the Bösendorfer could have something like 8,000 different loudness levels to choose from.  By comparison, the MIDI standard offered only 127.  So, we were way, way beyond the limits of the technology of the day.  If anything, the technology was far too subtle.  I mean, I could not detect a change in loudness on that piano until I had moved the dial about 800 points!  So, the resolution and potential of the software were not limiting expressive possibilities.
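To see how stark that resolution gap is, here is a minimal sketch (function name and the 8,000-level figure taken as a round number from the anecdote above) of what happens when a fine-grained loudness scale is squeezed into MIDI's 7-bit velocity range:

```python
# Hypothetical illustration: quantizing a high-resolution loudness scale
# (like the Bösendorfer's ~8,000 levels) down to MIDI's 127 velocity values.

def to_midi_velocity(level: int, levels: int = 8000) -> int:
    """Map a loudness level in [0, levels) onto MIDI velocity 1..127."""
    return 1 + round(level / (levels - 1) * 126)

# Many distinct piano loudness levels collapse into one MIDI velocity:
fine_grained = [4000, 4010, 4020, 4030]
print([to_midi_velocity(v) for v in fine_grained])  # → [64, 64, 64, 64]
```

Roughly 63 adjacent piano levels land on each MIDI value, so differences the instrument could produce simply vanish in transmission.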

Yet, after months of grappling with digital music making, I concluded that the process was so tedious, the results so bad, and the instruments so inflexible and limited that I’d probably have a better shot at musical success by becoming a businessman, making millions of dollars, and then spending that on building an orchestra. Really, it was that bad. (NOTE: I actually did try the latter approach, but that’s another story.)

The problem was that the software expected me to have explicit awareness of how the sounds should be shaped and structured (which was not too hard), and by what explicit degree (which was nearly impossible).  If I would only tell the computer exactly how to play every note, it would do it.  But I could not tell it.  I thought I could.  I spent hours tweaking the voicings of chords so the notes blended beautifully into a well-balanced, unified sound.  This involved repeatedly playing the chord with minute changes from one iteration to the next.  It also involved annoying Mike immensely.  But even though I could finesse details with extreme precision, something was missing.  The larger, over-arching ebb and flow was simply not there.  And the software didn’t have a good way to address that.

It seemed at the time that my life would be long over before digital instruments would be viable for performing orchestral music at a high level. But, fifteen years later I checked in on the field to see what progress had been made. I was shocked.

I was wrong

No longer were the machines so expensive and limited. Now, with cheap disk space and fast computers, a new “brute force” approach was starting to show promise. What would’ve cost $250,000 in 1989 could be done for $5,000. This brute force approach consisted of trying to sample all possible orchestral sounds, including the sounds between sounds, and make them instantly available at all times. This approach won’t be the one that prevails as digital instruments evolve, but at that time, around 2003, it had demonstrated its unequivocal superiority: it could be played like an instrument.
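At its core, the brute-force idea reduces playing to a lookup: every combination of pitch, dynamic and articulation is pre-recorded, so the right sound is always instantly available. A minimal sketch (the keys, filenames and sample data here are all invented for illustration):

```python
# Sketch of the "brute force" sampling approach: pre-record every
# combination, then make playback a simple dictionary lookup.
samples = {
    ("violin", 60, "forte", "staccato"): "vln_C4_f_stacc.wav",
    ("violin", 60, "piano", "legato"):   "vln_C4_p_leg.wav",
    # ...a real library holds thousands more entries, including
    # the "sounds between sounds" (transitions, swells, releases)...
}

def play(instrument, pitch, dynamic, articulation):
    """Return the pre-recorded sample for this exact combination, if any."""
    return samples.get((instrument, pitch, dynamic, articulation))
```

The appeal is responsiveness: no synthesis happens at performance time, so the instrument reacts immediately, like a real one. The cost is the combinatorial explosion of recordings, which is exactly why cheap disk space made the approach feasible.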

The proof of the pudding is in the eating, as they say. And the results of this sample-based, brute force approach were as musical as I had ever heard coming from digital orchestra instruments. But more important than those results was the recognition of the potential these instruments had. When I looked into how the instruments actually worked, I knew that I would be able to use them for musical ends eventually…within my lifetime, even.

Three years later the Nintendo Wii came out.  I realized that I could use the wireless controllers from that system to do the musical shaping, in real time, that I could not do at the Media Lab.  The controller cost $40.  With the ability to shape the music as it flowed came the possibility of doing so in real acoustic spaces suited to music.  Now I could play these instruments live in concert halls, unlocking the potential to create a musical experience on a par with that provided by an acoustic (real) orchestra.
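The essence of that real-time shaping is mapping a gesture sensor onto a continuous musical parameter. A minimal sketch, assuming an accelerometer reading in the Wii remote's rough ±2g range (the sensor range, function name and choice of MIDI expression as the target are my assumptions, not a description of any particular rig):

```python
# Hypothetical mapping from a motion controller's accelerometer reading
# to a MIDI expression value (0-127), the kind of real-time shaping
# described above.

def accel_to_expression(accel_g: float, lo: float = -2.0, hi: float = 2.0) -> int:
    """Clamp an acceleration reading (in g) and scale it to MIDI 0-127."""
    clamped = max(lo, min(hi, accel_g))
    return round((clamped - lo) / (hi - lo) * 127)

# A strong downbeat gesture yields a high expression value:
print(accel_to_expression(1.5))  # → 111
```

Because the mapping runs continuously while the music plays, the over-arching ebb and flow can be conducted with the hand instead of edited note by note after the fact.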

Which brings us to now. The digital instruments are still limited, but they’ve gotten much better than they were even in 2009, and they’re still improving. They are improving faster than acoustic instruments, that’s for sure. My investment in learning how to play them, how to master them, is paying off. And within the next ten years there is no question that I will be able to follow my musical imagination anywhere it leads with more suppleness, expression and ease than the current generation of digital musical instruments allows.

So, instead of seeing digital instruments with all their current limitations and saying “they don’t work very well,” I see them with all their potential and say “let’s make them work as musically as possible!”

I always knew that having many expressive possibilities and having an easy way to access those possibilities would be the most important aspects of digital instruments. Still, I am a bit surprised at how easy it has been to use off-the-shelf hardware and software to do things like conduct the orchestra in real time in a concert hall. But I’m also disappointed in the utter lack of progress in the software used for playing music. Twenty years ago I used software that represented sound as piano-roll-like marks on a screen. One could change the properties of these sounds with crude, mouse-based tools or MIDI controllers.

The rosy future

That’s still how the software works today. But it’s very limiting to musical imaginations. Let’s say, like Debussy, we want a note to morph from a trumpet to an oboe sound. In an orchestra, sensitive players know that’s the goal and the two players can make a seamless transition. And that’s great. But in a digital orchestra there is no need to have “trumpet” and “oboe” as separate entities. After all, they are not separate physical systems on the computer. How much more expressive and delightful it would be to play a note and perhaps by just moving your hand change a note from “oboe-y” to “trumpet-y.” Well that sort of thing has existed for years in music synthesis, but is still not yet a part of sample-based instruments. (Although some people are assiduously working on new instrumental models that will enable these sorts of expressions. And with a bit of tweaking, the sample-based instruments actually can be made to perform in this way.)
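The tweak that makes sample-based instruments morph is, at bottom, a crossfade driven by a single control parameter. A minimal sketch, with plain number lists standing in for oboe and trumpet audio (the equal-power curve is a standard mixing technique, not something the original text specifies):

```python
import math

# Sketch of a timbre morph: crossfading between two instrument samples
# under one hand-controlled parameter t. An equal-power curve keeps the
# perceived loudness roughly constant across the morph.

def morph(oboe, trumpet, t):
    """t = 0.0 -> pure oboe, t = 1.0 -> pure trumpet."""
    g_oboe = math.cos(t * math.pi / 2)
    g_trumpet = math.sin(t * math.pi / 2)
    return [g_oboe * a + g_trumpet * b for a, b in zip(oboe, trumpet)]

halfway = morph([1.0, 0.5], [0.0, 1.0], 0.5)  # an even blend of both timbres
```

Sweeping t from 0 to 1 with a hand gesture gives exactly the "oboe-y" to "trumpet-y" transition described above, because on the computer the two instruments are just two streams of numbers.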

Another problem with digital instruments is that we still lack the basic features we take for granted in word processing programs. We can’t, for example, search for all d-minor chords and make them d-major. We can’t apply a transformation (a crescendo, for example) to all instances of a melodic fragment. We can’t filter our view of a musical score to show only certain melodies or rhythms, regardless of what instrument they appear in. We can’t adjust the intonation of a chord based on its harmonic context (well, we actually can do that, but it’s not catching on during the current fascination with musical performance fascism, which is another story).
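None of these "word processor" operations is deep; the first one, for instance, is a few lines once chords are data rather than marks on a piano roll. A minimal sketch of the d-minor-to-d-major example, representing chords as lists of MIDI note numbers (this representation and the helper names are my own illustration):

```python
# Sketch of "find and replace" over harmony: locate D-minor triads
# (pitch classes D, F, A) and raise the third a semitone to F#,
# turning each into D major.

D, F, A = 2, 5, 9  # pitch classes, with C = 0

def is_d_minor(chord):
    """True if the chord's pitch classes are exactly D, F and A."""
    return {n % 12 for n in chord} == {D, F, A}

def to_d_major(chord):
    """Raise every F in the chord by a semitone, in any octave."""
    return [n + 1 if n % 12 == F else n for n in chord]

score = [[62, 65, 69], [60, 64, 67]]  # D minor, then C major
score = [to_d_major(c) if is_d_minor(c) else c for c in score]
print(score)  # → [[62, 66, 69], [60, 64, 67]]
```

The point is not this particular transformation but that once software treats chords and melodies as first-class objects, search, transform and filter all come nearly for free.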

We are still too steeped in a musical model that sees everything in terms of “tracks” and “instruments” to make progress in these areas.  No one is even thinking about this.  There is still no music software that can deal with melodies, harmonies, and rhythms the way Photoshop can deal with opacity, shadows and highlights.

We have a long way to go, but the potential is there for digital instruments to be as expressive as acoustic instruments when performing orchestral music.  And when they do eventually incorporate knowledge of structures and relationships like tuning, rhythm, harmony and melody, they’ll be even more musically powerful than acoustic instruments.

Listen to the world’s first performance of a Beethoven symphony live, using only a digital orchestra: Beethoven – Symphony No. 1 (first movement) Recorded May 20, 2009, at Holy Name Church, Boston, Massachusetts.