Reformatted from Avant Magazine, Issue 7, Summer 1998, page 12
Words by Richard Hemmings

Will computers ever write music to rival that written by humans? Anyone from the classical tradition will tell you ‘no’. Dead persons’ music’s where it’s at, right? All this modern cacophony. Where’s the passion and the feeling? Where’s the tune? Where’s the four inches of dust on top of the manuscript, coupled with a dank atmosphere to rival any mausoleum? God, I’m sick of hearing that from people who should know better, but the fact remains that this computer music lark is terribly difficult to comprehend because, as with all radical concepts about music, it makes us question the foundations of what music is.

Up until the early Twentieth Century, music was analysed using a kind of language more flowery than Kew Gardens. Academics led the way, believing the more poetic or descriptive their writings on music, the deeper their understanding. Hence, we read fairly ambiguous statements about the flute and piccolo playing gaily like woodland birds bathing in meadow springs, when, suddenly, as the cello plays its mournful motif, black clouds cover the sky, evoking a dark, descending gloom to scare off the flute and piccolo birds and make way for the storm that IS the timpani roll. Despite its ability to tell us obvious things about the music, the mood and what the sounds we hear suggest through association and cliché, as a method of musical analysis it actually tells us very little. When music began to be composed using the cold, calculated world of maths, new analysis systems were developed alongside. These were often maths-based themselves and had a dramatic effect on people’s concepts of modern music. Musical structures of great complexity and originality, explained by formula in some instances, could dismiss the image of the humble composer seated at his upright, waiting for God-given inspiration. There was no room for analysis based upon peaceful spring meadows and angry storms. Music was seduced by science in the same way it had been for years by religion. Now, at the end of the Twentieth Century, we are faced with another shift in musical analysis, that belonging to computer music.

There are many fields in computer music, but for argument’s sake I’ll divide them into three: A) Electroacoustic and Synthesis, B) MIDI Sequencer and MIDI Encoder programs and C) Fractal and Algorithmic. Of these, ‘A’ is more or less concerned with sound generation/manipulation, ‘B’ tends to be aimed at performance or interactive performance/composition and ‘C’ deals with the actual generation of a complete piece of music from maths. Of the three, ‘C’ is the hardest for most people to deal with emotionally, because the romanticised picture of the humble composer is lost for good. In 200 years’ time it’s doubtful we’ll be getting excited about the Macintosh 7300’s limited memory in the same way we do about Beethoven’s deafness and flatulence. Computers at present don’t rank too highly on the personality scale, idiosyncratic bugs aside. So what’s the appeal?

A computer is a box, usually in beige or cream, with a keyboard, a mouse, a monitor and a hole for sticking disks in. Some people think computers are clever, but they’re not really. At present they can only do what their programmers tell them to do (some are programmed to learn through trial and error, but creative thought is another issue altogether). In other words, behind every computerised marvel there’s a smug programmer. Unless the user is familiar with a programming language, it is more often the case that they will use a software application of some kind, designed by a programmer to make the computer more accessible (user friendly). Applications such as Word, Photoshop and Cubase remove the need for programming skills, allowing the user to concentrate on the task in hand, i.e., writing a letter, colour enhancing a scan or transposing a flute. Take my sequencer as an example; it can perform certain compositional tasks. These tasks fit within the criteria determined by the software programmer as to what the main processes involved in composition are. The simplest building block in a piece of sequenced music is a note. Here, a note is defined by various parameters, most simply pitch and velocity (how fast a key is struck). To comply with the MIDI (Musical Instrument Digital Interface) standard, each parameter has a numerical value of between 0 and 127. Once note duration is added, determined by how long a key is held down, a sequencer has all the information it needs to sound a note from beginning to end. To create compositions, this note information has to be placed within a linear time scale alongside other notes, unless you’re feeling particularly minimalist at the time. The sequencer then allows all kinds of manipulations: transpositions, retrograde inversions, time scale alterations, velocity curves and quantisations, to name but a few. All in all the sequencer carries out its job perfectly.
Day in, day out, it accurately performs my most challenging compositions. The problem occurs when I want my sequencer to improvise a solo of some kind. It’s not programmed to do that, and it’s hardly surprising why not.
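As a rough illustration of the note model just described, here is a minimal sketch in Python. The class and function names are my own invention, not the internals of any real sequencer; it simply shows how pitch, velocity, start time and duration might be stored, and how two of the classic manipulations (transposition and retrograde) fall out of that representation:

```python
# Illustrative sketch only: a note as pitch/velocity/start/duration,
# plus two sequencer-style manipulations. Names are hypothetical.

from dataclasses import dataclass

@dataclass
class Note:
    pitch: int       # MIDI note number, 0-127 (60 = middle C)
    velocity: int    # how fast the key was struck, 0-127
    start: float     # position on the linear time scale, in beats
    duration: float  # how long the key is held, in beats

def transpose(notes, semitones):
    """Shift every pitch, clamping to the legal MIDI range 0-127."""
    return [Note(min(127, max(0, n.pitch + semitones)),
                 n.velocity, n.start, n.duration) for n in notes]

def retrograde(notes):
    """Play the phrase backwards in time."""
    end = max(n.start + n.duration for n in notes)
    return sorted((Note(n.pitch, n.velocity,
                        end - (n.start + n.duration), n.duration)
                   for n in notes), key=lambda n: n.start)

phrase = [Note(60, 100, 0.0, 1.0),   # C
          Note(64, 90, 1.0, 1.0),    # E
          Note(67, 80, 2.0, 2.0)]    # G
up_a_fifth = transpose(phrase, 7)    # pitches become 67, 71, 74
backwards = retrograde(phrase)       # G, E, C
```

Everything a sequencer does to a part, from velocity curves to quantisation, is ultimately arithmetic of this kind performed over lists of such notes.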

Improvisation can be likened to composing on the spot. Take the works of blues guitarist Slim X. There are certain musical ‘items’ that X utilises more or less every time he plays a solo. Call them licks, runs, chops or noodles; the subtle use of a Wah-Wah pedal, the string bend that feels as though it covers two octaves, the pentatonic scale, the mixolydian mode, the string scratch, twelve-bar blues, picking, plucking; one way or another X manages to sneak them in. From a programmer’s point of view, the more detail gathered about X’s blues technique, the more feasible it becomes to construct a computer model. In theory you should be able to go as far as programming the computer to play as if its girlfriend has just walked out, but what is always missing from a computer model is the actuality of being human; a computer playing the blues is tantamount to an oxymoron. Surely it’s all a little pointless other than in the name of science? The computer is being very clever, but when there are thousands and thousands of people in the world doing the same thing, why bother getting a computer to? The primary reason for such research is the knowledge gained through the experience of designing such a device.
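To make the modelling idea concrete, here is a toy sketch of one simple approach: catalogue the player’s recurring ‘items’, count how often one follows another, and generate new solos from those statistics. The transition table and lick names below are pure invention (Slim X is, after all, hypothetical), and a serious model of a real player would need vastly more detail than this:

```python
# Toy model of a soloist's vocabulary: a chain of "licks" where the
# next lick is drawn according to observed transition frequencies.
# All names and probabilities here are invented for illustration.

import random

transitions = {
    "bend":            {"pentatonic_run": 0.5, "string_scratch": 0.3, "wah_stab": 0.2},
    "pentatonic_run":  {"bend": 0.6, "mixolydian_lick": 0.4},
    "string_scratch":  {"bend": 0.7, "pentatonic_run": 0.3},
    "wah_stab":        {"pentatonic_run": 1.0},
    "mixolydian_lick": {"bend": 0.5, "string_scratch": 0.5},
}

def solo(length, start="bend"):
    """Walk the transition table to produce a chain of licks."""
    out = [start]
    for _ in range(length - 1):
        options = transitions[out[-1]]
        out.append(random.choices(list(options),
                                  weights=list(options.values()))[0])
    return out

print(solo(8))  # e.g. a plausible-looking string of eight licks
```

The output will always be statistically ‘in character’, which is precisely the problem the article describes: the model reproduces the habits without the human behind them.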

One way of explaining why computers have difficulty acting human lies with the fact that they are creatures of maths, and maths can only describe part of the truth. The computer carries out all of its processing at high speed using electronic pulses of 0/off and 1/on. It is not a continuous process but one divided into calculations that occur at a certain rate. A fast home computer functions at around 400MHz, and manufacturers are building them faster each year. This partial-truth representation also applies to the computer’s model of Slim X. As with all living things, we usually perceive people, while they are alive, as continuous within a linear timescale. Our lives appear to be infinitely detailed, not a series of ONs and OFFs. At the heart of all this is the comparison between our brains and computers. The human mind is certainly more mysterious, but some neurological scientists and computer scientists believe that the difference may not be as great as you think…

With the parallel development of artificial intelligence and interactive genetic algorithms (IGAs) to simulate the process of composition, future computers or their successors will certainly be able to write music. In fact, they’re not doing badly now. At the Rochester Institute of Technology, New York, John A. Biles has created GenJam, a ‘genetic algorithm-based model of a novice jazz musician learning to improvise’. This IGA system requires the computer to learn what is good and bad. Throughout its soloing (at the beginning of the learning process, solos have more in common with random melodies) a mentor, in the form of an audience to give a good average, feeds back a responsive ‘g’ for good or ‘b’ for bad. The level of goodness or badness is derived from the number of g’s and b’s sent to the computer; a really bad phrase might receive a string of b’s. Once the solo has finished, the computer processes this information using an IGA. The genetic aspect comes in the form of a ‘survival of the fittest’ program, where the best solo elements are kept and the weakest binned. The computer then uses the best elements, mixed with those created during previous solos, to create another solo. As the process continues the machine, in theory, begins to learn how to improvise. Of course, the humanising element obtained through the mentor is crucial to the improvisation, because it is this subjectivity which decides whether or not GenJam is playing something musical. However, using a mentor also affects the virtue of the machine and the validity of the music it creates. Even if it could perform fantastic solos that sound extremely humanesque, who amongst us would pay to go and see it live in concert? Despite the genius behind it, when all is said and done are we not left with a redundant showcase? I asked Al Biles, GenJam’s creator, over the Internet how he considered the aesthetic value of the music created by GenJam. He replied:

“Its full-chorus solos are competent, if often uninspiring. Trading fours and eights with it is a lot of fun, particularly now that it evolves what it just heard me play into what it will play. Its spontaneity and responsiveness is actually better than most of the humans I’ve played with at jam sessions, and my conversations with GenJam are often more musical than those with humans. On the downside, GenJam’s visual impact is decidedly underwhelming, and the energy level is limited. All in all, though, GenJam is a viable soloist, and I enjoy playing gigs with it as much as with people.

“Most of the folks in the computer music community are trying to challenge traditional notions of what music is supposed to be. I want to challenge audiences technically but not musically. The music I hear is best characterized as straight up jazz, and I want what GenJam plays to be recognizable as such. I want to meet the audience’s musical expectations, at least to some extent, but I also want to challenge their expectations of what the technology can/cannot do. By choosing a musical genre that is accessible to many, I set GenJam up to be perceived as “not convincing” because the audience has an expectation of what jazz is supposed to sound like, and GenJam is trying to meet that expectation. Most “computer music” is inaccessible to most audiences because that music doesn’t meet any of their expectations. In other words, they simply can’t connect with it, or, to put it more negatively, they “don’t get it.” I want the general public to “get” GenJam.

“When we trade fours and eights, I imitate it on purpose – that’s really the point in chase choruses. I would do the same thing at a live jam session if I’m dueling with another horn player – develop what the other player just played, and shoot it back. I’ve always been pretty good at that (Claude “Fiddler” Williams once told me I had a “fast mind”). With GenJam, I’ve created something that can develop and play back what I play at least as well as I can. This leads to some very interesting conversations, much more stimulating than those I tend to have with other human soloists at jam sessions. In that sense I am more spontaneous with GenJam, but at another level I am much more constrained.

“The rhythm section I use is “canned” MIDI files generated with Band in a Box. The good news is that the drummer doesn’t rush and the piano voicings are very clear. The bad news is that I have to hit the holes I wrote for myself in the arrangement, and I have to play more inside. If I go outside harmonically or rhythmically with my human piano player, he’ll follow me and we’ll end up some place interesting. If I go outside with GenJam, I simply sound wrong because the rhythm section can’t hear me, much less follow. I think this has made me a better player in that I hit more changes than I used to, at least according to my piano player. On the other hand, GenJam gigs are way harder for me than “human” gigs because I’m playing most of the time. In a human gig, we’ll play long solos, so there is a lot of time for me to rest up while others solo. With GenJam, I take short solos and do lots of trading (fours and eights) in an effort to break things up and feature the interactivity. That puts more pressure on my chops, particularly if I write a demanding part for myself on the head.” – Al Biles 1/98
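For readers curious what the ‘survival of the fittest’ loop described earlier might look like in code, here is a compressed, purely illustrative sketch. It is emphatically not Biles’s actual implementation, just the general shape of an interactive genetic algorithm whose fitness comes from an audience’s stream of g’s and b’s:

```python
# Illustrative-only IGA sketch: phrases are lists of pitches, the
# audience's g/b feedback becomes a fitness score, the fittest half
# survives, and the weakest are replaced by bred offspring.

import random

SCALE = [60, 62, 63, 65, 67, 69, 70, 72]  # an invented blues-ish pitch set

def random_phrase(length=8):
    return [random.choice(SCALE) for _ in range(length)]

def fitness(feedback):
    """Audience feedback: +1 per 'g', -1 per 'b'."""
    return feedback.count("g") - feedback.count("b")

def breed(parent_a, parent_b, mutation_rate=0.1):
    """One-point crossover plus occasional random mutation."""
    cut = random.randrange(1, len(parent_a))
    child = parent_a[:cut] + parent_b[cut:]
    return [random.choice(SCALE) if random.random() < mutation_rate else p
            for p in child]

def next_generation(population, scores):
    """Keep the fittest half, bin the weakest, breed replacements."""
    ranked = [p for _, p in sorted(zip(scores, population), reverse=True)]
    survivors = ranked[: len(ranked) // 2]
    children = [breed(*random.sample(survivors, 2))
                for _ in range(len(population) - len(survivors))]
    return survivors + children
```

Iterate this loop enough times, with a human mentor supplying the g’s and b’s, and the population of phrases drifts toward whatever that mentor finds musical; the subjectivity the article describes is baked into the fitness function.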

So there you have it; a computer that is more spontaneous and responsive than most musicians. It’s not a gimmick but the beginning of a new kind of musician. Like it or lump it, when the GenJams of the future play, you won’t be able to tell the difference between them and humans, except maybe in the visual sense, but that’s something that has dogged electronic music since its beginning. So back to the top and the question, ‘Will computers ever write music to compare to that written by humans?’ The bottom line should always be: if you like something then it’s good; if you don’t like something, it’s shitty. Consider the year 2050. Macintosh (who have recovered from their 1990s slump and taken over IBM) introduce their new application: Music Executives Desktop Composer V.1.1. This beast of a machine takes up the whole of a handset, and through it the greedy record exec can produce pop hits at the touch of a button! Archive banks containing mathematically reduced previous number one hits are available across the information network formerly known as the Internet. These can be randomly accessed to produce hybrid pop songs in seconds. The general public doesn’t care; they just want something to dance to. The record companies don’t even need to worry about discovering new pop musicians! They cut their expenses by half and buy bigger houses and faster cars. GenJam has its own web page at:
