We visited our friend Roger Linn at his home in Los Altos, California, in the heart of Silicon Valley.
We spent a pleasant time together talking about the relationship between music and technology.
Roger Linn is a designer of electronic music products.
He’s best known for the first digital drum machines, which he made starting in 1979: for example, the LM-1, the LinnDrum, and the Linn 9000, which were used on many pop records in the 1980s by artists such as Prince and Madonna, just to mention a few.
Then, in the 90s, Roger also made the MPC drum machines with the company Akai, and those were used heavily in hip-hop production and other dance-related styles.
Then, in the 2000s, Roger focused on guitar products such as the AdrenaLinn pedal. These products helped transform electronic music by bringing to the guitar world the ideas and tools that people had been using with keyboards and modular synthesizers.
One of his most recent projects is an instrument called the LinnStrument, which attempts to bring the expressive performance control of acoustic instruments to electronic sound, as an alternative to the MIDI keyboard, which is essentially just a bunch of on-off switches.
In your opinion, what is the role of technology in relation to music?
Well, it’s a great enabler, as it is in all other areas of society. In music in particular, MIDI has allowed a dramatic improvement, because it separated the human interface of the musical instrument from the sound generation.
Before MIDI, if you wanted a violin sound, you had to play the violin, which is very hard! You’d have to hold it up between your chin and your neck and get all kinds of muscle pains just to get a violin sound, and you’d have to spend years just to play it in tune!
If you wanted a saxophone sound, you’d have to spend a few more years learning where the notes are on a saxophone! In other words, the human interface was very much limited by the need for the instrument to make an acoustic sound.
MIDI changed all that 34 or 35 years ago: you could choose your human interface (usually people use the standard MIDI piano keyboard) and then separately choose your sound.
Of course, the problem we found is that the on-off switches of MIDI keyboards are not very well suited to creating the wonderful performance gestures of a violin, a guitar, or a saxophone, and that’s what’s changing these days.
The most important thing is that technology has not only allowed that separation of the human interface from the sound generation of musical instruments, but has also allowed us to record performance information as you would on a tape recorder, to change the sound of a performance after you’ve recorded it, and to do many other things we’re all aware of. So technology has been a wonderful boon for music in general, and clearly, if you listen to music today, it has influenced just about everything in music.
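(To make the “on-off switches” point concrete, here is a rough sketch, not from the interview, of the raw MIDI messages involved. A keyboard key sends little more than a note-on and a note-off, while an expressive controller such as the LinnStrument also streams continuous per-note pitch and pressure messages while the note sounds. The byte layouts below are standard MIDI; the 48-semitone bend range is simply a common MPE convention, assumed here for illustration.)

```python
def note_on(channel, note, velocity):
    # A keyboard key press is essentially one of these...
    return bytes([0x90 | channel, note, velocity])

def note_off(channel, note):
    # ...and one of these when the key is released. Nothing in between.
    return bytes([0x80 | channel, note, 0])

def pitch_bend(channel, semitone_offset, bend_range=48.0):
    # Continuous pitch control (vibrato, portamento, glissando): a 14-bit
    # value centered at 8192, sent many times per second while the note sounds.
    value = int(8192 + (semitone_offset / bend_range) * 8192)
    value = max(0, min(16383, value))
    return bytes([0xE0 | channel, value & 0x7F, (value >> 7) & 0x7F])

def channel_pressure(channel, pressure):
    # Continuous loudness/timbre control from finger pressure (aftertouch).
    return bytes([0xD0 | channel, pressure])
```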
What’s the relationship between SWAM and LinnStrument?
I would say that they’re tremendously important!
One thing that’s so interesting is that certain instrument sounds stand the test of time. Over the centuries, while many instruments died away, the violin, the cello, the saxophone, the clarinet, and so many other instruments survived the Darwinism of musical instruments and remain in today’s orchestra or band.
Guitar and piano are others. This is because people have agreed these are great sounds. In synthesis we explore new sounds, but sometimes we forget that there are existing sounds with wonderful qualities. And by the way, we’ve all heard violin performances for many years, and when we hear it again, whether it’s a real violin or a SWAM instrument, there’s an emotion that happens.
Not only is it a good sound that has survived the test of time but we’ve heard so many beautiful performances with it.
So all that power is packed into each SWAM instrument, and when I play the SWAM instruments with my instrument and I’m able to do those same violin gestures, like vibrato, portamento, or glissando, it raises an emotion in me that I could not have otherwise, because I don’t know how to play the violin.
So I think the SWAM instruments are tremendously important, and nobody else has done what Audio Modeling has done. It’s so much better for expressive control than samples, because samples are just snapshots; it is so wonderful to get a sound that is better than any sampling instrument, yet have it be so malleable, to be able to change timbre, expression, and pitch, and have it all be so accurate. So I think it’s just wonderful. The SWAM instruments have really shown the power of the instrument to my users, and so many of them play this one instrument, so I have nothing but gratitude for Audio Modeling.
How about the near future of technology and musical instruments?
Well, one thing I’m seeing which is very gratifying is the increasing popularity of expressive instruments like my LinnStrument, or the ROLI Seaboard, or other instruments… and it is so wonderful to see this, because I’ve missed expressive solos in popular music.
I’ve missed expressive solo performances in music for picture and in music for dance. It seems this is because everyone’s playing music with on-off switches, which is what a MIDI keyboard is, and music has gotten more boring.
And most importantly, if you listen to electronic music these days, it’s mostly being used as background: background for singers in pop songs, background for picture in film music, background for dance in electronic dance music…
I think what is so exciting is to see this adoption of expressive instruments and expressive sound generators, like the SWAM instruments. The SWAM instruments are wonderful for faithfully emulating the original orchestral instruments, and nothing else comes close, because Audio Modeling’s SWAM technology is so incredible! But on top of that, there is something more I’d love to see.
I’d love to see a physical modeling synthesizer come out of Audio Modeling that would use your technology not to recreate existing instruments but to create new sounds: sounds that have a hint of a violin or a hint of a saxophone, but that the user can take and turn into a new instrument.
Maybe a hybrid between a woodwind and a bowed string, or something that is a hybrid between a violin and an analog synthesizer sound, or something like that.
It would allow someone to create their own sounds without being so limited by the standard oscillator-and-filter model; physical modeling lets you break out of that.
I think no one has created a good human interface and user experience for that, and I think Audio Modeling is perfectly positioned to do it!
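(For readers unfamiliar with the term, “physical modeling” means synthesizing sound by simulating the physics of an instrument rather than stacking oscillators and filters. Audio Modeling’s SWAM technology is proprietary, but a minimal textbook illustration of the idea is the classic Karplus-Strong plucked-string algorithm, sketched below in Python; the parameter values are arbitrary and only meant to show the principle.)

```python
import random

def pluck(frequency=220.0, duration=2.0, sample_rate=44100, decay=0.996):
    """Karplus-Strong plucked string: a short delay line, excited with noise,
    is repeatedly averaged (a crude low-pass filter) so the 'string' rings
    and slowly decays. Pitch comes from the delay-line length, not an oscillator."""
    n = int(sample_rate / frequency)                       # delay-line length sets the pitch
    buf = [random.uniform(-1.0, 1.0) for _ in range(n)]    # the 'pluck': a burst of noise
    out = []
    for i in range(int(sample_rate * duration)):
        out.append(buf[i % n])
        # feedback loop: average adjacent samples and damp slightly each pass
        buf[i % n] = decay * 0.5 * (buf[i % n] + buf[(i + 1) % n])
    return out  # samples in [-1, 1], e.g. to write to a WAV file
```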
Do you think that AI could have a role in music in the future?
It’s hard to say. I think there are some cases where machine learning could be used to help, perhaps with the user experience, designing the user experience. But at the same time, we’re in a unique position in creating electronic musical instruments, because we’re looking for instruments that enhance the experience of someone expressing human creativity. In many other areas of AI, we’re looking to replace the human, but that’s different with musical instruments: we’re looking to augment uniquely human skills.
So it’s hard to say exactly how machine learning would help in that regard, but I’m sure I will be surprised, because I’ve been surprised before!