Look into the past, embrace the future

April 10, 2019 | Know-how

Artificial Intelligence is a powerful term. It triggers both euphoria and dread – as many other technologies no doubt did in the past. It has taken quite some time, creativity and vision to even get to the point where musicians can record and edit their music in their bedrooms and share it via huge global networks. Not to mention the open-mindedness and drive it has taken for millions of music lovers to now be able to receive, for example, a weekly playlist featuring new music tailored to their personal tastes.

In retrospect

Quite often, a pattern emerges when you look back on the technological advances in audio engineering and music production. When a new technology was first developed, it was initially reserved for those who could afford the personnel and expertise it required. The technology itself was often large, heavy and complex to build and, in retrospect, quite limited in its functionality.

But these emerging technologies opened the gateway to more affordable, flexible solutions. Soon, people with all their creativity and talent (and limited budgets) had access to technologies that enabled them to create, edit and distribute music.

Take, for example, time-based effects and their development from the 1930s to the 1990s.

View of the reverberation chamber at the IETR laboratory, Rennes.

Photo: courtesy of Manuamador (CC BY-SA 3.0)

The 30s

“Echo chambers” were built for broadcasting and recording purposes. They were long, rectangular spaces with sound-reflecting walls, a low ceiling, a speaker at one end and a microphone at the other.

The 40s

For the first time, a pop recording was enhanced with artificial reverb. The effect was created in the studio bathroom.

Les Paul added an extra playback head to his tape recorder to achieve a slapback echo and was the first to use true delay as an effect, independent of echo and reverb.


EchoSonic by Ray Butts

Photo: EchoSonic, courtesy of Frank Roy

The 50s

Ray Butts built the EchoSonic, a portable combo guitar amplifier with a bass/treble control, separate volume controls for mic and guitar, and a built-in tape echo effect.

Multitrack recording was developed. It was now possible to add effects to selected instruments in a recording, enabling a flexibility that hadn’t existed before.

EMT released the first plate reverb: the EMT 140 Reverberation Unit. It weighed a whopping 600 pounds.

The 60s

The Hammond Organ Company released the Accutronics Type 4 spring reverb, which was smaller than a briefcase. Leo Fender added the Type 4 to his famous Fender Twin Reverb.


EMT 250 Digital Reverb

Photo: EMT 250, courtesy of John Schimpf

The 70s

Integrated-circuit chips enabled the design of more sophisticated acoustic effects. Thanks to the BBD (bucket-brigade device) integrated circuit, time-based modulation effects such as chorus, flanger and analog delay were developed.

The EMT 250 Electronic Reverberator Unit was introduced. This floor-standing digital reverb was the first to offer multi-effects and cost around $20,000.

Eventide Clock Works Inc. released the H910 Harmonizer, the first commercially available digital audio effects device, combining pitch shifting, a short digital delay and feedback control.

The 80s

Groundbreaking digital delays were developed, such as the PCM 41 Digital Delay Processor by Lexicon and the SDD-3000 by Korg.


Photo: Cubase 1.0, courtesy of Steinberg

The 90s

Cubase VST by Steinberg was released for the Apple Macintosh. It brought EQ and multiple audio effects to the personal computer.

Digital sensation

In the 1970s, digital technologies began to emerge, among them the first attempts at DAWs. A decade later, this led to the introduction of the CD and MIDI. The ability to convert analog signals into digital signals was a game changer. Add to this the increasing processing power and speed of commercial computers, and a new era dawned in the music industry.

The conversion from analog to digital signals is only the beginning of digital audio signal processing. From there, the possibilities for audio reproduction and manipulation are virtually limitless. Things can now be done that were unheard of before, such as perfect time delay, linear-phase filters or convolution reverb.
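To make one of those possibilities concrete, here is a minimal sketch of a convolution reverb in Python: the dry recording is convolved with a recorded impulse response of a real space. The file names, the dry/wet ratio and the use of the soundfile and scipy libraries are illustrative assumptions, not a description of any particular product.

```python
import numpy as np
import soundfile as sf
from scipy.signal import fftconvolve

# Hypothetical file names; any mono or stereo WAV files would do.
dry, sr = sf.read("dry_vocal.wav")
ir, sr_ir = sf.read("church_ir.wav")

# Work in mono to keep the example short.
dry = dry.mean(axis=1) if dry.ndim > 1 else dry
ir = ir.mean(axis=1) if ir.ndim > 1 else ir
assert sr == sr_ir, "signal and impulse response must share one sample rate"

# The actual convolution reverb: every sample of the dry signal
# excites the recorded impulse response of the room.
wet = fftconvolve(dry, ir, mode="full")
wet /= np.max(np.abs(wet))  # normalize to avoid clipping

# Simple dry/wet blend; pad the dry signal to the length of the reverb tail.
mix = 0.7 * np.pad(dry, (0, len(wet) - len(dry))) + 0.3 * wet
sf.write("vocal_with_reverb.wav", mix, sr)
```

A few lines of code place a voice in a cathedral – something that, in the analog world, required either the cathedral itself or 600 pounds of steel plate.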

Combine and gain

The music industry didn’t invent the digitization of signals, but this technology has certainly triggered some amazing developments for audio engineers, music producers and musicians – be they professional or amateur. It’s a similar story with Artificial Intelligence. By taking a technology like A.I. and applying it in an area other than the one it was originally intended for, new possibilities open up.

For example, machine learning – a sub-discipline of A.I. – analyzes vast amounts of data, learns from it and delivers a decision or a clustering. Combine machine learning with the tools of music information retrieval (MIR), a discipline dedicated to extracting musical characteristics such as mood and tempo, and you get services like Spotify, Shazam and SoundCloud. With the mind-blowing amount of music available nowadays, it would be impossible to manually assign each song to, for example, a specific genre.
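As a rough illustration of how machine learning and MIR fit together, the sketch below extracts a handful of musical characteristics (tempo and timbre) with librosa and groups tracks with a simple k-means clustering from scikit-learn. The file names, feature set and number of clusters are assumptions made purely for illustration – real services work at a vastly larger scale and with far richer models.

```python
import numpy as np
import librosa
from sklearn.cluster import KMeans

# Hypothetical track list; in practice this would be a huge catalogue.
tracks = ["song_a.wav", "song_b.wav", "song_c.wav", "song_d.wav"]

features = []
for path in tracks:
    y, sr = librosa.load(path, mono=True)
    tempo, _ = librosa.beat.beat_track(y=y, sr=sr)      # rough tempo estimate (BPM)
    tempo = float(np.atleast_1d(tempo)[0])
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)  # coarse timbre description
    features.append(np.concatenate(([tempo], mfcc.mean(axis=1))))

# Group the tracks by similarity of their extracted characteristics.
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(np.array(features))
for path, label in zip(tracks, labels):
    print(f"{path} -> cluster {label}")
```

The point is not the particular features or algorithm, but the division of labor: MIR turns audio into measurable characteristics, and machine learning finds structure in them at a scale no human curator could manage.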

More access, less grunt work

Artificial Intelligence is already a part of the world of audio mixing and mastering. Admittedly, not a huge part yet but nonetheless a rapidly growing one.

When it comes to creativity though – and making music is highly creative – there are concerns. What if the tools of A.I. overtake my creation? What if personal creative input vanishes and is replaced by A.I.-generated muzak? It isn’t entirely unimaginable that similar questions were asked whenever a new technology found its way into a creative industry.

Here at sonible, we think that Artificial Intelligence enriches the audio software market. Why? Because it gives aspiring talents access to vast and rich audio technology. Not everybody has the opportunity to acquire considerable theoretical knowledge before they can create music; regardless, they shouldn’t be denied the chance, or the means, to create. Professionals, on the other hand, are often confronted with the pressures of time. A.I.-enhanced software can do the necessary grunt work for them, leaving more time for their creativity.

Human creativity is a powerful force and for truly creative people, technologies are merely tools to assist them in realizing their visions despite the limitations of money, opportunity or time.