The Creative Lunacy of Dadabots’ Neural Networks

When you start to take a deep look at society and the way we live our lives, it’s easy to see how basic elements of everyday life – money, gender, religion and so much more – are societal constructs that could have turned out very differently. That same thought process can be applied to the music industry. The way in which music is created and released is more an established guideline than a genuine rule, and Boston’s Dadabots are here to offer an alternative to the status quo.

The duo, made up of CJ Carr & Zack Zukowski, use a modified SampleRNN architecture (yeah, I don’t know either) to generate alternative artists that take on the traits of existing music and churn out something similar. So far, they’ve taken on the likes of NOFX, Meshuggah and The Dillinger Escape Plan, but this time, they’ve turned their attention to mathcore in the form of Psyopus’ Ideas of Reference.

If you’ve ever seen those hilarious but probably faked “I forced a bot to watch 10,000 hours of The Big Bang Theory then write a script” style tweets, it’s essentially that in musical form. Only, it’s actually ridiculously impressive rather than comically bad. The potential for what this could mean for future music is pretty damn insane, especially when one considers the idea of allowing the AI to generate a mix of different artists, as the men themselves explain far better than we ever could below. All we know is we’re happy to follow their modern sci-fi journey, and are thrilled to offer up a stream of the new album for your listening pleasure. Check out the band’s introduction to the album, followed by a stream of the music itself and an interview with Dadabots.

“Happy birthday Arpmandude, from Dadabots.

This is a parody of Ideas of Reference by Psyopus.

Music was generated autoregressively with SampleRNN, a recurrent neural network, trained on raw audio from the album Ideas of Reference. The machine listened to the album 29 times over several days. The machine generated 435 minutes of audio. A human listened to the machine audio, chose sections from varied evolution points, and taped them together into a ten-minute album.

Titles were generated by a Markov chain, except for track two. “e29 i79751” means epoch 29, iteration 79751, which means it ran through the album 29 times, a total of 79751 short audio clips, before generating this track.

The album cover was human photoshopped. It’s a parody of the original cover. The crop circle appeared July 7th, 2018 in Martinsell Hill, England.

The vocals are just generated nonsense syllables. But it’s funny to come up with lyrics. Let us know if you do.”
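For the curious, the title trick mentioned in that note is a classic word-level Markov chain: learn which word tends to follow which, then take a random walk through that table. Here’s a minimal sketch in Python; the source titles are placeholders, not the real Ideas of Reference tracklist.

```python
import random
from collections import defaultdict

# A minimal sketch of word-level Markov-chain title generation.
# The titles below are placeholders, not the actual Psyopus tracklist.
source_titles = [
    "ideas of reference",
    "imaginary friend of the machine",
    "reference to an imaginary machine",
]

# Bigram table: each word maps to the list of words seen following it.
chain = defaultdict(list)
for title in source_titles:
    words = ["<s>"] + title.split() + ["</s>"]
    for a, b in zip(words, words[1:]):
        chain[a].append(b)

def generate_title(max_words=8):
    """Random-walk the bigram table from the start token until an end token."""
    word, out = "<s>", []
    while len(out) < max_words:
        word = random.choice(chain[word])
        if word == "</s>":
            break
        out.append(word)
    return " ".join(out)

print(generate_title())
```

The “e29 i79751” tag, meanwhile, reads like a training log: epoch 29 is the 29th pass over the album, and iteration 79751 is the running count of short audio chunks seen by that point (roughly 2,750 chunks per pass).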



Indeed, the future is no longer a faraway land. AI is now a more comprehensible, everyday entity, with the now famous Sophia baiting Elon Musk and throwing shade on Twitter. It was only a matter of time before robots had a crack at the metal sphere, and so it is with Dadabots. We sat down with the duo to talk tech, neural networks, and the next level of music.

Let’s start with the big question. Why have you done this?

Mischief.

Music has persisted since the dawn of man and we’ve re-trodden the same paths many times. Do you think there’s another fundamentally different sort of music that man hasn’t encountered yet that independent machine thinking can reveal?

You nailed it. That’s really why we’re so curious about doing this.

Mike Patton predicted this back in 1992:

[Images: Mike Patton, 1992]

We want to build futuristic spacemobiles so that artists can drive music off the planet into the latent space between all the music. Let’s go where no music has gone before.

Given that similar approaches have been applied to Beethoven and Miles Davis, what made you choose black metal and mathcore?

Short answer:

We love extreme music. I spent my youth learning Emperor and Dillinger Escape Plan songs on guitar. When Zack and I were working as interns at Berklee we would listen to Krallice all day.

Medium answer:

When we were experimenting with genres, we actually got the best results with black metal and mathcore. Why? I think because of the noise and chaos. Noise masks the imperfections, and the chaos (a unique characteristic of unconditional neural synthesis) seems to actually contribute something to the aesthetic of those genres; it pushes them somewhere they haven’t been yet.

Long answer:

Nobody was producing music with neural networks using *raw audio*.

A couple decades ago, David Cope’s EMI project was impersonating Beethoven using a symbolic representation of music. Generating sheet music, basically. But there’s only so far you can go with that technique. I can’t match the timbre of someone’s voice. I can’t match the production style of a metal album.

Some orchestra could play generated sheet music, I could say “we discovered this long-lost Beethoven piece!” and you might be fooled into thinking Beethoven composed it. But if I generated Psyopus tablature, got some band to play it, no one would think it’s a long-lost Psyopus song from their studio sessions that didn’t make the cut. That’s not Adam’s screams. Those aren’t the idiosyncrasies of Arpmandude’s guitar. That’s not Doug White’s uniquely nostalgic production style from that record.

Orchestral synthesisers are so good now I might get fooled into thinking a real orchestra played synthesised Beethoven. But what about a “Psyopus synthesiser”?

Why is it harder? Raw audio has 44100 time steps a second. These are huuuge sequences. Up to 10,000x larger than sheet music. We needed heavy-duty hardware only recently available to us plebes. Also, we needed cleverer algorithms. DeepMind and MILA paved the way by publishing their research into neural text-to-speech engines in 2016. We ran with it and brought it to extreme music.
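To put that sequence-length gap in rough numbers (the 44,100 figure is Dadabots’ own; the note density of a score is our assumption for illustration):

```python
# Rough arithmetic on sequence lengths for a three-minute piece. The sample
# rate comes from the interview; the notes-per-second density of a symbolic
# score is an assumed figure, purely for comparison.
sample_rate = 44_100                     # raw audio time steps per second
seconds = 3 * 60

raw_audio_steps = sample_rate * seconds  # 7,938,000 time steps
symbolic_steps = 10 * seconds            # ~10 note events/sec -> 1,800 steps

print(raw_audio_steps, symbolic_steps, raw_audio_steps // symbolic_steps)
# 7938000 1800 4410 -- sparser scores push the ratio toward the 10,000x quoted above
```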

How have the bands you’ve processed reacted to what the algorithm has done to their work?

“OMG THESE ARE INSANE” – Inzane Johnny, Crazy Doberman

And here’s UK beatbox champion Reeps One talking about his reaction. There’s a documentary coming out called “We Speak Music” where he performs a duet with his bot doppelganger in the anechoic chamber at Bell Labs. It’s dope.

How do the results manifest when processing multiple or similar sources?

Our technique works best on an album’s worth of music with similar instrumentation and production style. It’s not very robust; it takes trial and error to get decent results. Good-sounding fusions that don’t sound like mush have been challenging. We’re especially excited about nailing fusions though. Lots of bands have requested it. That’s when it’s going to get really, really fun. Finally, we will hear Psyopus play some Skynyrd… oh god…

The Bandcamp site states that the material is processed using the same parameters; hypothetically, how would giving the network more time to process the source material affect the outcome?

The trend is: it first learns short-timescale patterns (a snare drum hit, the timbre of a scream), then longer (a sloppy guitar riff), then longer (a constant tempo). The more it trains, the more it makes longer-timescale patterns. But there are diminishing returns. Also, the more it trains, the more it memorises, so some of the most interesting sounds come from when it’s only half-trained.
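In practice that suggests the workflow from the album notes: snapshot the model as it trains and sample from several stages rather than only the final one. A toy sketch of that curation loop, with a stub standing in for the real SampleRNN sampling step:

```python
# Toy sketch of sampling from "varied evolution points". The generator here
# is a placeholder so the loop runs as written; the real step would load the
# weights saved at each epoch and autoregressively generate raw audio.

def sample_from_checkpoint(epoch: int, seconds: int = 60) -> str:
    # Placeholder for the real SampleRNN sampling call.
    return f"{seconds}s of audio from the epoch-{epoch} checkpoint"

checkpoints = [5, 10, 20, 29]   # early = raw timbre, late = riffs and steadier tempo
takes = {epoch: [sample_from_checkpoint(epoch) for _ in range(3)]
         for epoch in checkpoints}

# A human then listens through `takes` and tapes the keepers together.
for epoch, clips in takes.items():
    print(epoch, clips)
```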

From the information online it seems that the sole human element is in the editing of the results. How close is the network to being able to perform that action?

Auto-curation is not on our radar. Takes the fun out of it. The fun is in making the final artistic choices. That said… it’s a challenging problem. If the net consistently made interesting music in real-time, we would do stranger things…

The tagline on the Bandcamp page is ‘We write programs to develop artificial artists’; what’s the plan for the algorithm in the long run? Are you aiming to have the network create its own compositions in the vein of Shimon (the marimba-playing robot at Georgia Tech)?

Not exactly. Robotic projects like Captured! By Robots, Compressorhead, and Shimon are impressive. But our aim is human augmentation.

Few people write music, but almost everybody has a music aesthetic. Imagine a music production tool where you simply feed it music influences, like a Furby. It starts generating new music. You sculpt it to your aesthetic. Imagine hearing everyone’s crazy weird music aesthetic come out of their Furby.

Really this is just meta music – instead of playing the music, we are playing the musician.

Which artists are you considering to push this process further?

We think prolific avant musicians like Mike Patton, who relentlessly take music where it hasn’t been, are right for the job. And artists like Drumcorps, Jennifer Walshe, Inzane Johnny, Igorrr, Venetian Snares, Zach Hill (Hella, Death Grips), Colin Marston, Mick Barr, Lightning Bolt, Oneohtrix Point Never, Daveed Diggs (clipping.), Yamantaka Eye, or anyone who’s played at The Stone — we want to give them artistic superweapons and see what fires out of their brains. But really, if we can make it accessible enough, there will be kids taking it places no one’s ever dreamed.

We wanna see Sander Dieleman (one of the inventors of WaveNet, admin at got-djent) make neural metal. We all know he would make amazing neural metal.

Words: John Tron Davidson, George Parr
