Adam Jay on building live techno sets on Elektron gear – and why you should stay punk

The system has failed us, but not Adam Jay. He’s here to show us how he rigs up his latest live techno sets. And he can get three tiny waveforms on a $300 Elektron to make you want to dance.

There’s some fantastic music here, so feel free to sit back or get up and let your smartphone’s step counter know you’re still very much alive. But if you’re wondering how anyone plays like this live, he talks us through his setup.

And yeah, if you need any added motivation to work on your chops as a stay-at-home producer in isolation, this is like a free cross-training decathlon intensive master class. It’s not about amassing a lot of gear – what the Indianapolis-based artist with a deep Detroit soul has amassed is a ton of skill.

A single-cycle exercise

“The System Has Failed Us” is a live techno track that channels “frustration with our current global predicament”:

…how our leaders have failed us and how we must work together to overcome the reckless choices made by those who have abused their power. I could not sleep last night and had to get this out of my system.

The track was a way to exorcise frustration, but also served as an exercise in minimalism:

[It’s] all single cycle samples. Trying to find out how far I could push the machine with the minimum amount of source material – and it’s only three separate single cycle sample .wavs at that, using them across 6 tracks on the Model Samples.

The rig:

  • Elektron Model:Samples, with 3 single-cycle samples (556 bytes in length!), across six tracks
  • “Heavy” LFO modulation for the kick and bass and hat (so you get them out of the same waveforms)
  • Model:Samples output hits an Alesis Micro Limiter and some light Octatrack effects (EQ/Compressor)
  • Midi Fighter Twister controller controls a bass equalizer (also hosted on the Octatrack).

In a nice instance of Elektron sonic recycling, those 556 bytes x 3 were originally produced by an Elektron Digitone (kick, bass) and Analog Four (hat). The samples were created by Taro, and you can grab them for yourself – they’re free:

https://freewavetables.glideapp.io

https://www.elektronauts.com/t/free-wavetables/121639
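Incidentally, that 556-byte figure checks out: a standard PCM .wav carries a 44-byte header, and 44 + 256 samples × 2 bytes (16-bit mono) comes to exactly 556. If you want to roll your own single-cycle samples, here’s a minimal sketch using Python’s built-in wave module – the sine waveform is purely illustrative, not Taro’s actual wavetables:

```python
import math
import struct
import wave

def write_single_cycle(path, length=256, rate=44100):
    """Write one period of a sine wave as a 16-bit mono .wav file.
    44-byte WAV header + 256 two-byte samples = 556 bytes on disk."""
    frames = b"".join(
        struct.pack("<h", int(32000 * math.sin(2 * math.pi * i / length)))
        for i in range(length)
    )
    with wave.open(path, "wb") as w:
        w.setnchannels(1)   # mono
        w.setsampwidth(2)   # 16-bit samples
        w.setframerate(rate)
        w.writeframes(frames)

write_single_cycle("cycle.wav")
```

Load the result into any sampler, loop it, and modulate from there – the Model:Samples approach above is the same idea, pushed hard with LFOs.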

A full-length set – and how it’s structured

That’s one track, but here’s an expanded set.

I was really curious about how he puts the pieces together, so Adam detailed the setup for CDM. The basic idea is to play the Model:Samples as the main sound source, with the Alesis hardware handling dynamics processing and the Octatrack providing some clever performance routings – messing about with re-sampling loops and adding effects.

The Octatrack is the performance command station – adding loops and effects, routing the Model:Samples, and taking additional control from the MIDI Fighter Twister for hands-on encoder moves. The ingredients:

  • Elektron Octatrack is MIDI clock host, sending clock to the Model:Samples
  • Elektron Model:Samples is the sound source for “all the material”
  • Model:Samples signal chain: Alesis MicroLimiter > Octatrack AB input
  • Octatrack track 3 is a THRU track (Model:Samples with Compressor and EQ in the two effects slots)
  • Track 7 is a FLEX track, “recording/looping/mangling the T3/Model:Samples audio.”
  • Track 5 is another FLEX track “with just some other very short loops previously recorded, made on the Model:Samples with heavy EQ filtering and Dark Reverb in the two FX slots. Re-sequenced on Track 5 on the fly, as needed.”
  • Track 8 Master “has a dark reverb that I tweak during some of the dubbier bits.”

And then there’s control: “The Midi Fighter Twister controller goes through a USB MIDI host box to convert USB-A to 5 PIN DIN MIDI. The Twister controls Octatrack levels, EQs, reverb sends, allowing me to creatively mix between the thru and flex tracks, without any paging around on the Octatrack.”

Now obviously, keeping these tunes together means there’s some pre-programming – but then it’s about the ability to mess with it, thanks to the routing above. He explains:

Ultimately, each tune is a single Model Samples pattern, tweaked and freaked live. And the Octatrack is there to loop it, effect it, and mix the live-looped Model:Samples for transitions.

The conceptual approach is to use the 6-track limitation as an advantage and make sure each sound is a good fit, since there are so few tracks to work with — and to set up the patterns so they can be played live in interesting ways that keep moving and stay dance-y.

Hooks are heavily filter-modulated, and the Model:Samples’ Pioneer DJM-style low-pass/high-pass filter is very beneficial in this regard. They often come from short recorded Analog Four synth phrases that have some motion in them already; modulating start point and/or filter brings them to life. Bass lines are often the same samples as the kicks, with the start point shaved to take off the attack, and then pitch/distortion/filter to get them grooving. The latch-able FILL mode often works as a seventh track, mostly for pattern variation, as each tune is only a single pattern.

Cramming in as much dance-able content into each pattern was the key to keeping it interesting. The Octatrack just adds a bit of trickery-flair and keeps the transitions seamless. I was a DJ first, so my live sets have always had that mixed element to them.

Keep techno punk

Oh yeah, and Adam has a message for you: stay punk. Play cheap.

Doing all the creation on the Model:Samples is also a big middle finger to those who like to poo-poo low cost instruments to make themselves feel better about their $3,000 synthesizer expenditures – the people who call instruments without a long list of features “cheap plastic toys” and never add anything of substance to the conversation.

Techno should be more punk, more visceral, and more pushing what you have to the limits. Some of the most inspiring stuff I’ve ever heard in my life came from an old friend on his Roland R-8 [drum machine] through an AIWA boombox. I’m all for Elektron and Korg and Roland and Novation pushing out inspiring, capable instruments to the masses. Everyone should have the option to be able to express themselves and get their message across, no matter what their budget is.

This narrative that now that Elektron is more appealing, and more affordable to people who can’t afford the Digi or big boxes… that the “glory days are over”? Oh man, I couldn’t disagree more. The most creative, brilliant, and under-served people I know are the ones who can only afford the $299 instrument. Even before the pandemic, they were struggling, disadvantaged, living life day to day, check to check, working multiple jobs. They are no less deserving of quality tools to express themselves.

I would argue that creating this lower entry point to far more people is when the glory days actually begin. Far more music will be made on these boxes by a greater number of people. And that number will include more young people, and more disadvantaged people than before. This excites me the most. Their voices are equally valid and should be equally valued. If that reach, that influence on the populace is less “glorious” than a metal case with more LFOs, then I think some have lost the plot.

Music is here to connect humans together. The connection I have with someone else I do not know, when I hear and enjoy their music… it’s like nothing else in the world. Why on Earth would anyone want to keep the gates up on that? Why would anyone want to wall themselves in with only the people who can afford more expensive tools?

For some musical evidence of that, Adam has pulled off not one but three exceptional, forward-thinking electro albums on Detroit Underground, including this year’s terrific Inoperable Data (a title that kind of sums up our brains right now, too).

Have a listen. No further witnesses; the defense rests.

You might want to have a look at that one, too, as there are videos for every single track:

https://detund.bandcamp.com/album/maxia-zeta

And for still more Adam Jay action, check the mastering credits for the likes of Mike Parker, Noncompliant, Daniel Troberg, and Kero. (To butcher the 1980s BASF ad, Adam didn’t create some of the music you hear. He’s made some of the music you hear bang harder.)

Thanks, Adam, we may be checking in with you routinely in these strange times!

The post Adam Jay on building live techno sets on Elektron gear – and why you should stay punk appeared first on CDM Create Digital Music.

AI upscaling makes this Lumiere Bros film look new – and you can use the same technique

A.I.! Good gawd y’all – what is it good for? Absolutely … upscaling, actually. Some of machine learning’s powers may prove to be simple but transformative.

And in fact, this “enhance” feature we always imagined from sci-fi becomes real. Just watch as a pioneering Lumiere Brothers film is transformed so it seems like something shot with money from the Polish government and screened at a big arty film festival, not 1896. It’s spooky.

It’s the work of Denis Shiryaev. (If you speak Russian, you can also follow his Telegram channel.) Here’s the original source, which isn’t necessarily even a perfect archive:

It’s easy to see the possibilities here – this is a dream both for archivists and people wanting to economically and creatively push the boundaries of high-framerate and slow-motion footage. What’s remarkable is that there’s a workflow here you might use on your own computer.

And while there are legitimate fears of AI in black boxes controlled by states and large corporations, here the results are either open source or available commercially. There are two tools here.

Enlarging photos and videos is the work of a commercial tool – Topaz’s Gigapixel AI – which promises up to 600% scaling “while preserving image quality.”

It’s US$99.99, which seems well worth it for the quality payoff. (More for commercial licenses. There’s also a free trial available.) Uniquely, the tool is also optimized for Intel Core processors with Iris Plus graphics, so you don’t need to fire up a dedicated GPU like an NVIDIA card. They don’t say a lot about how it works, other than that it’s a deep learning neural network.

We can guess, though. The trick is that machine learning trains on existing data of high-res images to allow mathematical prediction on lower-resolution images. There’s been copious documentation of AI-powered upscaling, and why it works mathematically better than traditional interpolation algorithms. (This video is an example.) Many of those used GANs (generative adversarial networks), though, and I think it’s a safe bet that Gigapixel is closer to this (also slightly implied by the language Gigapixel uses):

Deep learning based super resolution, without using a GAN [Towards data science]

Some more expert data scientists may be able to fill in details, but at least that article would get you started if you’re curious to roll your own custom solution. (Unless you’re handy with Intel optimization, it’s worth the hundred bucks – but for those of you who are advanced coders and data scientists, knock yourselves out.)
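To see what those learned methods are up against, here’s the traditional baseline in miniature: a pure-Python bilinear upscaler. Every output pixel is just a weighted average of existing pixels – no new detail is invented, which is exactly the gap the neural-network approaches fill. (An illustrative toy, not any product’s algorithm.)

```python
def upscale_bilinear(img, factor):
    """Upscale a 2D grayscale image (list of lists) by bilinear interpolation.
    Each output pixel is a weighted average of its four nearest source pixels."""
    h, w = len(img), len(img[0])
    out = []
    for y in range(h * factor):
        # map output coordinates back into the source image
        sy = min(y / factor, h - 1)
        y0, fy = int(sy), sy - int(sy)
        y1 = min(y0 + 1, h - 1)
        row = []
        for x in range(w * factor):
            sx = min(x / factor, w - 1)
            x0, fx = int(sx), sx - int(sx)
            x1 = min(x0 + 1, w - 1)
            top = img[y0][x0] * (1 - fx) + img[y0][x1] * fx
            bot = img[y1][x0] * (1 - fx) + img[y1][x1] * fx
            row.append(top * (1 - fy) + bot * fy)
        out.append(row)
    return out

small = [[0, 100], [100, 0]]
big = upscale_bilinear(small, 2)   # 4x4 result; in-between pixels are averages
```

A checkerboard upscaled this way turns into a smooth gray gradient – the smearing you see in classic enlargements.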

The quality of motion may be just as important, and that side of this example is free. To increase the framerate, they employ a technique developed by an academic-private partnership (Google, the University of California, Merced, and Shanghai Jiao Tong University):

Depth-Aware Video Frame Interpolation

Short version – you combine some good old-fashioned optical flow prediction together with convolutional neural networks, and then use a depth map so that big objects moving through the frame don’t totally screw up the processing.
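For a sense of why all that machinery is needed, here’s the naive alternative in toy form: an in-between frame made by simply cross-dissolving its neighbors. On anything that moves, this ghosts the object in both positions – optical flow warps pixels along their estimated motion instead, and the depth map decides which object wins when they overlap. (Illustrative sketch only.)

```python
def blend_midframe(frame_a, frame_b, t=0.5):
    """Naive in-between frame: per-pixel linear blend of two frames.
    Flow-based methods like DAIN instead move pixels along estimated
    motion vectors, using depth to resolve occlusions."""
    return [
        [a * (1 - t) + b * t for a, b in zip(row_a, row_b)]
        for row_a, row_b in zip(frame_a, frame_b)
    ]

# a bright pixel moving right: naive blending ghosts it in both places
f0 = [[255, 0, 0]]
f1 = [[0, 0, 255]]
mid = blend_midframe(f0, f1)   # half-bright in both positions, not in between
```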

Result – freakin’ awesome slow mo go karts, that’s what! Go, math!

This also illustrates that automation isn’t necessarily the enemy. Remember watching huge lists of low-wage animators scroll past at the end of movies? That might well be something you want to automate (in-betweening) in favor of more-skilled design. Watch this:

A lot of the public misperception of AI is that it will make the animated movie, because technology is “always getting better” (which rather confuses Moore’s Law and the human brain – not related). It may be more accurate to say that these processes will excel at pushing the boundaries of some of our tech (like CCD sensors, which eventually run into the laws of physics). And they may well automate processes that were rote work to begin with, like in-betweening frames of animation, which is a tedious task that was already getting pushed to cheap labor markets.

I don’t want to wade into that, necessarily – animation isn’t my field, let alone labor practices. But suffice to say even a quick Google search will quickly come up with stories like this article on Filipino animators and low wages and poor conditions. Of course, the bad news is, just as those workers collectivize, AI could automate their job away entirely. But it might also mean a Filipino animation company would face a level playing field using this software with the companies that once hired them, only now with the ability to do actual creative work.

Anyway, that’s only animation; you can’t outsource your crappy video and photos, so it’s a moot point there.

Another common misconception – perhaps one even shared by some sloppy programmers – is that processes improve the more computational resources you throw at them. That’s not necessarily the case – sometimes demonstrably not the case. In any event, the fact that these techniques work now, and in ways that are pleasing to the eye, means you don’t have to mess with ill-informed hypothetical futures.

I spotted this on the VJ Union Facebook group, where Sean Caruso suggests this workflow: since you can only use Topaz on sequences of images, you can import into After Effects and go on and use Twixtor Pro to double the framerate, too. Of course, coders and people handy with tools like ffmpeg won’t need the Adobe subscription. (ffmpeg, not so much? There’s a CDM story for that, with a useful comment thread, too.)
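For the ffmpeg route, the round trip is: explode the video into an image sequence, run the upscaler over the frames, then reassemble at your target framerate. A small sketch that just assembles the two standard ffmpeg command lines (the filenames and framerate here are placeholders):

```python
def ffmpeg_roundtrip(src, frames_dir, out, fps=24):
    """Build ffmpeg commands: split a video into numbered PNG frames,
    then reassemble the (upscaled) frames into a new H.264 video."""
    extract = ["ffmpeg", "-i", src,
               f"{frames_dir}/frame_%05d.png"]
    rebuild = ["ffmpeg", "-framerate", str(fps),
               "-i", f"{frames_dir}/frame_%05d.png",
               "-c:v", "libx264", "-pix_fmt", "yuv420p", out]
    return extract, rebuild

extract, rebuild = ffmpeg_roundtrip("lumiere.mp4", "frames", "lumiere_4k.mp4")
```

Run the first command, batch-process the PNGs in your upscaler of choice, then run the second.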

Having blabbered on like this, I’m sure someone can now say something more intelligent or something I’ve missed – which I would welcome, fire away!

Now if you’ll excuse me, I want to escape to that 1896 train platform again. Ahhhh…


Watch My Panda Shall Fly play KORG volcas with bits of metal

“Play your KORG volcas with bits of metal instead of your fingers” isn’t one of the Oblique Strategies, but maybe it ought to be.

Sometimes all you need for some musical inspiration is a different approach. So My Panda Shall Fly took a different angle for a session for music video series Homework. Since the volca series use conductive touch for input, a set of metal objects (like coins) will trigger the inputs. Result: some unstable sounds.

I mean, maybe it’s just all part of an influencer campaign for Big Coin, but you never know.

My Panda Shall Fly is a London-based producer covering a wide range of bases:

And he’s done some modular loops. We’ve seen him in these here parts before, too:

Artists share Novation Circuit tips, with Shawn Rudiman and My Panda Shall Fly


What you can learn from Belief Defect’s modular-PC live rig

Belief Defect’s dark, grungy, distorted sounds come from hardware modulars in tandem with Reaktor and Maschine. Here’s how the Raster artists make it work.

Belief Defect is a duo of two established techno artists, minus their usual identities, with a full-length out on Raster (the label formerly known as Raster-Noton). It digresses from techno into aggressively crunchy, left-field sonic tableaux and gothic song constructions. There are some video excerpts from their stunning live debut at Berlin’s Atonal Festival, featuring visuals by OKTAform:

See also: STREAM BELIEF DEFECT’S DECADENT YET DEPRAVED ALBUM AND READ THE STORIES BEHIND THEIR CREEPY SAMPLES

They’ve got analog modulars in the studio and onstage, but a whole lot of the live set’s sounds emanate from computers – and the computer pulls the live show together. That’s no less expressive or performative – on the contrary, the combination with Maschine hardware means easy access to playing percussion live and controlling parameters.

Native Instruments asked me to do an in-depth interview for the new NI Blog, talking about their music. The full interview:

Belief Defect on their Maschine and Reaktor modular rig [blog.native-instruments.com]

They’ve got a diverse setup: modular gear across two studios, Bitwig Studio running some stems (and useful in the studio for interfacing with modulars), a Nord Drum connected via MIDI, and then one laptop running Maschine and Reaktor that ties it all together.

Here are some tips picked up from that interview and reviewing the Reaktor patch at the heart of their album and live rig:

1. Embrace your Dr. Frankenstein.

Patching together something from existing stuff to get what you want can give you a tool that gets used and reused. In this case, Belief Defect used some familiar Reaktor ensemble bits to produce their versatile drum kit and effects combo.

2. Saturator love.

Don’t overlook the simple. A lot of the sound of Belief Defect is clever, economical use of the distinctive sound of delay, reverb, filter, and distortion. The distortion, for instance, is the sound of Reaktor’s built-in Saturator 2 module, which is routed after the filter. I suspect that’s not accidental – by not overcomplicating layers of effects, it frees up the artists to use their ears, focus on their source material, and dial in just the sound they want.

And remember if you’re playing with the excellent Reaktor Blocks, you can always modify a module using these tried-and-true bits and pieces from the Reaktor library.

For more saturation, check out the free download they recommend, which you can drop into your Blocks modular rig, too:

ThatOneKnob Compressor [Reaktor User Library]

3. Check out Molekular for vocals.

Also included with Reaktor 6, Molekular is its own modular multi-effects environment. Belief Defect used it on vocals via the harmonic quantizer. And it’s “free” once you have Reaktor – waiting to be used, or even picked apart.

“Using the harmonic quantizer, and then going crazy and have everything not drift into gibberish was just amazing.”

Maschine clips in the upper left trigger snapshots in Reaktor – simple, effective.

4. Maschine can act as a controller and snapshot recall for Reaktor.

One challenge I suspect for some Reaktor users is, whereas your patching and sound design process is initially all about the mouse and computer, when you play you want to get tangible. Here, Belief Defect have used Reaktor inside Maschine. Then the Maschine pads trigger drum sounds, and the encoders control parameters.

Group A on Maschine houses the Reaktor ensemble. Macro controls are mapped consistently, so that turning the third encoder always has the same result. Then Reaktor snapshots are triggered from clips, so that each track can have presets ready to go.

This is so significant, in fact, that I’ll be looking at this in some future tutorials. (Reaktor also pairs nicely with Ableton Push in the same way; I’ve done that live with Reaktor Blocks rigs. Since what you lose going virtual is hands-on control, this gets it back – and handles that preset recall that analog modulars, cough, don’t exactly do.)

5. Maschine can also act as a bridge to hardware.

On a separate group, Belief Defect control their Nord Drum – this time using MIDI CC messages mapped to encoders. That group is color-coded Nord red (cute).

Belief Defect, the duo, in disguise. (You… might recognize them in the video, if you know them.)

6. Build a committed relationship.

Well, with an instrument, that is. By practicing with that one Reaktor ensemble, they built a coherent sound, tied the album together, and then had room to play – live and in the studio – by really making it an instrument and an extension of themselves. Some of the drum sounds, they point out, have lasted ten years. There’s a parallel on the hardware side – as when they talk about taking their Buchla Music Easel out to work on.

Check out the full interview:

Belief Defect on their Maschine and Reaktor modular rig [blog.native-instruments.com]

Whoa.

Follow Belief Defect on Twitter:
https://twitter.com/Belief_Defect

and Instagram:
https://www.instagram.com/belief_defect/

Reaktor 6

Reaktor User Library

Photo credits: Giovanni Dominice.


Regroover is the AI-powered loop unmixer, now with drag-and-drop clips

You’ve sampled. You’ve sliced. You’ve warped. So what’s left to do with loops? Accusonus have turned to machine learning for a new answer.

Software for years has been able to apply rhythmic analysis (like looking for transients or guessing at tempo) and frequency analysis (filtering by band). The more recent development involves training algorithms on big data sets using machine learning. That’s commonly called “A.I.,” though of course artificial intelligence makes most of us sci-fi fans start to think killer robots and Agent Smith and the like – and this isn’t really anything to do with that. Behind the flashy names, what you’re really dealing with is some heavy-duty mathematics. The “machine learning” element means the software has been trained on pre-existing materials to give you results that are less brute-force, and more what you’d expect musically.

What is exciting about that is the results. With Regroover, what you get is a tool that analyzes audio into “layers” instead of just transients, slices, and bands. And now, it supports drag and drop into and out of the tool. So individual sounds and layers can now be dragged to your host, to an arrangement, or to a sampler – anything that also has drag-and-drop support.
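Accusonus don’t disclose their algorithm, but the textbook technique for this kind of layer separation is non-negative matrix factorization (NMF), which splits a spectrogram into a few spectral templates and their activations over time. Here’s a toy pure-Python sketch of the classic multiplicative-update rule – illustrative only, and almost certainly cruder than whatever Regroover actually does:

```python
import random

def nmf(V, k, iters=200, seed=1):
    """Factor a non-negative matrix V (freq x time) into W (spectral
    templates) and H (activations) so V ~= W @ H, via multiplicative updates."""
    random.seed(seed)
    n, m = len(V), len(V[0])
    W = [[random.random() for _ in range(k)] for _ in range(n)]
    H = [[random.random() for _ in range(m)] for _ in range(k)]

    def matmul(A, B):
        return [[sum(A[i][t] * B[t][j] for t in range(len(B)))
                 for j in range(len(B[0]))] for i in range(len(A))]

    def T(A):  # transpose
        return [list(col) for col in zip(*A)]

    eps = 1e-9
    for _ in range(iters):
        WH = matmul(W, H)
        # H <- H * (W^T V) / (W^T W H)
        num, den = matmul(T(W), V), matmul(T(W), WH)
        H = [[H[i][j] * num[i][j] / (den[i][j] + eps)
              for j in range(m)] for i in range(k)]
        WH = matmul(W, H)
        # W <- W * (V H^T) / (W H H^T)
        num, den = matmul(V, T(H)), matmul(WH, T(H))
        W = [[W[i][j] * num[i][j] / (den[i][j] + eps)
              for j in range(k)] for i in range(n)]
    return W, H

# two "sounds" that overlap in time separate into two layers
V = [[5, 0, 5], [0, 3, 3]]
W, H = nmf(V, k=2)
```

Each column of W is a “layer” timbre; each row of H says when that layer sounds – which is roughly the layers-plus-activations picture Regroover presents.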

Add Regroover to Ableton Live, for instance, and it’s a bit like having a new way to process sounds, on top of the warping techniques you’ve had for a few years. Instead of working with the whole stereo loop at once, you now are presented with various layers – which might separate out a melodic part, or even get as precise as specific pieces of percussion. It’s using time and frequency and that machine learning all at once.

Regroover joins a handful of tools providing this sort of “unmixing” capability, with a particular focus on percussive loops. If you didn’t get exactly the isolation you wanted, you can then adjust the density of the layers and run the algorithm again. Or for additional precision, you can select a portion and split the layers based on particular material.

Sometimes the “mistakes” are as interesting as the results you’re looking for: you get the chance to unearth portions of a loop you may not have even heard before.

Around this layers interface, the developers have wrapped various tools for mixing, processing, and slicing up the resulting materials. You’re given an interface that lets you then adjust the level and panning (both mid/side and left/right) of each layer, which lets you emphasize or de-emphasize parts of the loop. And you can route layers to effects, either in Regroover or by sending to external buses to your host.

You can just stop there, or you can take portions of a clip – individual layers, bits of time – and divide them up into pads. There’s a built-in drum pad sampler, but now with version 1.7, you can also drag and drop out to your host. In Live or Maschine, to give two Berlin software examples, that means you can then use your favorite sampling tools to take things further.

This could mean everything from minor surgery on a clip to isolating individual parts of the groove or even individual percussion parts.

Sometimes, the simple tricks Regroover can pull are actually the most appealing. So while you could do some fancy sampling or kick drum replacement (takes one minute) or something like that, you can also just mess with polyrhythms inside a loop by dividing into layers, and changing length:

Production guru Thavius Beck has a great tutorial explaining the whole thing from a creative standpoint:

I’ve been playing with Regroover for a few weeks. It definitely takes a little getting into, because it is different – and you’re hearing different results than you would with other tools. Yes, there are other remixing and unmixing tools out there, too – and this isn’t quite that. It’s really geared for percussion and loops specifically, and the interface makes it a kind of AI-mad sampling drum machine loop re-processor.

The most important expectation to adjust is, this won’t sound quite like what you’ve heard before. Remember when you first played with warping in a tool like Live, ReCycle, or Acid? (Old timers, anyone?) It has that feeling.

There are some mathematical and perceptual realities of sound that you’re going to hit up against. You’re pulling out elements of a single audio file, which means because your ears are sensitive, you’ll start to hear the sound as less natural as you process it. The quality of the source material will matter – to the point that Accusonus are even producing their own libraries. On the other hand, that opens up some new possibilities. For one, some of the digital-sounding timbres that result have aesthetic potential all their own.

Or, you can look at this as a way not to just extract sound itself, but groove – because the results are very precise about rhythmic elements inside a loop.

CDM are teaming up with Accusonus to demonstrate how this works and give you some tips, so we’ll check in again with that.

As I see it, you get a few major use cases.

People who want to mess with loop libraries. If you’ve got loops that are stereo files, this lets you modify them in ways subtle or radical and make them your own – a bit more like what you can do with MIDI patterns.

A remix tool. Well, obviously. This gets really interesting, though, from a number of angles. There are some new options when someone says “oops, sorry, I have the stereo mix and no stems.” There are new ways of treating the stems you have. And there are new ways of treating additional materials outside the mix. (All of this holds whether it’s your music or someone else’s.)

A way to process your own materials. I’m fond of quoting something I overheard about French cooking once – that the kitchen was all about doing something to an ingredient, then doing something else. So if you’re in the middle of a project and want to take some of the material a different direction, this is a new way of doing that. And I think in electronic music, where we’re constantly getting away from the obvious solution, that’s compelling.

A groove extraction tool. Frankly, this works a whole lot better than the groove tools in conventional DAWs, because you can pull out elements of a loop, then use that either as a trigger or work with the audio directly.

An “alternative” sampling drum machine. Since you can pull out individual bits, you can make new drum kits out of sounds. And that includes —

Creative abuse. Regroover is really designed for drum loops – both in the interface and the way in which the machine learning algorithms were trained and adapted. But that doesn’t mean you have to follow the rules. Dropping any AIFF or WAV file will work, so you can take field recordings or whatever you can get your hands on and see what happens. There are some strange perceptions you may have of the results, but that’s the fun.
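On the groove-extraction point above: at its simplest, pulling the groove out of a loop is onset detection – find where the short-time energy jumps, and keep those timings as triggers. A toy sketch (again illustrative, not Regroover’s method):

```python
def extract_groove(samples, frame=4, threshold=0.5):
    """Toy groove extraction: compute short-time energy per frame and
    return frame indices where energy jumps past a threshold, usable
    as trigger times."""
    energies = [
        sum(s * s for s in samples[i:i + frame]) / frame
        for i in range(0, len(samples) - frame + 1, frame)
    ]
    return [
        i for i in range(1, len(energies))
        if energies[i] - energies[i - 1] > threshold
    ]

# silence, hit, silence, hit -> two trigger points
audio = [0, 0, 0, 0, 1, 1, 1, 1, 0, 0, 0, 0, 1, 1, 1, 1]
onsets = extract_groove(audio)   # [1, 3]
```

Feed those trigger times to a sampler, and you’ve kept the rhythm of the loop while swapping the sounds.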

Next week, we’ll have a tutorial and a special giveaway so you can give this a try.

Regroover is available as a free trial, a US$99 Essentials version, or a $219 Pro version.

Here’s what’s new in 1.7:

A complete set of tutorials is available:

Product site:

Accusonus Regroover


Get your notes right on Novation Circuit … what tutorials do you want?

Novation last week released a new set of tutorials for their Circuit. These cover scales, melodies, and chords. Those are interesting not just for those with limited skills on other instruments, but also, ironically, as a way to get away from your usual habits if you are used to something like a piano.

The tutorials are great, but this raises a question. Which tutorials would you most want to see – what topics, and what hardware?

There’s of course way more gear out there than you could ever reasonably cover. But while some material applies to everything (music theory, for instance, or the principles of mixing), some technique really is specific to particular hardware.

Reader stats on the articles we’ve had on Circuit tell me that you agree with us – this is a sort of “people’s drum machine,” thanks to its simplicity, low cost, and a steady stream of updates. (As for the latter, I imagine firmware updates will eventually hit the limits of the hardware, though users should continue to make interesting sounds and so on.)

Now, Novation are lagging a bit – documentation is only just complete on the Circuit Mono Station, which is by some measures more complex than the original Circuit. (At least that’s true considering what’s available on the hardware itself, before you get into Circuit’s editor.)

We could do some research / survey on this, of course, but prior to that I’m curious to open this up to discussion.

Oh, and let us know how you’re working with the Circuit, as I know a lot of you are making fairly heavy use of it.

Back to the human side of this, it’s worth revisiting this film CDM co-produced with Novation. Shawn, I want to hear what you’re up to these days with the Circuit (and everything else).

I love Shawn’s idea of “a lot of Jedis.” That’s why it’s actually exciting that more people are developing chops – and a reason to do good tutorials and share knowledge. A golden age of Jedis would be great for music. No one would ever say, “I wish there weren’t so many Jedis – it’d be better if there were fewer.”


Andrew knows how to YouTube, makes fidget spinner music

It happens. You get older. Slower. You wake up one day, and you’re definitely not a YouTube star with your own Patreon account and free sound pack downloads to go with it. You didn’t even figure out that there was a big trend involving something called fidget toys, “spinners” and “cubes” that kids use to … fidget … with. And already that trend is big enough that someone is making music with them.

This story might be about me. It might be about you. But it’s okay – because Andrew Huang is there. His followers are telling him about the fidget toys. He’s turning that trend into sweet, sweet music.

You can fake it, too. You can download his Ableton pack, and show it off to your friends, then roll your eyes in disgust when they say they’ve no idea what any of this stuff is – as if. Youth restored.

Or you can pick up some tips. (Basically, use some EQ to separate pitched sound from noise, then use Sampler/Simpler in Ableton or something similar to play these up and down the keyboard. Now, this is 80s sampler stuff. I was even there for the 80s. Advantage: Gen X and above.) And maybe you’re on top of the next Internet meme. Better watch closely, though – don’t flinch just because the President of the United States is tweeting. Stay on your game.
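That parenthetical really is 80s-sampler stuff: playing a sample up and down the keyboard just means varying the playback rate by 2^(semitones/12), so pitched-up notes come out shorter. A toy sketch of that classic repitch behavior:

```python
def repitch(samples, semitones):
    """Classic sampler repitch: step through the sample at a rate of
    2**(semitones/12). Pitching up shortens the sound - no time-stretch,
    just like 80s hardware."""
    rate = 2 ** (semitones / 12)
    out, pos = [], 0.0
    while pos < len(samples):
        out.append(samples[int(pos)])  # nearest-neighbor read keeps it lo-fi
        pos += rate
    return out

tone = list(range(100))
up_an_octave = repitch(tone, 12)   # half the length, double the pitch
```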

Or just sit back with a cool drink and watch the YouTube. Life is good. We live in the future.

CDM will let you know, like, eventually. We did tell you about the aliens and unicorns.

Andrew’s channel:
https://www.youtube.com/channel/UCdcemy56JtVTrsFIOoqvV8g

The post Andrew knows how to YouTube, makes fidget spinner music appeared first on CDM Create Digital Music.

How to ditch the computer and use Octatrack for backing tracks

Elektron’s Octatrack has been around since 2010, with the Digitakt now about to launch. But it remains a bedrock of a lot of live rigs. And there’s something that’s still special about it. It’s a sampler, yes, but with eight tracks and a built-in sequencer. It’s got a deep effects section and loads of I/O. In other words, it’s a digital box that takes on many of the functions that are the reason to lug along a laptop. It does the job of playing tracks, sequences, and effects in an improvisatory way – whether closer to live playing or DJing.

The trick is understanding how to do that. And while loading up tracks and pressing play may sound boring, that could free you up to actually experiment with effects and transitions over top rather than just the busywork of reinventing the same material. (That’s especially important if you want to play the stuff people expect from your record.)

Cuckoo continues his terrific series of tutorial videos with a comprehensive starter’s guide to doing just that.

The first few minutes are just the basics – the backing track bit. But about nine minutes in, you start to get to the interesting stuff. That includes making the whole setup playable, using effects like beat repeat creatively, and employing the Octatrack’s unique onboard crossfader as “scene slider.”

Of course, the other advantage of automating some of this stuff is that it allows the Octatrack to be effective at the center of a rig with other gear – or even that computer, in fact.

Have a look:

If you want more, he’s got a whole series of videos on how to use the Octatrack – and some live jams of his own. It winds up being somehow better than even Elektron’s documentation – but I think it will always be important to have tutorial content from artists’ perspectives.

Great stuff – thanks!

The post How to ditch the computer and use Octatrack for backing tracks appeared first on CDM Create Digital Music.

Watch a half hour documentary on the sound of Star Wars Rogue One

For lovers of sound design – cinematic or otherwise – Star Wars is always good reason to nerd out. But Rogue One is something different, as the first film to be a standalone or spinoff. On the music side, it meant a new composer who wasn’t John Williams (Michael Giacchino). But perhaps the less known story is that sound, too, got a new direction.

Filling the shoes of Ben Burtt is no easy task. There’s probably no Hollywood sound creator better known than Burtt. And as with any Star Wars film, you have the unique challenge of trying to do foley work for things that don’t exist in the real world.

But here’s where Star Wars has given us a legacy. Even though computer tech gives you the theoretical ability to produce any sound you can imagine, that doesn’t mean it’s the easiest or most artistically satisfying route to making a sound. And the unique talent of Skywalker Sound for finding sounds in the real world is one that can impact just about anyone working in sound – whether you’re imagining a scifi robot or just an interesting drum kit.

All of this means that Sound on Sound have given us a terrific watch. They spend half an hour speaking to the men and women who gave us Rogue One sound.

So what you get are details like how to do Foley for Stormtroopers, and how a combination of real and processed, recorded and digitally modified door sounds gives you a droid.

“Believability” is an interesting quality in musical sound, too, I think – so in music, making something “gritty” and real on one hand, or imaginary, fanciful, and even impossible on the other, gives you a spectrum of ways of playing with the mind’s perception and memory.

Also, it bears saying that Sound on Sound remains a pillar of sound recording and music technology journalism. It’s simply terrific that they’re going out and doing this, and not just product vendors. Kudos to one of the better outlets in the business. Oh yeah, and you know we’re totally jealous y’all got to go do this! (It’s relevant to electronic music, too – this film inspired a lot of us, and it dominated my social media chats and feeds over Christmastime!)

Description:

Rogue One’s gritty, war film aesthetic is a real departure for the Star Wars franchise, and this realism is borne out in the sound design. We visited the iconic Skywalker Sound to talk to Supervising Sound Editors Christopher Scarabosio and Matthew Wood about world creation, making sci-fi weapons sound realistic, and how some unexpected sampling brought K-2SO — the film’s sardonic and loveable droid — to life!

We also discover the secrets of Foley artist Ronni Brown and Foley Mixer Frank Aglieri-Rinella who reveal how they made the Rogue One stormtroopers sound more menacing.

To hear the final effect of all the techniques described in this feature, you can order Rogue One now in Digital HD or Blu-Ray/DVD: http://www.starwars.com/rogue-one

The post Watch a half hour documentary on the sound of Star Wars Rogue One appeared first on CDM Create Digital Music.

Hear the epic live set Skinnerbox played at Fusion Festival

We have the technology. We have the capability to play live sets on mainstages. And for a brilliant example of that, look no further than the frenetic, exquisitely hyperactive acid performances of Skinnerbox. Their set at Fusion Festival from this weekend demonstrates that you can command massive mobs of dance lovers outdoors with live sets, too. And maybe you thought such things were confined to chin-scratching handfuls of nerds.

Skinnerbox is the Berlin-based duo of Iftah Gabbai and Olaf Hilgenfeld, who join together to make sample-laden live performances mixing acid techno spiced up with grooves. Last week, they dazzled the outdoor throngs at Fusion’s legendary Turmbühne, the Mad Max-styled open air megaplex.

Fusion Festival’s organizers actually explicitly discourage documentation. The event, a kind of extended afterhours open air sprawling over a Soviet airfield, is best remembered like a dream anyway. But I think it is important to share the musical artists from that event. They span seemingly endless stages, from enormous open-air arenas with set pieces and special effects to intimate tents and club-like indoor spaces.

And it’s important in particular to appreciate what happens when live sets do hit the bigger stages, which even at Fusion are awash with mostly CDJ sets. Live performance of dance tracks continues to be a comparative minority. And on big stages, the throngs may not know what is producing what they’re hearing (being occupied instead with dancing and partying, natch). So spreading that information separately is a reasonable solution.

Indeed, the possibilities of live music are so little understood that Skinnerbox have scrawled a notice on their SoundCloud banner explaining there are no track IDs, because they’re not playing tracks. (I actually hear this confusion a lot with live sets.)

Here’s what the whole set sounds like:

I talked to Iftah a bit about playing. The rig:

Olaf on Minimoog model D
Ableton Live with effects for the Minimoog
Iftah on his homemade setup – an Arduino-based controller, two monomes, and custom Max for Live patches for sequencing and sample slicing (quite a lot of live sample manipulation going on).

Iftah notes the inspiration of Brian Crabtree’s monome patches, namely mlr and mythologic.

Skinnerbox aren’t just championing live performance in their shows; they’re also sharing tools for it. Their 2009 sbx 2049 drum machine was one of the first collaborations between Ableton and artists. In 2014, they released the Time & Timbre drum machine, which I think remains one of the best examples of how a computer drum machine can aid live performance and generate ideas. Even with so many Max for Live creations out there, this is one you should definitely try.
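I won’t claim to know Time & Timbre’s internals, but the general idea – a drum machine that generates variations rather than replaying a fixed pattern – can be sketched with per-step trigger probabilities. A toy Python example (the step count and probability defaults here are my own, not the device’s):

```python
import random

def generate_pattern(steps=16, probabilities=None, seed=None):
    """Probability-driven step sequencer: each step fires with its own
    likelihood, so every pass through the pattern can differ slightly --
    one way a computer drum machine can 'generate ideas' live."""
    rng = random.Random(seed)
    if probabilities is None:
        # Assumed defaults: solid four-on-the-floor downbeats,
        # sparse ghost notes everywhere else.
        probabilities = [1.0 if i % 4 == 0 else 0.25 for i in range(steps)]
    return [1 if rng.random() < p else 0 for p in probabilities]

# Downbeats (probability 1.0) always land; the in-between steps vary
# from pass to pass, keeping the groove alive without losing the pulse.
print(generate_pattern(seed=42))
```

The appeal for live playing is obvious: the pulse stays locked while the details keep surprising you, which is exactly the kind of idea-generation the paragraph above is talking about.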

Speaking of Time & Timbre, they recently showed how it can be combined with analog modular via the CV LFO now included in 2.0 (have to cover all of this in more detail later):

For more background on their live sets, here’s a session recorded at the pool, with monome meeting Minimoog:

the bali sessions from skinnerbox promo videos on Vimeo.

And from 2015, Skinnerbox (they’ve been Fusion regulars):

skinnerbox live fusion festival 2015 from skinnerbox promo videos on Vimeo.

And last summer’s Plötzlich am Meer:

They’re even crazy enough to play live … for twelve hours.

And yes, I love the monologue in the 2016 Fusion Festival set, which seemed to have a welcome message for attendees (cue to about 45:00): “Happiness the brand is not happiness … Smile at a stranger and mean it; lose your s***”

Finally, if you want to vicariously live Fusion more (or relive it), the fine folks of German-language blog Berlin ist Techno have put together a playlist with all the sets they’ve found uploaded so far:

Now… back to plotting my next live set. And… sleeping after Fusion.

The post Hear the epic live set Skinnerbox played at Fusion Festival appeared first on CDM Create Digital Music.