You’ve heard Justin Bieber mangled into gorgeous ambient cascades of sound. Now, you can experience the magic of PaulStretch as a free plug-in.
It may give you that “A-ha” moment in ambient music. You know:
The developer has various warnings about using this plug-in, which only make me want to use it more. (Hey, no latency reporting to the DAW? Something weird in Cubase! No manual? Who cares! Let’s give it a go – first I’m going to run with scissors to grab a beer, which I’ll drink at my laptop!)
The plugin is only suitable for radical transformation of sounds. It is not suitable at all for subtle time corrections and such. Ambient music and sound design are probably the most suitable use cases.
You had me at radical / not subtle.
Okay… yeah, this was probably meant for me:
You can use it two ways: either load an audio file, and just run PaulStretch in your DAW, or use it as a live processor on inputs. (That’s weird, given what it does – hey, there was some latency. Like… a whole lot of latency.)
It’s on Mac and Windows, but the code is available and a Linux build is “likely.”
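For the curious, the published PaulStretch idea is surprisingly simple at heart: grab overlapping windows of audio, keep the FFT magnitudes but scramble the phases, and overlap-add the result at a much larger hop than you read with. Here’s a minimal sketch in Python with NumPy – the function name and parameters are mine, and this is the general algorithm, not the plug-in’s actual code:

```python
import numpy as np

def paulstretch(samples, stretch=8.0, window_size=4096, seed=0):
    """Extreme time stretch, PaulStretch-style: windowed frames,
    randomized FFT phases, overlap-add at a larger output hop."""
    rng = np.random.default_rng(seed)
    window = np.hanning(window_size)
    out_hop = window_size // 2            # fixed hop in the output
    in_hop = out_hop / stretch            # (fractional) hop in the input
    n_frames = int((len(samples) - window_size) / in_hop) + 1
    out = np.zeros(out_hop * (n_frames - 1) + window_size)
    for i in range(n_frames):
        start = int(i * in_hop)
        frame = samples[start:start + window_size] * window
        spectrum = np.fft.rfft(frame)
        # keep magnitudes, randomize phases -> smeared, ambient texture
        phases = rng.uniform(0, 2 * np.pi, len(spectrum))
        frame = np.fft.irfft(np.abs(spectrum) * np.exp(1j * phases))
        out[i * out_hop : i * out_hop + window_size] += frame * window
    peak = np.max(np.abs(out))
    return out / peak if peak > 0 else out
```

The phase randomization is what gives the characteristic smeared, shimmering texture – and also why this is only good for radical transformations, exactly as the developer warns.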
Maschine’s Audio module has arrived, with looping and time stretching. And that makes this the perfect time to look at some new ways of playing Maschine.
Maschine has had a year full of growth – new features, new ways of working from the community. As of Friday (well, after some glitches with the update server), that also includes an update that delivers a feature Maschine users have been asking about the longest: pitch-independent time stretching and looping.
The bad news is, this isn’t integrated with Maschine’s existing Sampler module. The good news, perhaps, is that this means the new module is focused on its own set of functionality, and won’t disrupt what’s already there. (I’m going to play around with it a while longer to reach my own conclusions on how I feel about this decision, but it certainly does keep each module cleaner and simpler.)
I’ve seen a lot of people posting the sentiment lately that music making isn’t just about updating to the latest-and-greatest — and I certainly agree with that, that’s fair. But some updates do come from real user needs and remove technological barriers to things you want to do.
On the human side of the equation, of course, you’ve got all the ways people pick up an instrument and make it their own. And the Maschine community this year has been astounding – all the reviewers, users, experts, trainers, and yes even the Maschine team themselves.
So, for starters, here’s a great demonstration of how that Audio Module works:
(Ha, that musical example is a bit wacky, but… you can of course apply this to whatever music or genre you want; I’ve done some really experimental stuff on Maschine that I suspect no one would guess was that tool)
From the same creator (“loopop”), here’s a unique take on how to use Maschine Jam, the clip launching grid + touch fader hardware for Maschine, alongside the traditional Maschine hardware. He uses Jam as a “virtual conductor,” a mixer for different parts, and even an easy way to strum instruments. It’s a reminder that it’s best to think of Maschine as a live interface, not something specific to a particular genre. And the result is something different than what I’ve seen from other interfaces (like Ableton Push), demonstrating how many different directions live interfaces for computers can go.
Maschine has also worked well as a hub for other instruments – hardware and software alike. It can be a trigger for snapshots in Reaktor, as we saw in our run-down of Belief Defect. (I’m reprogramming my own Reaktor-based setup, so I’ll do a more complete tutorial soon.)
And you can use snapshots and morphing with hardware, as loopop shows in this video. This was initially a Jam feature, but it has extended to other hardware controllers.
(I just played right before Grebenstein Friday night, and he was using a Maschine MK1 alongside the Vermona as his live rig – so there are even more possibilities with this setup. It blew me away; it was really tight.)
This next example is worth another story in itself – I’m a huge fan of Reactable’s recent, overlooked apps for sequencing and drum pattern creation. The latter, SNAP, has integration with Maschine Jam. The upshot: instead of repeating the same old loop over and over until you’re bored, you can work in a fluid, live way to create more human, varying patterns. Watch – the Jam stuff kicks in part of the way through:
Stepping outside of one genre can often help you to better understand techniques and musicality. So here’s DDS with a great series on Maschine from the perspective of a hip-hop producer. (If you make hip hop-influenced music, that’s already relevant – but even if not, listen to the producers in the genre that shaped so much of how we think about this hardware in the first place!)
You can already sample and slice with Native Instruments’ groove production instrument. But soon, you’ll change loops’ pitch and time in real-time, too.
Maschine has been guided by focusing on certain ways of working and ignoring others. The hardware/software combination began with an MPC-style sampling workflow and drum machine features, and it’s added from there – eventually gaining more elaborate pattern generation and editing, drum synths, more sound tools, and deeper arrangement powers.
But hang on – that’s not really an excuse for not doing time stretching. Real-time time stretching has been a feature on many similar hardware and software tools.
Now, it’s sort of nice that Maschine isn’t Ableton Live. In fact, it’s so nice that the combination of the two is one of the most common use cases for Maschine. But it’s so expected that you’d be able to work with changing pitch and time independently with loops, that it’s almost distracting when it isn’t there.
So, Maschine 2.7 adds that functionality. In addition to the existing Sampler, which lets you trigger sounds and loops and slice audio into chunks, there’s now an Audio plug-in device you can add to your projects. Audio will play loops in time with the project, and has the ability to time stretch in real-time.
The features we’re getting:
Real-time time stretching keeps loops in time with a project, without changing pitch
Loop hot swapping lets you change loops as you play – apparently without missing a beat, so you can audition lots of different loops or trigger different loops on the fly
Gate Mode lets you play a loop just by hitting a pad
Melodic re-pitching lets you change pitch in Gate Mode of a whole loop or portion of a loop, just by playing pads
Gate Mode: trigger loops, change pitch, from pads.
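To see why stretching without re-pitching matters, compare the two ways of locking a loop to the project tempo: elastic stretching changes only the duration, while plain resampling changes duration and pitch together. The arithmetic is easy to sketch (this helper is my own illustration, not NI’s math):

```python
import math

def sync_loop(loop_bpm, project_bpm):
    """Two ways to lock a loop to the transport:
    - elastic time stretch: duration scales, pitch stays put
    - plain resampling: duration scales AND pitch shifts
    Returns (stretch_factor, resampling_pitch_shift_in_semitones)."""
    factor = loop_bpm / project_bpm                      # duration ratio to fit the grid
    semitones = 12 * math.log2(project_bpm / loop_bpm)   # the pitch cost of resampling
    return factor, semitones
```

So a 120 BPM loop dropped into a 60 BPM project has to play twice as long – and if you simply resample it, it also lands a full octave lower. That octave is exactly what the Audio module’s real-time stretching avoids.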
The combination of pads and Gate Mode sounds really performer-friendly, and different from what you see elsewhere. That’s crucial: since you can already do a lot of this in other tools, you need some reason to do it in Maschine.
I’m eager to get my hands on this and test it. It’s funny, I had some samples I wanted to play around with in the studio just before I saw this, and decided not to use Maschine because, well, this was missing. But because the pads on the Maschine MK3 hardware feel really, really great, and because sometimes you want to get hands-on with material using something other than the mouse, I’m intrigued by this. I find this sort of way of working can often generate different ideas. I’m sure a lot of you feel the same way. Actually, I know you do, because you’ve been yelling at NI to do this since the start. It looks like the wait might pay off with a unique, reflective implementation.
We’ll know soon enough – stay tuned.
The old way of doing things: the Sampling workflow:
Akai Professional and Retronyms have a free update to iMPC Pro – their MPC workstation app for iPad – that adds two major new features in version 1.3: loop slicing and time stretching.
Max has for years been a favored choice of musicians and artists wanting to make their own tools for their work. But it’s been on a journey over more recent years to make that environment ever more accessible to a wider audience of people.
Loads of pitch correction and harmonization and pitch effects, straight out of the box.
Use Max for Live patches directly – even without a copy of Ableton.
Use video and audio media directly, without having to make your own player.
Use VST and AU plug-ins seamlessly.
Make video and 3D patches more quickly, with physics and easy playback, all taking advantage of hardware acceleration on your GPU.
And what’s new in detail, as well as why it matters:
There’s a new UI. You’ll notice this first – gray toolbars ring the window. Somehow, they do so without looking drab. Objects are on the top, where they’ve been since the beginning, but now media files (like audio) are on the left, view options are on the bottom, and the inspector and help and other contextual information is on the right. (That’s all customizable, but so far everyone I’ve talked to has been happy with the default.) Max also recalls your work environment, so you can pick up where you left off – and it recovers from crashes, too.
My favorite feature: you can theme UIs with consistent styles.
You can browse and collect files easily. Clearly inspired by browsers like the one in Ableton Live, there’s a file browser for quick access to your content, and you can collect files in folders and the like from anywhere and drop them in. This isn’t the first version of Max with such a feature, but it’s the first one that makes managing files effortless. And you can tag and search in detail.
Reuse your patches as snippets. Got a set of objects you reuse a lot – like, for instance, one that plays back audio or manages a list? Select it, save it as a snippet, and then find it in that new browser. There are lots of example snippets, too, interestingly pulled directly from the Max help documentation – so no more will you need to head to the help documentation and recreate what’s there.
Elastic audio – manipulate audio in pitch and time, separately. The Max and Pd family has been able to manipulate pitch and time independently for as long as it has had audio capabilities – provided you do the patching to make that work yourself. What’s changed is that it’s now built in: audio objects support these features out of the box without patching. There’s a new higher-quality “stretch~” object that sounds the way we expect modern software to sound. And all of this interacts with a global transport.
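The usual patching approach to decoupling time from pitch is granular: replay short windowed grains at the original rate (so pitch is unchanged) while advancing the read position through the source at a different rate (so duration changes). A rough NumPy sketch of that general technique – not how stretch~ works internally, and much lower quality:

```python
import numpy as np

def ola_stretch(x, factor, grain=1024):
    """Pitch-preserving time stretch via windowed overlap-add:
    grains play back at the original sample rate, but the read
    position advances 1/factor as fast, scaling duration by `factor`."""
    win = np.hanning(grain)
    out_hop = grain // 2                 # fixed spacing in the output
    in_hop = out_hop / factor            # slower/faster advance in the source
    n = int((len(x) - grain) / in_hop) + 1
    out = np.zeros(out_hop * (n - 1) + grain)
    for i in range(n):
        s = int(i * in_hop)
        out[i * out_hop : i * out_hop + grain] += x[s:s + grain] * win
    return out
```

Naive overlap-add like this produces audible phasing on real material; the point of having it built into Max is precisely that you no longer have to settle for a patch like this one.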
This is of course useful to those making Max for Live creations for Ableton Live, as it means you can build in audio manipulation and everything will sync to a Live set. But it means something else: you might wind up building your own performance tool without even touching Ableton Live.
There’s a bunch of modular stuff included. Can’t afford a big rack of modulars? No room for hardware and cables? The Beap modules are now included, which let you combine software modules in much the same way you would physical modules.
Then if you do use hardware modulars, you can output the same signal via a compatible audio interface and control that.
Use media. Media files now have their own players, with clip triggering and playlist creation. Making a VJ tool, for instance, should now be stunningly easy, and working with audio playback (in combination with elastic audio) ridiculously straightforward.
Use plug-ins. VST, AU, straight out of the box, with the ability to customize which parameters you see. Max is now a powerful plug-in host – made more so by its ability to save and recall patcher and plug-in parameters in “Snapshots.”
Use Max for Live devices directly. No copy-and-paste – you can now open Max for Live patches even without owning a copy of Ableton Live. That’s another reason patchers may wind up just building their own performance environment.
To get you started, a bunch of classic Max for Live devices are included (like Pluggo), plus a whole mess of pitch shifters and players, vocoders, and elastic audio instrument/effects.
AutoTune the patch. retune~ is an intonation / harmonization object – what’s known colloquially to the rest of the world as “AutoTuning” (apologies to Antares for abusing their trademark). T-Pain, Max/MSP edition? There’s also a correction/harmonization device for Max for Live.
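Hard pitch correction of this kind is mostly one line of math: measure the incoming frequency, round its distance from a reference pitch to whole semitones, and resynthesize at the quantized frequency. A sketch of that snap (my own helper, not retune~’s actual algorithm – the famous robotic effect comes from applying it instantly, with no glide):

```python
import math

def snap_to_semitone(freq, a4=440.0):
    """Quantize a detected frequency (Hz) to the nearest
    equal-tempered semitone relative to A4."""
    if freq <= 0:
        raise ValueError("frequency must be positive")
    n = round(12 * math.log2(freq / a4))   # semitones from A4, rounded
    return a4 * 2 ** (n / 12.0)
```

A real corrector also needs pitch detection and artifact-free resynthesis, which is where the hard work (and the Antares patents) live.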
Use the Web. You can now embed the open source Chromium browser engine – the same one behind Google Chrome – inside your patches, and use data from the Internet.
This is the version of Max visual users have been waiting for. I’ve saved some of the best for last. Jitter has long been the somewhat ugly stepsister of the audio stuff in Max, and it’s lately been showing its age. No more. Max 7 looks like it’s worth the wait. This is at last a version of Max that’s fully hardware-accelerated for video and easy to use.
Video playback and capture are rendered directly to 3D hardware, rather than getting bottlenecked on the CPU – and you can decode on your GPU. (Mac only for now, but Windows support is coming.)
A single jit.world object consolidates the stuff you need to output to the display – complete with physics and OpenGL 3D/texture support.
Video input and output syncs automatically, rather than requiring separate metro objects.
Make your own objects in gen.
Use a massively-enhanced collection of live video modules (which interface with those modular objects, too).
Patch faster. Keyboard shortcuts quickly create and connect objects (at last!), you can zoom around the cursor with the ‘z’ key, and quickly apply transforms to patches.
No more runtime. The unlicensed version of Max opens patches and edits them; it just doesn’t save.
Available now. 30-day free trial, upgrades are US$149, and you can now subscribe for $9.99 a month.
The only thing you’ll be waiting on for a little while is, unfortunately, full Ableton Live support; no timeline on when Ableton users will see Max 7. I know they’ll want it with all those elastic audio features.
I don’t think there’s any doubt: Max is now the patching environment to beat, by far. Nothing else is anywhere close to this broad, and now nothing comes anywhere close to being this usable.
That doesn’t mean I think other environments should try to be Max.
For most music users, the big rivals remain Native Instruments’ Reaktor and Max’s own cousin Pd, and there’s still room for them.
Reaktor may be a lot narrower than Max, but it’s also still a terrific tool if you just want to build an instrument or effect quickly. It also has some rather nice granular tools. It is looking long in the tooth, though, and I’d like to see Native Instruments treat this release more seriously. It’s hard to put work and time into Reaktor patching knowing that Native Instruments won’t provide you any sort of runtime to share your work – anyone wanting to use it has to go buy Reaktor or Komplete. And good as those instrument/effects tools are, Reaktor’s media management for samples is appallingly bad. In fact, until Reaktor fixes that area, it’s hard not to imagine some people jumping ship for Max – especially with built-in plug-in support.
Pure Data (Pd) is a different animal. Max 7 is another reminder of why we need Pd. Even though both originally leapt from the mind of Miller Puckette, they’ve evolved into radically distinct beasts. It’s a bit like coming back to the Galápagos Islands after twenty years and finding one of your turtles has evolved into a space dragon while the other one became a washer/dryer. It isn’t just that Pd is free and open source software, it’s that it’s engineered in a way that makes that an advantage. Pd is tiny, even as Max is huge. That makes Pd well-suited to embedding in apps and games, mobile and desktop, software and hardware, where Max can do nothing of the sort. Max 7 is also, however, a painful reminder that Pd needs a new UI. Maybe it should also be minimal (a Web-powered UI would sure make sense). But the time is now. (And a desktop Pd really wants plug-in hosting, but that’s another story.) My dream at the moment would certainly be that each becomes effortless enough to use that I can spend some proper quality time in both.
There are, of course, many other ways to solve problems in code and patchers, so I won’t go into all of them. But the upshot is, we live in a wonderful time for DIY creative tools. It doesn’t have to be a time drain, and it doesn’t have to be painful.
It turns out that software about nothing can be for more or less anyone.
We’ll look more in detail at Max soon; I’ve got some interviews lined up for when I’m back around Berlin.
This video shows the work in progress that is live synthesis in Photosounder. At the moment the latest (unreleased) build of Photosounder can synthesise directly what’s under the mouse cursor, meaning you can play a sound at any rate without changing the pitch.
In the future it will do a lot more using the same synthesis technique. The first public release containing this feature will be out in a few weeks. Also, at some points the mouse cursor is out of sync with what you hear – that’s down to the video recording software, not Photosounder.
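The reason you can play at any rate without changing pitch is that each image column is a frozen spectral frame: render it with a bank of oscillators, and scrubbing just changes which frame you’re rendering. Here’s a guess at that general approach in NumPy – an illustration of the idea, not Photosounder’s actual engine (function name, ranges, and mapping are all mine):

```python
import numpy as np

def column_to_audio(column, duration=0.05, sr=44100, fmin=27.5, fmax=8000.0):
    """Render one spectrogram/image column as audio with a sine bank.
    Row index maps to frequency on a log scale (low rows = low pitches),
    brightness maps to amplitude. Each column is a snapshot in frequency,
    so playback rate and pitch are independent by construction."""
    n = len(column)
    t = np.arange(int(duration * sr)) / sr
    freqs = fmin * (fmax / fmin) ** (np.arange(n) / max(n - 1, 1))
    out = np.zeros_like(t)
    for amp, f in zip(column, freqs):
        if amp > 0:
            out += amp * np.sin(2 * np.pi * f * t)
    peak = np.max(np.abs(out))
    return out / peak if peak > 0 else out
```

Scrubbing the cursor then amounts to calling this per column at whatever rate you like, crossfading between frames.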