Harder software: Reason Rack Extensions, in actual hardware racks

Once upon a time, Propellerhead ran an ad showing a bunch of hardware synths in a trash bin to make a point. This time, we get the opposite – a KORG Polysix for Reason running back in hardware.

By now, these arguments about analog versus digital, software versus hardware are all surely irrelevant to music making. But recent developments go one step further: they produce an environment in which inventors and developers no longer have to care. The vision is, make your cool effects and synthesis code, then freely choose to run them inside a host (like Reason), inside hardware, or even on the Web.

Propellerhead showed me some of the possibilities of their Rack Extension technology when I visited them this winter and talked to their developers.

You can actually try the Web side of this right now – Europa for Reason runs in a browser. It’s not just a simulation or a demo; it’s the complete Rack Extension. That clearly offers a new take on try-before-you-buy, and allows new possibilities in teaching and documentation – all without threatening the sales model for the software:

https://www.propellerheads.com/europa

A browser may be a strange place to experience the possibility of Rack Extensions running in hardware, but it’s actually the same idea. ELK MusicOS from MINDMusicLabs allows the same tech to run on a Linux-based operating system, on any hardware you want. So if you want self-contained instruments with knobs and faders and buttons – and no distracting screens or awkward keyboards – you can do it.

It’s not so much post-PC as it is more PC than your PC. Computers should be capable of ultra-low latency, reliable operation, even running general purpose systems. The problem is, musicians aren’t the ones calling the shots.

MusicOS can cut latency below 1ms round-trip, runs on single Intel and ARM CPUs, and has official support for VST and Rack Extensions – plus full support for connectivity (USB, WiFi, Bluetooth, 4G mobile data, and MIDI).

What was cool at Superbooth was seeing some recognizable hardware prototypes using the technology. We saw a VST plug-in from Steinberg just before the show; for the Rack Extension side of things, you get a Eurorack module version of KORG’s Polysix, using their own Component Modeling Technology. (So it is a software model, but here with hands-on control and modular connectivity.)

For now, it’s just a prototype. But Rack Extension support, like VST, is officially part of MusicOS. Now it’s just up to hardware makers to take the plunge. Based on interest we saw from CDM readers and heard around the show, there is serious market potential here.

In other words, this could be the sign of things to come. ELK’s tech works in such a way that more or less the same code can target custom hardware as desktop software. And compatible systems on a chip start around ten bucks – meaning this could be an effective platform for a lot of people. (I’m not clear on how much licensing costs; ELK ask interested developers to ‘get in touch’, so it may be negotiated case by case.)

https://www.mindmusiclabs.com/powered-by-elk-eurorack-synth/

https://www.mindmusiclabs.com/

Previously:

Hardware VST? Steinberg Retrologue plug-in gets physical version

Your music software goes modular: builder-friendly Bitwig 3 beta is here

It may have been in the temple of wires and racks, but Berlin’s Bitwig chose this weekend’s Superbooth to launch a public beta of their all-modular DAW, Bitwig Studio 3. It lets you wire things together with hardware, entirely inside software, or as a combination of the two.

It’s called The Grid – and it’s all about patching inside your music workflow, so you can construct stuff you want instead of dialing up big monolithic tools and presets. And that sounds great to builders, I’m sure.

Going modular was really the promise of Bitwig Studio from the start – something to rocket the software from “oh, hey, I can run something kinda like Ableton on Linux” to … “wow, this is something really special.”

The idea is, get a music making tool that behaves not just like a set of tracks and channels, or a bank of patterns and samples, but more like a toybox that lets you build whatever you want from various blocks. And before anyone tries to launch another of those “hardware versus software” debates (yawn), a friendly reminder that computers used a modular generator model for digital audio in the late 1950s – years before any recognizable hardware modular was even a thing. (Okay, granted, you needed a stack of punch cards and access to an IBM mainframe or two and the user base was something like ‘people who happen to know Max Mathews,’ but still…)

Bitwig Studio 3 is in beta now, so you can toy around with it and see what you think. (Bitwig are very clear about not putting important projects in there.)

I wrote about this at the start of this year.

Bitwig Studio is about to deliver on a fully modular core in a DAW

But now there’s a friendly video to walk you through how it all works:

Basically, think friendly musical blocks for pattern and timbre, pre-cords so things get patched easily, and powerful features built around phase.

With Beta 1, we also see some specifics – you can produce your own stereo synths and effects with the two Grid devices:

Patching may be a nerdy endeavor, but Bitwig’s design makes it much friendlier – and there’s already great tutorial documentation even in the beta.

Poly Grid: “for creating synthesizers, sequenced patches (like a beatbox), droning sounds, etc.”

FX Grid for effects

Signal/modulation I/O – including pressure, CV (like from hardware)

Visualization (labels, VU, readouts)

Phase – loads of stuff here, as promised: Phasor, Ø Bend, Ø Reset, Ø Scaler, Ø Reverse, Ø Wrap, Ø Counter, Ø Formant, Ø Lag, Ø Mirror, Ø Shift, Ø Sinemod, Ø Skew, Ø Sync

Oscillators (including Swarm, Sampler)

Random

LFO

Envelope / follower

Shaper (ooh, Chebyshev, Distortion, Quantizer, Rectifier, Wavefolder)

Filter (Low-pass LD, Low-pass SK, SVF, High-pass, Low-pass, Comb)

Delay types – need to dig into these; they look promising

Mix – Blend, Mixer, LR Mix, Select, Toggle, Merge, Split, Stereo Merge, Stereo Split, Stereo Width

Level – Level, Value, Attenuate, Bias, Drive, Gain, AM/RM, Average, Bend, Clip, Hold, Lag, Sample / Hold, Level Scaler, Bi→Uni, Uni→Bi

Pitch scalers and tools

Math operators

Logic operators

— all in all, it’s a really nice selection of tools, and a balance of low-level signal tools/operators and easy convenience tools that are higher level. And it’s also not an overwhelming number – which is good; it’s clear this should be its own tool and not try to replicate the likes of Max, SuperCollider, and Reaktor.

More improvements

Also in this build:

Reworked audio backends for every OS (good)

UI overhaul

Ableton Link 3 support with transport start/stop sync

And – a little thing, but you can view the timeline with actual time (minutes, ms) …

More on this soon.

Beta users will find a really nice, complete tutorial, so you can start practicing building. Have fun!

BITWIG STUDIO 3: NOW IN BETA

Surge is free, deep synth for every platform, with MPE support

Surge is a deep multi-engine digital soft synth – beloved, then lost, then brought back to life as an open source project. And now it’s in a beta that’s usable and powerful and ready on every OS.

I wrote about Surge in the fall when it first hit a free, open source release:

Vember Audio owner @Kurasu made this happen. But software just “being open sourced” often leads nowhere. In this case, Surge has a robust community around it, turning this uniquely open instrument into something you can happily use as a plug-in alongside proprietary choices.

And it really is deep: stack 3 oscillators per voice, use morphable classic or FM or ring modulation or noise engines, route through a rich filter block with feedback and every kind of variation imaginable – even more exotic notch or comb or sample & hold choices, and then add loads of modulation. There are some 12 LFOs per voice, multiple effects, a vocoder, a rotary speaker…

I mention it again because now you can grab Mac (64-bit AU/VST), Windows (32-bit and 64-bit VST), and Linux (64-bit VST) versions, built for you.

And there’s VST3 support.

And there’s support for MPE (MIDI Polyphonic Expression), meaning you can use hardware from ROLI, Roger Linn, Haken, and others – I’m keen to try the Sensel Morph, perhaps with that Buchla overlay.
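
If MPE is new to you, the core trick is simple: each sounding note gets its own MIDI channel, so pitch bend and pressure can apply per note instead of globally. Here’s a rough Python sketch of that idea using the mido library – the port, notes, and bend amount are placeholders, nothing Surge-specific:

import mido

out = mido.open_output()   # default port; in practice, pick whichever port your synth listens on

# MPE lower zone: MIDI channel 1 is the master, each voice gets its own member channel (2-16)
notes = [60, 64, 67]
for ch, note in enumerate(notes, start=1):   # mido channels are 0-indexed, so 1..3 = MIDI channels 2..4
    out.send(mido.Message('note_on', channel=ch, note=note, velocity=100))

# Bend only the first note - the other voices are untouched, which is the whole point of MPE
out.send(mido.Message('pitchwheel', channel=1, pitch=2048))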

Now there’s also an analog mode for the envelopes, too.

This also holds great promise for people who desire a deep synth but can’t afford expensive hardware. While Apple’s approach means backwards compatibility on macOS is limited, it’ll run on fairly modest machines – meaning this could also be an ideal starting point for building your own integrated hardware/software solution.

In fact, if you’re not much of a coder but are a designer, it looks like design is what they need most at this point. Plus you can contribute sound content, too.

Most encouraging is really that they are trying to build a whole community around this synth – not just treating open source maintenance as a chore, but making it a shared endeavor.

Check it out now:

https://surge-synthesizer.github.io

Previously:

Powerful SURGE synth for Mac and Windows is now free

Alternative modular: pd knobs is a Pure Data-friendly knob controller

pd knobs is a knob controller for MIDI. It’s built on a Teensy with open source code – or you can get the pre-built version, with some pretty, apparently nice-feeling knobs. And here it is with the free software Pd + AUTOMATONISM – proof that you don’t need to buy Eurorack just to go modular.

And that’s relevant, actually. Laptops can be had for a few hundred bucks; this controller is reasonably inexpensive, or you could DIY it. Add Automatonism, and you have a virtually unlimited modular of your own making. I love that Eurorack is supporting builders, but I don’t think the barrier to entry for music should be a world where a single oscillator costs what a lot of people spend in a month on rent.

And, anyway, this sounds really cool. Check the demo:

From the creator, Sonoclast:

pd knobs is a 13 knob MIDI CC controller. It can control any software that recognizes MIDI CC messages, but it was obviously designed with Pure Data in mind. I created it because I wanted a knobby interface with nice feeling potentiometers that would preserve its state from session-to-session, like a hardware instrument would. MIDI output is over a USB cable.

For users of the free graphical modular Pd, there are some ready-to-use abstractions for MIDI or even audio-rate control. You can also easily remap the controllers with some simple code.
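
The Pd abstractions are the intended route, but the remapping idea translates anywhere. Here’s a minimal sketch in Python with the mido library – the port names and CC numbers below are placeholders, not the controller’s actual assignments:

import mido

REMAP = {1: 74, 2: 71, 3: 91}   # incoming CC -> outgoing CC; example values only

# Port names are placeholders - list yours with mido.get_input_names() / mido.get_output_names()
with mido.open_input("pd knobs") as inport, mido.open_output("To Synth") as outport:
    for msg in inport:                                    # blocks, forwarding messages as they arrive
        if msg.type == "control_change" and msg.control in REMAP:
            msg = msg.copy(control=REMAP[msg.control])
        outport.send(msg)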

More:

http://sonoclast.com/products/pd-knobs/

Buy from Reverb.com:

https://reverb.com/item/21147215-sonoclast-pd-knobs-midi-cc-controller

How to make a multitrack recording in VCV Rack modular, free

In the original modular synth era, your only way to capture ideas was to record to tape. But that same approach can be liberating even in the digital age – and it’s a perfect match for the open VCV Rack software modular platform.

Competing modular environments like Reaktor, Softube Modular, and Cherry Audio Voltage Modular all run well as plug-ins. That functionality is coming soon to a VCV Rack update, too – see my recent write-up on that. In the meanwhile, VCV Rack is already capable of routing audio into a DAW or multitrack recorder – via the existing (though soon-to-be-deprecated) VST Bridge, or via inter-app routing schemes on each OS, including JACK.

Those are all good solutions, so why would you bother with a module inside the rack?

Well, for one, there’s workflow. There’s something nice about being able to just keep this record module handy and grab a weird sound or nice groove at will, without having to shift to another tool.

Two, the big ongoing disadvantage of software modular is that it’s still pretty CPU intensive – sometimes unpredictably so. Running Rack standalone means you don’t have to worry about overhead from the host, or its audio driver settings, or anything like that.

A free recording solution inside VCV Rack

What you’ll need to make this work is the free NYSTHI modules for VCV Rack, available via Rack’s plug-in manager. They’re free, but get ready – there are a hell of a lot of them.

Big thanks to chaircrusher for this tip and some other ones that informed this article – do go check his music.

Type “recorder” into the search box for modules, and you’ll see different options from NYSTHI – current at least as of this writing.

2 Channel MasterRecorder is a simple stereo recorder.
2 Channel MasterRecorder 2 adds various features: monitoring outs, autosave, a compressor, and “stereo massaging.”
Multitrack Recorder is a multitrack recorder with 4- or 8-channel modes.

The multitrack is the one I use the most. It allows you to create stems you can then mix in another host, or turn into samples (or, say, load onto a drum machine or the like), making this a great sound design tool and sound starter.

This is creatively liberating for the same reason it’s actually fun to have a multitrack tape recorder in the same studio as a modular, speaking of vintage gear. You can muck about with knobs, find something magical, and record it – then move on to something else without worrying about losing the moment.

The AS mixer, routed into NYSTHI’s multitrack recorder.

Set up your mix. The free included Fundamental modules in Rack will cover the basics, but I would also go download Alfredo Santamaria’s excellent selection, the AS modules, also in the Plugin Manager, and also free. Alfredo has created friendly, easy-to-use 2-, 4-, and 8-channel mixers that pair perfectly with the NYSTHI recorders.

Add the mixer, route your various parts, set level (maybe with some temporary panning), and route the output of the mixer to the Audio device for monitoring. Then use the ‘O’ row to get a post-fader output with the level.

(Alternatively, if you need extra features like sends, there’s the mscHack mixer, though it’s more complex and less attractive.)

Prep that signal. You might also consider a DC Offset and Compressor between your raw sources and the recording. (Thanks to Jim Aikin for that tip.)

Configure the recorder. Right-click on the recorder for an option to set 24-bit audio if you want more headroom, or to pre-select a destination. Set 4- or 8-track mode with the switch. Set CHOOSE FILE if you want to manually select where to record.

There are trigger ins and outs, too, so apart from just pressing the START and STOP buttons, you can either trigger a sequencer or clock directly from the recorder, or vice versa.

Record away! And go to town… when you’re done, you’ll get a stereo WAV file, or a 4- or 8-track WAV file. Yes, that’s one file with all the tracks. So about that…

Splitting up the multitrack file

This module produces a single, multichannel WAV file. Some software will know what to do with that. Reaper, for instance, has excellent multichannel support throughout, so you can just drag and drop into it. Adobe’s Audition CS also opens these files, but it can’t quickly export all the stems.

Software like Ableton Live, meanwhile, will just throw up an error if you try to open the file. (Bad Ableton! No!)

It’s useful to have individual stems anyway. ffmpeg is an insanely powerful cross-platform tool capable of doing all kinds of things with media. It’s completely free and open source, it runs on every platform, and it’s fast and deep. (It converts! It streams! It records!)

Installing is easier than it used to be, thanks to a cleaned-up site and pre-built binaries for Mac and Windows (plus of course the usual easy Linux installs):

https://ffmpeg.org/

Unfortunately, it’s so deep and powerful, it can also be confusing to figure out how to do something. Case in point – this audio channel manipulation wiki page.

In this case, you can use the -map_channel option to make this happen. So for eight channels, I do this:

ffmpeg -i input.wav -map_channel 0.0.0 0.wav -map_channel 0.0.1 1.wav -map_channel 0.0.2 2.wav -map_channel 0.0.3 3.wav -map_channel 0.0.4 4.wav -map_channel 0.0.5 5.wav -map_channel 0.0.6 6.wav -map_channel 0.0.7 7.wav

But because this is a command line tool, you could create some powerful automated workflows for your modular outputs now that you know this technique.
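
For instance, here’s a minimal sketch of that kind of automation in Python – it just shells out to the same ffmpeg command for every multichannel WAV in a folder. The folder names and eight-channel count are placeholders for your own setup, and ffmpeg needs to be on your PATH:

import subprocess
from pathlib import Path

SRC, OUT, CHANNELS = Path("recordings"), Path("stems"), 8   # placeholders - point these at your files
OUT.mkdir(exist_ok=True)

for wav in sorted(SRC.glob("*.wav")):
    cmd = ["ffmpeg", "-y", "-i", str(wav)]
    for ch in range(CHANNELS):                              # one -map_channel / output file per track
        cmd += ["-map_channel", f"0.0.{ch}", str(OUT / f"{wav.stem}_track{ch}.wav")]
    subprocess.run(cmd, check=True)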

Sound Devices, the folks who make excellent multichannel recorders, also have a free Mac and Windows tool called Wave Agent which handles this task if you want a GUI instead of the command line.

https://www.sounddevices.com/products/accessories/software/wave-agent

That’s worth keeping around, too, since it can also mix and monitor your output. (No Linux version, though.)

Record away!

Bonus tutorial here – the other thing apart from recording you’ll obviously want with VCV Rack is some hands-on control. Here’s a nice tutorial this week on working with BeatStep Pro from Arturia (also a favorite in the hardware modular world):

I really like this way of working, in that it lets you focus on the modular environment instead of juggling tools. I actually hope we’ll see a Fundamental module for the task in the future. Rack’s modular ecosystem changes fast, so if you find other useful recorders, let us know.

https://vcvrack.com/

Previously:

Step one: How to start using VCV Rack, the free modular software

How to make the free VCV Rack modular work with Ableton Link

Azure Kinect promises new motion, tracking for art

Gamers’ interest may come and go, but artists are always exploring the potential of computer vision for expression. Microsoft this month has resurrected the Kinect, albeit in pricey, limited form. Let’s fit it to the family tree.

Time flies: musicians and electronic artists have now had access to readily available computer vision since the turn of this century. That initially looked like webcams, paired with libraries like the free OpenCV (still a viable option), and later repurposed gaming devices from Sony and Microsoft platforms.

And then came Kinect. Kinect was a darling of live visual projects and art installations, because of its relatively sophisticated skeletal tracking and various artist-friendly developer tools.

History time

A full ten years ago, I was writing about the Microsoft project and its interactions, in its first iteration as the pre-release Project Natal. Xbox 360 support followed in 2010, Windows support in 2012 – while digital artists quickly hacked in Mac (and rudimentary Linux) support. Artists in music and digital media quickly followed.

For those of you just joining us, Kinect shines infrared light at a scene, and takes an infrared image (so it can work irrespective of other lighting) which it converts into a 3D depth map of the scene. From that depth image, Microsoft’s software can also track the skeleton image of one or two people, which lets you respond to the movement of bodies. Microsoft and partner PrimeSense weren’t the only ones to try this scheme, but they were the ones to ship the most units and attract the most developers.

We’re now on the third major revision of the camera hardware.

2010: Original Kinect for Xbox 360. The original. Proprietary connector with breakout to USB and power. These devices are far more common, as they were cheaper and shipped more widely. Despite the name, they do work with open drivers on the respective desktop systems.

2012: Kinect for Windows. Looks and works almost identically to Kinect for 360, with some minor differences (near mode).

Raw use of depth maps and the like yielded countless music videos, and the skeletal tracking yielded even more numerous – and typically awkward – “wave your hands around to play the music” examples.

Here’s me with a quick demo for the TED organization, preceded by some discussion of why I think gesture matters. It’s… slightly embarrassing, only in that it was produced on an extremely tight schedule, and I think the creative exploration of what I was saying about gesture just wasn’t ready yet. (Not only had I not quite caught up, but camera tech like what Microsoft is shipping this year is far better suited to the task than the original Kinect camera was.) But the points I’m making here have some fresh meaning for me now.

2013: Kinect for Xbox One. Here’s where things got more interesting – because of a major hardware upgrade, these cameras are far more effective at tracking and yield greater performance.

  • Active IR tracking in the dark
  • Wider field of vision
  • 6 skeletons (people) instead of two
  • More tracking features, with additional joints and creepier features like heart rate and facial expression
  • 1080p color camera
  • Faster performance/throughput (which was key to more expressive results)

Kinect One, the second camera (confusing!), definitely allowed more expressive applications. One high point for me was the simple but utterly effective work of Chris Milk and team, “The Treachery of Sanctuary.”

And then it ended. Microsoft unbundled the camera from Xbox One, meaning developers couldn’t count on gamers owning the hardware, and quietly discontinued the last camera at the end of October 2017.

Everything old is new again

I have mixed feelings – as I’m sure you do – about these cameras, even with the later results on Kinect One. For gaming, the devices were abandoned – by gamers, by developers, and by Microsoft as the company ditched the Xbox strategy. (Parallel work at Sony didn’t fare much better.)

It’s hard to keep up with consumer expectations. By implying “computer vision,” any such technology has to compete with your own brain – and your own brain is really, really, really good. “Sensors” and “computation” are all merged in organic harmony, allowing you to rapidly detect the tiniest nuance. You can read a poker player’s tell in an instant, while Kinect will lose the ability to recognize that your leg is attached to your body. Microsoft launched Project Natal talking about seeing a ball and kicking a ball, but… you can do that with a real ball, and you really can’t do that with a camera, so they quite literally got off on the wrong foot.

It’s not just gaming, either. On the art side, the tendency of these cameras to yield the same demos over and over again – yet another magic mirror – might well be their downfall.

So why am I even bothering to write this?

Simple: the existing, state-of-the-art Kinect One camera is now available on the used market for well under a hundred bucks – for less than the cost of a mid-range computer mouse. Microsoft’s gaming business whims are your budget buy. The computers to process that data are faster and cheaper. And the software is more mature.

So while digital art has long been driven by novelty … who cares? Actual music and art making requires practice and maturity of both tools and artist. It takes time. So oddly while creative specialists were ahead of the curve on these sorts of devices, the same communities might well innovate in the lagging cycle of the same technology.

And oh yeah – the next generation looks very powerful.

Kinect: The Next Generation

Let’s get the bad news out of the way first: the new Kinect is both more expensive ($400) and less available (launching only in the US and China… in June). Ugh. And that continues Microsoft’s trend here of starting with general purpose hardware for mass audiences and working up to … wait, working up to increasingly expensive hardware for smaller and smaller groups of developers.

That is definitely backwards from how this is normally meant to work.

But the good news here is unexpected. Kinect was lost, and now is found.

The safe bet was that Microsoft would just abandon Kinect after the gaming failure. But to the company’s credit, they’ve pressed on, with some clear interest in letting developers, researchers, and artists decide what this thing is really for. Smart move: those folks often come up with inspiration that doesn’t fit the demands of the gaming industry.

So now Kinect is back, dubbed Azure Kinect – Microsoft is also hell-bent on turning Azure “cloud services” into a catch-all solution for all things, everywhere.

And the hardware looks … well, kind of amazing. It might be described as a first post-smartphone device. Say what? Well, now that smartphones have largely finalized their sensing capabilities, they’ve oddly left the arena open to other tech defining new areas.

For a really good write-up, you’ll want to read this great run-down:


All you need to know on Azure Kinect
[The Ghost Howls, a VR/tech blog, see also a detailed run-down of HoloLens 2 which also just came out]

Here are the highlights, though. Azure Kinect is the child of Kinect and HoloLens. It’s a VR-era sensor, but standalone – which is perfect for performance and art.

Fundamentally, the formula is the same – depth camera, conventional RGB camera, some microphones, additional sensors. But now you get more sensing capabilities and substantially beefed-up image processing.

1MP depth camera (not 640×480) – straight off of HoloLens 2, Microsoft’s augmented reality platform
Two modes: wide and narrow field of view
4K RGB camera (with standard USB camera operation)
7 microphone array
Gyroscope + accelerometer

And it connects either by USB-C (which can also be used for power) or as a standalone camera with “cloud connection.” (You know, I’m pretty sure that means it has a wifi radio, but oddly all the tech reporters who talked to Microsoft bought the “cloud” buzzword and no one says so outright. I’ll double-check.)

Also, now Microsoft supports both Windows and Linux. (Ubuntu 18.04 + OpenGL v 4.4).

Downers: 30 fps operation, limited range.

Something something, hospitals or assembly lines, Azure services, something that looks like an IBM / Cisco ad:

That in itself is interesting. Artists using the same thing as gamers sort of … didn’t work well. But artists using the same tool as an assembly line is something new.

And here’s the best part for live performance and interaction design – you can freely combine as many cameras as you want, and sync them without any weird tricks.

All in all, this looks like it might be the best networked camera, full stop, let alone best for tracking, depth sensing, and other applications. And Microsoft are planning special SDKs for the sensor, body tracking, vision, and speech.

Also, the fact that it doesn’t plug into an Xbox is a feature, not a bug to me – it means Microsoft are finally focusing on the more innovative, experimental uses of these cameras.

So don’t write off Kinect now. In fact, with Kinect One so cheap, it might be worth picking one up and trying Microsoft’s own SDK just for practice.

Azure Kinect DK preorder / product page

aka.ms/kinectdocs

Bitwig Studio is about to deliver on a fully modular core in a DAW

Bitwig Studio may have started in the shadow of Ableton, but one of its initial promises was building a DAW that was modular from the ground up. Bitwig Studio 3 is poised to finally deliver on that promise, with “The Grid.”

Having a truly modular system inside a DAW offers some tantalizing possibilities. It means, in theory at least, you can construct whatever you want from basic building blocks. And in the very opposite of today’s age of presets, that could make your music tool feel more your own.

Oh yeah, and if there is such an engine inside your DAW, you can also count on other people building a bunch of stuff you can reuse.

Why modularity? It doesn’t have to just be about tinkering (though that can be fun for a lot of people).

A modular setup is the very opposite of a preset mentality for music production. Experienced users of these environments (software especially, since it’s open-ended) do often find that patching exactly what they need can be more creative and inspirational. It can even save time versus the effort spent trying to whittle away at a big, monolithic tool just to get to the bit you actually want. But the traditional environments for modular development are fairly unfriendly to new users – that’s why very often people’s first encounters with Max/MSP, SuperCollider, Pd, Reaktor, and the like are in a college course. (And not everyone has access to those.) Here, you get a toolset that could prove more manageable. And then once you have a patch you like, you can still interconnect premade devices – and you can work with clips and linear arrangement to actually finish songs. With the other tools, that often means coding out the structure of your song or trying to link up to a different piece of software.

We’ve seen other DAWs go modular in different ways. There’s Apple Logic’s now rarely-used Environment. There’s Reason with its rich, patchable rack and devices. There’s Sensomusic Usine, which is a fully modular DAW / audio environment, and DMX lighting and video tool – perhaps the most modular of these (even relative to Bitwig Studio and The Grid). And of course there’s Ableton Live with Max for Live, though that’s really a different animal – it’s a full patching development environment that runs inside Live via a runtime, with API and interface hooks that allow you to access its devices. The upside: Max for Live can do just about everything. The downside: it’s mostly foreign to Ableton Live (as it’s a different piece of software with its own history), and it could be too deep for someone just wanting to build an effect or instrument.

So, enter The Grid. This is really the first time a relatively conventional DAW has gotten its own, native modular environment that can build instruments and effects. And it looks like it could be accomplished in a way that feels comfortable to existing users. You get a toolset for patching your own stuff inside the DAW, and you can even mix and match signal to outboard hardware modular if that’s your thing.

And it really focuses on sound applications, too, with three devices. One is dedicated to monophonic synths, one to polyphonic synths, and one to effects.

From there, you get a fully modular setup with a modern-looking UI and 120+ modules to choose from.

They’ve done a whole lot to ease the learning curve normally associated with these environments – smoothing out some of the wrinkles that usually baffle beginners:

You can patch anything to anything, in to out. All signals are interchangeable – connect any out to any in. Most other software environments don’t work that way, which can mean a steeper learning curve. (We’ll have to see how this works in practice inside The Grid).

Any in can go to any out – reducing some of the complexity of other patching environments (software and hardware alike).

Everything’s stereo. Here’s another way of reducing complexity. Normally, you have to duplicate signals to get stereo, which can be confusing for beginners. Here, every audio cable and every control cable routes stereo.

Everything’s also in living stereo, reducing cable count and cognitive effort.

There are default patchings. Funny enough, this idea has actually been seen on hardware – there are default routings so modules automatically wire themselves if you want, via what Bitwig calls “pre-cords.” That means if you’re new to the environment, you can always plug stuff in.

They’ve also promised to make phase easier to understand, which should open up creative use of time and modulation to those who may have been intimidated by these concepts before.

“Pre-cords” mean you can easily add default patchings to get stuff working straight away.

What fun is a modular tool if you can’t explore phase? Bitwig say they’ve made this concept more accessible to modulation and easier to learn.
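
If “phase” still sounds abstract, the underlying idea is easy to sketch outside Bitwig: a phasor is just a ramp that runs from 0 to 1, oscillators are shapers applied to that ramp, and scaling or bending the phase reshapes pitch and timbre while everything stays locked in sync. A rough illustration in Python – the general concept, not Bitwig’s implementation:

import numpy as np

SR = 48000

def phasor(freq, seconds):
    n = int(seconds * SR)
    return (np.arange(n) * freq / SR) % 1.0            # a 0..1 ramp, `freq` times per second

phase = phasor(220.0, 1.0)
sine = np.sin(2 * np.pi * phase)                       # shape the ramp into a sine oscillator
octave_up = np.sin(2 * np.pi * ((phase * 2) % 1.0))    # scale the phase: same ramp, doubled pitch
skewed = np.sin(2 * np.pi * (phase ** 2))              # bend the ramp: same pitch, different timbre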

There’s also a big advantage to this being native to the environment – again, something you could only really say about Sensomusic Usine before now (at least as far as things that could double as DAWs).

This unlocks:

  • Nesting and layering devices alongside other Bitwig devices
  • Full support from the Open Controller API. (Wow, this is a pain the moment you put something like Reaktor into another host, too.)
  • Route modulation out of your stuff from The Grid into other Bitwig devices.
  • Complete hardware modular integration – yeah, you can mix your software with hardware as if they’re one environment. Bitwig says they’ve included “dedicated grid modules for sending any control, trigger, or pitch signal as CV Out and receiving any CV In.”

I’ve been waiting for this basically since the beginning. This is an unprecedented level of integration, where every device you see in Bitwig Studio is already based on this modular environment. Bitwig had even touted that early on, but I think they were overzealous in letting people know about their plans. It unsurprisingly took a while to make that interface user friendly, which is why it’ll be a pleasure to try this now and see how they’ve done. But Bitwig tells us this is in fact the same engine – and that the interface “melds our twin focus on modularity and swift workflows.”

There’s also a significant dedication to signal fidelity. There’s 4X oversampling throughout. That should generally sound better, but it also has implications for control and modularity. And it’ll make modulation more powerful in synthesis, Bitwig tells CDM:

With phase, sync, and pitch inputs on most every oscillator, there are many opportunities here for complex setups. Providing this additional bandwidth keeps most any patch or experiment from audible aliasing. As an open system, this type of optimization works for the most cases without overtaxing processors.
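
To see why the oversampling matters for this kind of patching, here’s a rough, illustrative sketch in Python (not Bitwig’s code, and the numbers are arbitrary): waveshaping a 5 kHz sine at 48 kHz folds its upper harmonics back into the audible band as aliases, while doing the same shaping at 4x the rate and band-limiting back down keeps them out:

import numpy as np
from scipy.signal import resample

SR, F = 48000, 5000
x = np.sin(2 * np.pi * F * np.arange(SR) / SR)      # one second of a 5 kHz sine

naive = np.tanh(4 * x)                              # shaped at 48 kHz: harmonics fold back as aliases
shaped_4x = np.tanh(4 * resample(x, 4 * SR))        # shaped at 192 kHz instead
oversampled = resample(shaped_4x, SR)               # band-limit back down to 48 kHz

def level_db(sig, freq):
    spec = np.abs(np.fft.rfft(sig)) / len(sig)      # bins land exactly on 1 Hz steps here
    return 20 * np.log10(spec[int(freq)] + 1e-12)

# 15 kHz is a real harmonic of 5 kHz; 23 kHz is where the folded 25 kHz harmonic lands
for label, sig in (("naive", naive), ("oversampled", oversampled)):
    print(f"{label:>12}: 15 kHz {level_db(sig, 15000):6.1f} dB, 23 kHz alias {level_db(sig, 23000):6.1f} dB")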

It’s stereo only, which puts it behind some of the multichannel capabilities of Reaktor, Max, SuperCollider, and others – Max/MSP especially given its recent developments. But that could see some growth in a later release, Bitwig hints. For now, I think stereo will keep us plenty busy.

They’ve also been busy optimizing, Bitwig tells us:

This is something we worked a lot on in early development, particularly optimizing performance on the oversampled, stereo paths to align with the vector units of desktop processors. In addition, the modules are compiled at runtime for the best performance on the particular CPU in use.

That’s a big deal. I’m also excited about using this on Linux – where, by the way, you can really easily use JACK to integrate other environments like SuperCollider or live coding tools.

If you’re at NAMM, Bitwig will show The Grid as part of Bitwig Studio 3. They have a release coming in the second quarter, but we’ll sit down with them here in Berlin for a detailed closer look (minus NAMM noise in the background or jetlag)!

Oh yeah, and if you’ve got the Upgrade Plan, it’s free.

This is really about making a fully modular DAW – as opposed to the fixed multitrack tape/mixer models of the past. Bitwig have even written up an article about how they see modularity and how it’s evolved over various release versions:

BEHIND THE SCENES: MODULARITY IN BITWIG STUDIO

More on Bitwig Studio 3:

https://www.bitwig.com/en/19/bitwig-studio-3

Obligatory:

Oh yeah, also Tron: Legacy seems like a better movie with French subtitles…

That last line fits: “And the world was more beautiful than I ever dreamed – and also more dangerous … hop in bed now, come on.”

Yeah, personal life / sleep … in trouble.

Build your own scratch DJ controller

If DJing originated in the creative misuse and appropriation of hardware, perhaps the next wave will come from DIYers inventing new approaches. No need to wait, anyway – you can try building this scratch controller yourself.

DJWORX has done some great ongoing coverage of Andy Tait aka Rasteri. You can read a complete overview of Andy’s SC1000, a Raspberry Pi-based project with metal touch platter:

Step aside portablism — the tiny SC1000 is here

In turn, there’s also that project’s cousin, the 7″ Portable Scratcher aka 7PS.

If you’re wondering what portablism is, that’s DJs carrying portable record players around. But maybe more to the point, if you can invent new gear that fits in a DJ booth, you can experiment with DJing in new ways. (Think how much current technique is really circumscribed by the feature set of CDJs, turntables, and fairly identical DJ software.)

Or to look at it another way, you can really treat the DJ device as a musical instrument – one you can still carry around easily.

The SC1000 in Rasteri’s capable hands is exciting just to behold:

Everything you need to build this yourself – or to discover the basis for other ideas – is up on GitHub:

https://github.com/rasteri/SC1000/

This is not a beginner project. But it’s not overwhelmingly complicated, either. Basically…

Ingredients:
Custom PCB
System-on-module (the brains of the operation)
SD card
Enclosure
Jog wheel with metal capacitive touch surface and magnet
Mini fader

Free software powers the actual DJing. (It’s based on xwax, open source Linux digital vinyl emulation, which we’ve seen as the basis of other DIY projects.)

Process:

You need to assemble the main PCB – there’s your soldering iron action.

And you’ll flash the firmware (which requires a PIC programmer), plus transfer the OS to SD card.

Assembly of the jog wheel and enclosure requires a little drilling and gluing.

Other than that it’s a matter of testing and connection.

Build tutorial:

Full open source under a GPLv2 license. (Andy sort of left out the hardware license – this really illustrates that GNU needs a license that blankets both hardware and software, though that’s complex legally. There’s no copyright information on the hardware; to be fully open it needs something like a Creative Commons license on those elements of the designs. But that’s not a big deal.)

It looks really fantastic. I definitely want to try building one of these in Berlin – will team up and let you know how it goes.

This clearly isn’t for everyone. But the reason I mention going to custom hardware is, this means both that you can adapt your own technique to a particular instrument and you can modify the way the digital DJ tool responds if you so choose. It may take some time before we see that bear fruit, but it definitely holds some potential.

Via:
Rasteri’s SC1000 scratch controller — build your own today [thanks to Mark Settle over at DJWORX!]

Project page:
https://github.com/rasteri/SC1000/

Thanks, Dubby Labby!

Bitwig Studio 2.5 beta arrives with features inspired by the community

We’re coasting to the end of 2018, but Bitwig has managed to squeeze in Studio 2.5, with features the company says were inspired by or directly requested by users.

The most interesting of these adds some interactive arrangement features to the linear side of the DAW. Traditional DAWs like Cubase have offered interactive features, but they generally take place on the timeline. Or you can loop individual regions in most DAWs, but that’s it.

Bitwig are adding interactive actions to the clips themselves, right in the arrangement. “Clip Blocks” apply Next Action features to individual clips.

Also in this release:

“Audio Slide” lets you slide audio inside clips without leaving the arranger. That’s possible in many other DAWs, but it’s definitely a welcome addition in Bitwig Studio – especially because an audio clip can contain multiple audio events, which isn’t necessarily possible elsewhere.

Note FX Selector lets you sweep through multiple layers of MIDI effects. We’ve seen something like this before, too, but this implementation is really nice.

There’s also a new set of 60 Sampler presets with hundreds of full-frequency waveforms – looks great for building up instruments. (This makes me ready to boot into Linux with Bitwig, too, where I don’t necessarily have my full plug-in library at my disposal.)

Other improvements:

  • Browser results by relevance
  • Faster plug-in scanning
  • 50 more functions accessible as user-definable key commands

To me, the thing that makes this newsworthy, and the one to test, is really this notion of an interactive arrangement view.

Ableton pioneered Follow Actions in Live’s Session View years back, but they’ve failed to apply that concept even inside Session View to scenes. (Some Max for Live hacks fill in the gap, but that only proves that people are looking for this feature.)

Making the arrangement itself interactive at the clip level – that’s really something new.

Now, that said, let’s play with Clip Blocks in Bitwig 2.5 and see if this is helpful or just confusing or superfluous in arrangements. (Presumably you can toy with different arrangement possibilities and then bounce out whatever you’ve chosen? I have to test this myself.) And there’s also the question of whether this much interactivity actually just has you messing around instead of making decisions, but that’s another story.

Go check out the release, and if you’re a Bitwig user, you can immediately try out the beta. Let us know what you think and how those Clip Blocks impact your creative process. (Or share what you make!)

Just please – no EDM tabla. (I think that moment sent a chill of terror down my spine in the demo video.)

https://www.bitwig.com/en/18/bitwig-studio-2_5.html
