
ROLI, Makers of Seaboard Instrument, Just Bought The Leading C++ Audio Framework


Here’s some important news that might affect you – even if you’ve never heard of the instrument maker and know nothing about code libraries. Bear with us: an experimental instrument builder and design shop just acquired the most popular framework used by audio developers, a set of free and open source gems.

The film explaining the announcement:

First, there’s ROLI. Now, to most of us in the music world, ROLI are the Dalston, London firm that make an alternative keyboard called the Seaboard – a sort of newer cousin to the Haken Continuum Fingerboard that uses foam you press with your fingers to add expression and bend pitches. But ROLI wants you to think of them as a design shop focused on interaction. So they don’t just say “we’re going to go make weird instruments”; whether you buy the pitch or not, they say they want nothing less than to transform all human-machine interaction.

And yes, there’s a film for that, too. (Those of you playing the startup drinking game, fair warning: the words “design” and “artisanal” appear in the opening moments, so you could wind up a bit unhealthy.)

ROLI isn’t a company forged in the mold of most music manufacturers. They’re most definitely a startup. They have an in-house chef and a Wellness Manager, even before they’re widely shipping their product. That’s in stark contrast to the steady growth rate of traditional music gear makers. (I’ve seen both Native Instruments and Ableton put up charts that show linear hiring over a period of many years. Many other makers were bootstrapped.) The difference: a record US$12.8 million in Series A funding. And as the Wall Street Journal noted, that money arrives just as some big players (Roland) are seeing diminished sales.

With that additional funding, ROLI are being more aggressive. If it pays off, it could transform the industry. (And I’d say that’s true even – maybe especially if – they manage to use a musical instrument as a gateway to other non-musical markets.)


So, they’re buying JUCE. JUCE is an enormous library that allows developers to easily build audio plug-ins (VST, VST3, AU, RTAS and AAX are supported), design user interfaces quickly with standard libraries, and handle audio (MIDI, sound, and so on), networking, data, and other tasks with ease. A million lines of code and modular support for these features give developers a robust toolkit of building blocks, saving them the chore of reinventing the wheel.

Just as importantly, the results can run across desktop and mobile platforms – OS X, Windows, Linux, iOS, and Android all work out of the box.

I couldn’t get solid stats in time for this story on how many people use JUCE, but it’s a lot. KORG, Pioneer, AKAI, and Arturia are just a few names associated with it. You almost certainly have used JUCE-powered software, whether you’re aware of it or not.

But ROLI aren’t just acquiring a nifty code toolkit for audio plug-in makers. JUCE’s capabilities cover a range of non-audio tasks, too, and include an innovative real-time C++ compiler. And ROLI acquires not just the code, but its creator: Julian Storer will become Head of Software Architecture for ROLI.

What does this mean for JUCE? Well, in the short term, it means more investment. Previously the work of Jules alone, a solitary genius of C++, JUCE will now have a team of people contributing, say ROLI. They will add staff to focus on developing the framework as Jules is named “Editor-in-Chief” of JUCE – a sort of project lead / curator for the tools. For that reason, it’s hard to see this as anything but good news for JUCE developers, for the time being. In fact, Jules is very clear about his ongoing role – and not much changes:

And for the foreseeable future, it’s still going to be me who either writes or approves every line of code that gets into the library. I’m hoping that within a couple of years we’ll have a team of brilliant coders who are all pumping out code that perfectly matches the quality and style of the JUCE codebase. But in the meantime, I’ll be guarding the repository with an iron fist, and nothing will be released until I’ve checked, cleaned and approved it myself. But even in the short-term, by having a team behind the scenes working on features, and with me acting as more of an “editor-in-chief” role to ensure that the code meets the required standard, we’ll be able to be a lot more productive without losing the consistency and style that is so important to JUCE’s success.

Read his full letter on the JUCE forum.

JUCE’s source is already open – most modules are covered by the GPL (v2 or v3, depending). You therefore pay only if you want to release closed-source code (or, given Apple’s restrictions, anything for iOS); commercial licenses are not expensive.

The murkier question is actually how this will evolve at ROLI. The word I immediately heard was “ecosystem.” In the Apple-centered tech world, it seems, everything needs to have an SDK – even new rubber keyboards – so ROLI may hope to please its investors with the move. And that makes some practical sense, too. In order to communicate with software instruments, the Seaboard needs to send high-resolution expression data; ROLI use System Exclusive MIDI. It’s now a no-brainer to wrap that directly into JUCE’s library in the hopes plug-in and software instrument makers bite and add support. What’s interesting about this is that it might skirt the usual chicken-and-egg problem – if adding compatibility is easy enough, instrument makers (always fans of curiosities anyway) may add support before the Seaboard has a big installed base.
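
There’s a mechanical reason SysEx gets used for this: MIDI data bytes only carry 7 bits, so any high-resolution value has to be split into 7-bit chunks. Here’s a minimal sketch of the common 14-bit LSB/MSB scheme – ROLI’s actual message layout isn’t documented here, so treat this as purely illustrative:

```python
# MIDI data bytes must stay below 0x80 (7 bits), so high-resolution values
# are split into 7-bit chunks. This toy shows a common 14-bit LSB/MSB split;
# it is NOT ROLI's actual SysEx format, which is not public here.

def encode_14bit(value):
    """Split a 0..16383 value into two 7-bit MIDI data bytes (lsb, msb)."""
    if not 0 <= value <= 0x3FFF:
        raise ValueError("value outside 14-bit range")
    return value & 0x7F, (value >> 7) & 0x7F

def decode_14bit(lsb, msb):
    """Reassemble the original 14-bit value from its two data bytes."""
    return (msb << 7) | lsb
```

This buys 16,384 steps of resolution instead of the 128 a single data byte allows – the same trick standard MIDI uses for pitch bend.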

In fact, that in turn could be good for makers of other alternative instruments, too; ROLI are working on standardizing methods for this kind of data.

Of course, that still depends on people liking the Seaboard instrument. And ROLI say their ambitions don’t stop at futuristic pianos. CEO/Founder Roland Lamb (that’s no relation to Roland the Japanese musical instrument company) paints a broader picture:

“At ROLI, our larger vision is to reshape interaction. To do that, we need to transform every technological link from where people create inputs, to where computers create outputs. That’s a tall order, but acquiring and investing in JUCE is our most significant step towards this challenging goal since we invented and developed the Seaboard.”

Now, I frequently make just this argument in public appearances – that musical instrument interaction can lead to innovative design solutions in other areas. But in this case, I don’t know what it means. Whatever ROLI is working on beyond the Seaboard, we haven’t seen it. At least as far as JUCE goes, they’re building competency and assets that enable human/hardware communication in the software frameworks themselves. We’ll just have to see how that’s applied.

In coming weeks, I hope to look a little closer at how the Seaboard and other similar devices handle communication, and whether there’s a chance to make that area smarter for a variety of hardware. And I’ll also let you know what more we learn about ROLI and JUCE.

If you’re a developer, there are ongoing JUCE meetups planned, so you can check out the framework yourself or meet other existing users. These aren’t limited to London – Paris and Helsinki were recent tour stops, with Berlin (at Ableton HQ, no less) upcoming.

JUCE Meetup

JUCE site

JUCE Acquisition Press Release

https://www.roli.com/

The post ROLI, Makers of Seaboard Instrument, Just Bought The Leading C++ Audio Framework appeared first on Create Digital Music.

Bitwig Studio 1.1 Adds Lots of Details; Can It Escape Ableton’s Shadow?


Bitwig Studio has been quietly plugging along in development, adding loads of engineering improvements under the hood. Version 1.1 is the largest update yet.

Here’s the summary of the update:
https://www.bitwig.com/en/bitwig_1up

Minus the marketing speak, the exhaustive changelog (here, for Mac): http://www.bitwig.com/dl/8/mac

It’s an impressively long list of enhancements, though most of the changes are fixes and improved hardware and plug-in compatibility. For instance, you can side-chain VSTs, and there are new options for routing multiband effects and multi-channel plug-ins.

The big enhancements:

  • More routing for audio and MIDI
  • VST multi-out sidechain support and multi-channel effect hosts
  • Updated controller API
  • New Audio Receiver, Note Receiver, Note MOD, De-Esser devices

And you can genuinely deactivate devices to save CPU, something Live lacks, as well as take advantage of “true latency compensation.” (Whatever that means – that will require some testing. Bitwig’s explanation of what makes their tech different is that it actually works. That sounds good.) Some other features play catch-up with Ableton Live – tap tempo and crossfader, modulation and timestretching. But it’s a welcome update.

And as we’ve tangled recently with Ableton Live’s spotty controller support and the weird gymnastics required to make controllers work, it’s worth scolding Ableton for not making their hardware integration work better. Bitwig, with a sliver of the development resources and very little incentive for hardware makers to add support, is quickly adding controller support simply because it’s easier to do. This could be a model for Ableton, particularly as its user base and the diversity of hardware for it continue to expand.

If you’re on desktop Linux (yes, I’m sure someone is out there), the choice is easy: Bitwig is a terrific, fun piece of software with lots of rather nice effects and instruments. It’s fast and ready to go out of the box. And there isn’t much else native on Linux that can say that (Renoise springs to mind, but it has a very different workflow).

The problem is, if you’re not on Linux, I still can’t work out a reason I’d recommend Bitwig Studio over other tools. And, of course, the elephant in the room is Ableton Live. I reviewed Bitwig Studio for Keyboard, and found plenty to like. But the problem was, Bitwig Studio has competition, and as I wrote for that magazine, to me it comes a bit too close to Live to be able to differentiate itself:

While Bitwig Studio improves upon Live’s editing functionality, it replicates even some of Live’s shortcomings: There’s no surround audio support, nor any track comping facility…

Compared to Ableton Live Standard, Bitwig Studio’s offerings are fairly comparable. But at that price, Ableton gives you 11GB of sound content, more complete plug-in support, more extensive routing, more controller compatibility, and video support.

Even that doesn’t really sum up the whole story for me – controller compatibility, at least, is a narrowing advantage for Ableton because of Bitwig’s superb scripting facility. The problem is, if you want a change from Live, you want software that works differently (Cubase and the like for traditional DAWs, Maschine for drum machine workflows, Renoise for a tracker, and so on). If you want a Live-style workflow, you’re likely to choose Ableton Live.

You can read my whole review for Keyboard and see if you reach a different conclusion, though:

Bitwig Studio reviewed [Keyboard Magazine]

And as I’ve seen a handful of people start to use Bitwig, I’d be curious to hear from you: what was the deal maker that convinced you to switch? What is Bitwig offering you that rivals don’t?

The DAW market remains a competitive one, and it’s clear there’s always room for choice. Bitwig’s development pace at least continues moving forward. But I’ll keep repeating: I’d like to see this tool stray from its rivals.

And for me, the main thing is: once that review was done, I found myself returning to Ableton Live and finishing tracks, and not Bitwig Studio – even if I sometimes cursed Live’s shortcomings. Even if that is simply force of habit, it seems I’ll need more to kick that habit. And, unfortunately, you can’t judge software based on its forthcoming features.


Need a Tempo, Fast? Grab Your Web Browser


It’s the little things. This just got added to my bookmarks; maybe it’ll be on yours.

http://www.tempotap.com

Press the spacebar repeatedly, and you get an accurate BPM count for a song. It’s actually useful for learning to recognize BPM, too, if you listen frequently while at your computer. (The old trick was to watch the second hand of your wristwatch – counting two beats per tick works out to 120 bpm – but that requires an analog wristwatch.)
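
The math behind a tap-tempo tool is nothing more than the average interval between taps, converted to beats per minute. A minimal sketch (not tempotap.com’s actual code):

```python
def bpm_from_taps(times):
    """Estimate tempo from tap timestamps (in seconds): average the
    intervals between consecutive taps, then convert to beats per minute."""
    if len(times) < 2:
        return None  # need at least two taps to get one interval
    intervals = [b - a for a, b in zip(times, times[1:])]
    return 60.0 / (sum(intervals) / len(intervals))
```

Tapping every half second – `bpm_from_taps([0.0, 0.5, 1.0, 1.5])` – yields 120.0. More taps smooth out timing jitter, which is why these tools get more accurate the longer you tap.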

And yes, surely this will be one of the first native Apple Watch tools when its native SDK ships next year.

Thanks to the wondrous Esther Duijn, DJ friend, and her Facebook page.


AKAI Rhythm Wolf Review: Analog Doesn’t Always Mean Better


Dance music, it seems, has come full circle. Techno’s roots began with affordable oddball hardware, abused into new genres. And now, the appetite for cheap little boxes that make grooves is back.

But do “cheap” and “analog” always make for a winner? Well, not necessarily. But let’s find out why.

This is the AKAI Rhythm Wolf. When we first saw it, it was clear people would want it, because physically, visually, it has the things you’d want – even before you get to the accessible price. There are velocity-sensitive pads for each part, coupled (cleverly) with x0x-style buttons for simple 16-step patterns (which you can chain into a 32-step pattern). There are the requisite controls for changing step length, and recording step sequences or performances. There’s ample I/O – proper MIDI in/out and thru (plus MIDI over USB), gate trigger in and out, and separate mono outputs for the synth and drums.

This is the body of a usable drum machine. It has all the controls you’d want, in a form factor people are bound to love.


Take the Rhythm Wolf out of the box, and until you plug it in, you’re still likely to be reasonably happy. The knob caps are somewhat unpleasant to tweak, but the build is otherwise great for a $200 piece of kit. It’s heavy and solid, with a metal case, and oddly has a bigger footprint than a new Elektron. The pads work just fine (even if AKAI now calls any pads “MPC”), and the triggers respond with a satisfying click. I’ll even excuse the strange plastic faux-wood end-caps; they don’t do any harm.

Of course, you’re probably not going to use a drum machine without plugging it in, and that’s where things suddenly go very wrong. There’s no gentle way to put this. It sounds not good.

The bass drum is fine. You can pitch it down and get something fairly workable. Personally, I feel it doesn’t compare with other offerings, and now thanks to KORG, that includes the dirt-cheap volca BEATS with its floor-rattling kick. But it’s usable.

The snare drum and percussion are, in my opinion … not fine. At best, they resemble sort of white noise generators; at worst, they’re flat and unusable. The hats and cymbals are even worse: noise-y, clang-y affairs that qualify as what they are, but only just.

The upshot is, even with the full range of knobs, I wasn’t able to find a set of variations I’d want to record. The sounds would be perfectly fine for a $50 boutique kit, but not for something that wears an AKAI badge.


Then there’s the bass synth, which seems to crowd out an already-flawed design with something you don’t really want. The single-oscillator affair starts out sounding thin, so you think, perhaps, you’ll add the filter and it’ll improve. But the more you add resonance, the quieter the filter gets – the opposite of what you want. As mediocre as the drum parts are, the synth seems like wasted space.

Adding insult to injury, the so-called “Howl” knob just seems like overdriving gain: things get a bit louder and distorted and mostly you add copious amounts of background noise.

In fact, it’s all so shocking, I had a hard time telling anyone about it. If I described it without actually playing it, they didn’t believe me. I put off sitting down to write this review, partly because I knew I’d need to make some sound samples or video, and I really didn’t want to. (It doesn’t help that I’m my own editor, and no one from AKAI called wondering where the loaner was.)

But, then in my inbox, I got this video. And sure enough, it says exactly what I already said. I was relieved: maybe I’m not crazy.

The creator, space travel made easy, doesn’t mince words:

The other week I got myself an Akai Rhythm Wolf.
This was such a promising little drum machine but was one of the biggest let-downs I’ve ever had with hardware.
Watch the video to find out why I thought it so piss-poor.

http://www.spacetravelmadeeasy.com

He gets further than I did: I was already so unhappy with the synth that I didn’t bother to think about whether it would hold its tune across its multi-octave range. But it doesn’t. Yikes.

All of this is a shame, because recording patterns on the Rhythm Wolf is itself great fun, combining live performances on the pads with the x0x steps at the bottom. There actually isn’t another standalone drum machine I can think of for less than a grand that has this apparently-obvious combination of controls. The one kit that could compete is, ironically, Akai’s own MPC. But they seem to have exited the standalone drum workstation market apart from entry-level stuff.

As it happens, you can use the pads and step triggers to transmit MIDI, both over USB and the onboard MIDI ports. But in another disadvantage to AKAI’s analog approach, you can’t use any of the 21 onboard parameter knobs to transmit MIDI – they’re analog only. So I don’t think you’d buy the Rhythm Wolf as a drum machine.

The box is fairly big and potentially hackable, so it’s conceivable someone would mod this into something, well, better. But they might just build their own drum machine instead.

Not everyone is unhappy with it. Presumably because it does make some noises, and the patterns are fun to play, and it has knobs, I’ve seen some happier YouTube users, and more power to you. Richard Devine used it as sort of a weird modular source for his Eurorack rig … but then, much of the actual sound is coming from elsewhere.

But I like AKAI – and drum machines, and you – too much to hold back on this review. You could spend your $200 on almost anything else. Any number of digital drum machines will sound radically better. AKAI’s current MPX or XR20 are just sample playback machines, for instance, but they have usable sounds and I think you’d get more use out of them. You might even find someone unloading an old MPC. Or you could buy a controller and work with software (again, even from AKAI).

And KORG’s volca BEATS will go, essentially, unchallenged. It’s not an even-sounding drum machine – sometimes it sounds frankly weird and lo-fi, but at least it does so in an interesting way. If you want drum machine hardware with lots of hands-on controls and you’re on a sub-$350 budget, consider either the MFB 522 or the volca. They’re both great, and the volca should fit just about any budget.

Making hardware is hard. It’s not enough to make something work on paper; it has to sound good, too. And the Rhythm Wolf just seems like a good idea that wasn’t fully executed, or fully finished.

The surreal thing here is the marketing; AKAI is promoting the “fierce 100% analog signal path” of the Rhythm Wolf, perhaps in the hopes that you aren’t actually listening.

I think the Rhythm Wolf is going to be a really big seller, whatever I say. But let’s at least, some of us, declare two things.

First, “analog” does not mean “sounds good.” That does an injustice to the hard, mysterious, and pleasurable work of making great-sounding analog gear, and the many possibilities of digital gear, and that both are routes to solving problems. (Wood is nice; you can also make something poor out of wood, or something beautiful out of plastic or bamboo.)

Second, “cheap” should not mean lowering our standards. Someone could serve me a junior McDonald’s Happy Meal and then punch me in the face. It’d be cheap, but I’d get very angry. This isn’t that bad, but at least feels like a Happy Meal without a free toy, and maybe you should really buy yourself a decent dinner, because you’re a grown-up.


The Rhythm Wolf is worth talking about, because it could have been something great. In fact, I hope AKAI have a second go at this. I wish they had simply gone with digital sound sources combined with hands-on controls – really, there’s no reason that couldn’t work. But whatever they do, I’d love to see AKAI’s name on a drum machine worth endorsing. This just isn’t it, by a longshot.

Anyway, here’s another jam from space travel made easy:

And Richard Devine, going rather wild with this. (I agree with what he says about “analog” gear, just not necessarily that that has to mean analog signal path – it can mean digital instruments, and controllers for software, too, just so long as you’re using your hands.)

http://www.akaipro.com/product/rhythm-wolf


Meet KORG’s New Sample Sequencing volca – And its SDK for Sampling


The KORG volca sample is here – and it’s more open than we thought.

We’ve seen KORG’s affordable, compact, battery-powered volca formula applied to synths (BASS and KEYS) and a drum machine (BEATS). I’m especially partial to the booming kick of the BASS, the sound of the KEYS (which despite the name also works as a bass synth), and the clever touch sequencing interface.

Well, now that KORG has teased the newest addition to the family, we’re learning the details of the volca sample. It’s not a sampler per se – there’s no mic or audio input – but what KORG calls a “sample sequencer.”

We’ll have a unit in to test soon, but my impression is that sample sequencing isn’t a bad thing at all. Sequencing has always been a strong suit for the volca, and here, it’s the main story. Every parameter of a sample is ready to step sequence, from the way the sample is sliced, to its playback speed and amplitude envelope, to pitch.

Additional features:

  • Reverse samples
  • Per-part reverb (ooh)
  • Active step / step jump (for editing steps)
  • “A frequency isolator, which has become a powerful tool in the creation of numerous electronic genres.” Or, um, to make that understandable, there are treble and bass analog filters.
  • Swing
  • Song mode – 16 patterns x 6 songs

That leaves only how to get samples into the volca sample, beyond the 100 samples already built in.

It has exactly the same complement of jacks on the top as the synth and drum machine volcas – sync signal in and out, MIDI in, headphone out, and … nothing else. So, instead, KORG wants you to use an iOS handheld to record samples first. You transfer them into the unit via one of the sync jacks. Initially, that came as a bit of a shock, and judging by comments, at least some of you readers didn’t like the decision much. Frankly, looking at the unit, it looks like there just wasn’t room; KORG dedicated the jacks to their usual function and used up the whole panel on sampling and sequencing controls.


Since then, though, we’ve had two developments that might get your interest back.

First, we’ve seen the iOS app, and it looks really cool. Brace yourself for cute video of designer Tatsuya Takahashi’s kid!

Okay, so the transfer process is a bit of a pain, but cutting samples on the iPhone is convenient, since you can see what you’re doing. It also solves the problem of needing to have a mic handy.

Here’s the surprise second development: KORG is releasing a free SDK for talking to the volca sample:

http://korginc.github.io/volcasample/

Basically, the volca sample’s trick is to encode binary data as an audio signal, in the same way dial-up modems once did. (The technique is QAM – quadrature amplitude modulation – in case you’re interested.) The SDK helps you encode that data yourself. The software gives you several features:

1. You can encode audio samples to transfer – individually or as an entire 16-step sequence.
2. You can manage samples on the unit (delete them individually or delete all of them).
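
The modem-style idea can be sketched in a few lines: map small groups of bits onto distinct states of an audio-rate carrier, and play the result into the sync jack. Here’s a toy 4-QAM (QPSK) modulator – purely illustrative, as KORG’s actual “syro” modulation parameters are not reproduced here:

```python
import math

def toy_modulate(data, carrier_hz=7800.0, sample_rate=31250, samples_per_symbol=16):
    """Toy modem: encode each 2-bit group of the input bytes as one of four
    carrier phases (4-QAM / QPSK). Illustrative only -- the real 'syro'
    library uses its own carrier, rate, and framing, not these values."""
    out = []
    for byte in data:
        for shift in (6, 4, 2, 0):
            symbol = (byte >> shift) & 0b11    # two bits -> one symbol
            phase = symbol * (math.pi / 2)     # four distinct phase states
            for _ in range(samples_per_symbol):
                t = len(out) / sample_rate
                out.append(math.sin(2 * math.pi * carrier_hz * t + phase))
    return out
```

A decoder would do the reverse – correlate the incoming audio against the carrier to recover the phase of each symbol – which is why homebrew decode tools seem entirely plausible even though KORG doesn’t ship one.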

The SDK library is written in C, which means it can be used just about anywhere. I expect an Android app from a volca lover will be one of the first applications. It doesn’t have to stop there, though. You could build interesting sample-generating desktop apps – the KORG site suggests possibilities:

“Auto-slice a song to generate a sample set?
Turn photos of patterns into sequences?
algorithmic sample music generator?
generate random sequence from quantum effects?”


And, oh yeah, you could even make your own sampling hardware with the library, though… if you’re savvy enough to do that, you might just go ahead and make your own sampling hardware.

Speaking of your own hardware, unfortunately there isn’t any decode capability, though I don’t see why someone couldn’t make their own. (QAM decoding is already something that’s widely available.)

What you get in the SDK source:

  • The “syro” library, the bit that does the encoding.
  • A sample project with examples, ready to build with gcc, clang, and Visual Studio 2010 or later.
  • Definitions for patterns.
  • Factory preset / reset data.

So, if someone wants to make a bare-bones sample project for the iOS SDK or Android SDK, for instance, let us know!

The whole project is covered under a BSD license, so highly permissive. Have a look, developers (or, um, Android users who aren’t developers, keep your fingers crossed, start buying beers and nice Christmas presents for your Android dev friends, whatever):

http://korginc.github.io/volcasample/documentation.html

https://github.com/korginc/volcasample

The volca sample is shipping, it seems, in small quantities, but isn’t yet widely available. Stay tuned.

http://www.korg.com/us/products/dj/volca_sample/

Specs:

This is the heart of this beast – sequencing, sequencing, sequencing of … everything, actually, as the list below is identical to the list of sample parameters.

Parameters that can be used with Motion Sequence:
・ Start Point (Playback start location)
・ Length (Playback length)
・ Hi Cut (Cutoff frequency)
・ Speed (Playback speed)
・ Pitch EG Int (Pitch EG depth)
・ Pitch EG Attack (Pitch EG attack time)
・ Pitch EG Decay (Pitch EG Decay time)
・ Amp Level
・ Pan
・ Amp EG Attack (Amp EG Attack time)
・ Amp EG Decay (Amp EG Decay time)

And full specs:

100 sample slots (you can overwrite these)
4 MB (65 seconds) sample memory total (of course, divided across those 100 slots)
31.25 kHz, 16-bit
Digital Reverb
Analogue Isolator
10 parts, 16 steps
Sync In (3.5mm monaural mini jack, Maximum input level: 20V)
Sync Out (3.5mm monaural mini jack, Maximum Output level: 5V)
MIDI IN
10 hour estimated battery life on 6 AA batteries or optional 9V AC adapter
372 g / 13.12 oz. (Excluding batteries)
193 × 115 × 45 mm / 7.60” x 4.53” x 1.77”
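
As a sanity check on the quoted memory figure: 16-bit mono audio takes 2 bytes per sample, so at 31.25 kHz the 65-second claim lands between the two usual readings of “4 MB” (decimal vs. binary megabytes) – the spec is presumably rounded:

```python
# Back-of-envelope check of the volca sample's "4 MB = 65 seconds" spec.
sample_rate = 31_250      # samples per second (31.25 kHz)
bytes_per_sample = 2      # 16-bit mono

secs_decimal = 4_000_000 / bytes_per_sample / sample_rate        # 64.0 s
secs_binary = 4 * 1024 * 1024 / bytes_per_sample / sample_rate   # ~67.1 s
```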


Meet KORG’s New Sample Sequencing volca – And its SDK for Sampling

volcasample

The KORG volca sample is here – and it’s more open than we thought.

We’ve seen KORG’s affordable, compact, battery-powered volca formula applied to synths (BASS and KEYS) and a drum machine (BEATS). I’m especially partial to the booming kick of the BASS, the sound of the KEYS (which despite the name also works as a bass synth), and the clever touch sequencing interface.

Well, now having teased the newest addition to the family, we’re learning about the details of the KORG sample. It’s not a sampler per se – there’s no mic or audio input – but what KORG calls a “sample sequencer.”

We’ll have a unit in to test soon, but my impression is that sample sequencing isn’t a bad thing at all. Sequencing has always been a strong suit for the volca, and here, it’s the main story. Every parameter of a sample is ready to step sequence, from the way the sample is sliced, to its playback speed and amplitude envelope, to pitch.

Additional features:

  • Reverse samples
  • Per-part reverb (ooh)
  • Active step / step jump (for editing steps)
  • “A frequency isolator, which has become a powerful tool in the creation of numerous electronic genres.” Or, um, to make that understandable, there are treble and bass analog filters.
  • Swing
  • Song mode – 16 patterns x 6 songs

That leaves only how to get samples into the volca sample, beyond the 100 samples already built in.

It has exactly the same complement of jacks on the top as the synth and drum machine volcas – sync signal in and out, MIDI in, headphone out, and … nothing else. So, instead, KORG wants you to use an iOS handheld to record samples first. You transfer them into the unit via one of the sync jacks. Initially, that came as a bit of a shock, and judging by comments, at least some of you readers didn’t like the decision much. Frankly, looking at the unit, it looks like like there just wasn’t room; KORG dedicated the jacks to their usual function and used up the whole panel on sampling and sequencing controls.

volcasampletop

Since then, though, we’ve had two developments that might get your interest back.

First, we’ve seen the iOS app, and it looks really cool. Brace yourself for cute video of designer Tatsuya Takahashi’s kid!

Okay, so the transfer process is a bit of a pain, but cutting samples on the iPhone is convenient, since you can see what you’re doing. It also solves the problem of needing to have a mic handy.

Here’s the surprise second development: KORG is releasing a free SDK for talking to the volca sample:

http://korginc.github.io/volcasample/

Basically, the volca sample’s trick is to encode binary data as audio signal, in the same way dial-up modems once did. (The technique is QAM – quadrature amplitude modulation – in case you’re interested.) The SDK helps you encode that data yourself. The software gives you several features:

1. You can encode audio samples to transfer – individually or as an entire 16-step sequence.
2. You can manage samples on the sample (delete them individually or delete all of them).

The SDK and library is written in C, but that means it could be used just about anywhere. I expect an Android app from a volca lover will be one of the first applications. It doesn’t have to stop there, though. You could build interesting sample-generating desktop apps – the KORG site suggests possibilities:

“Auto-slice a song to generate a sample set?
Turn photos of patterns into sequences?
algorithmic sample music generator?
generate random sequence from quantum effects?”

connect

And, oh yeah, you could even make your own sampling hardware with the library, though… if you’re savvy enough to do that, you might just go ahead and make your own sampling hardware.

Speaking of your own hardware, unfortunately there isn’t any decode capability, though I don’t see why someone couldn’t make their own. (QAM decoding is already something that’s widely available.)

What you get in the SDK source:

  • The “syro” library, the bit that does the encoding.
  • A project sample with examples, ready to build with gcc, clang, and Visual Studio 2010 or later.
  • Definitions for patterns.
  • Factory preset / reset data.

So, if someone wants to make a bare-bones sample project for the iOS SDK or Android SDK, for instance, let us know!

The whole project is covered under a BSD license, which is highly permissive. Have a look, developers (or, um, Android users who aren’t developers, keep your fingers crossed, start buying beers and nice Christmas presents for your Android dev friends, whatever):

http://korginc.github.io/volcasample/documentation.html

https://github.com/korginc/volcasample

The volca sample seems to be shipping in small quantities, but isn’t yet widely available. Stay tuned.

http://www.korg.com/us/products/dj/volca_sample/

Specs:

This is the heart of this beast – sequencing, sequencing, sequencing of … everything, actually, as the list below is identical to the list of sample parameters.

Parameters that can be used with Motion Sequence:
・ Start Point (Playback start location)
・ Length (Playback length)
・ Hi Cut (Cutoff frequency)
・ Speed (Playback speed)
・ Pitch EG Int (Pitch EG depth)
・ Pitch EG Attack (Pitch EG attack time)
・ Pitch EG Decay (Pitch EG Decay time)
・ Amp Level
・ Pan
・ Amp EG Attack (Amp EG Attack time)
・ Amp EG Decay (Amp EG Decay time)

And full specs:

100 sample slots (you can overwrite these)
4 MB (65 seconds) sample memory total (of course, divided across those 100 slots)
31.25 kHz, 16-bit
Digital Reverb
Analogue Isolator
10 parts, 16 steps
Sync In (3.5mm monaural mini jack, Maximum input level: 20V)
Sync Out (3.5mm monaural mini jack, Maximum output level: 5V)
MIDI IN
10 hour estimated battery life on 6 AA batteries or optional 9V AC adapter
372 g / 13.12 oz. (Excluding batteries)
193 × 115 × 45 mm / 7.60” x 4.53” x 1.77”

The post Meet KORG’s New Sample Sequencing volca – And its SDK for Sampling appeared first on Create Digital Music.

Hack Biology, Body, and Music: Open Call for MusicMakers Hacklab

hacklab1

For the past two winters, CDM has joined with Berlin’s CTM Festival to invite musical participants to grow beyond themselves. Working in freshly-composed collaborations, they’ve built new performances in a matter of days, then presented them to the world – as of last year, in a public, live show.

This year, they will work even more deeply inside themselves, finding the interfaces between body and music, biology and sound.

And that means we’re inviting everyone from choreographers to neuroscientists to apply, as much as musicians and code makers. Playing with the CTM theme of “Un Tune,” the project will this year encourage participants to imagine biology as a sonic system, consider sound in its bodily effects, and otherwise connect embodiment to physical reality.

Joining me is Baja California-born Leslie Garcia, a terrific sound artist and maker who has already gone from participating in last year’s lab to organizing her own in her native Mexico. You can glimpse her below looking like a space sorceress of some kind, and hear the collaborative work she made last winter.

The 2014 hacklab's output, all wired up for the performance. Photo: CTM Festival.

We don’t know what people will propose or what meaning they will find in that theme, but it might include stuff like this:

  • Human sensors (Galvanic Skin Response, EKG, EEG, eye movement, blood pressure, respiration, and mechanomyogram or MMG)
  • Biofeedback systems
  • Movement sensors
  • Electrical stimulation
  • Aural and optical stimulation
  • Data sonification
  • Novel physical controllers
  • Dance performance, breathing techniques, and other physical practices

Or as CTM puts it, they will navigate “the spectrum between bio-acoustics, field recordings, ambient, flicker, brainwave entrainment, binaural beats, biofeedback, psychoacoustics, neo-psychedelia, hypnotic repetition, noise, and sub-bass vibrations” to both address and disturb the body.

And what I do know is that the most effective work will come out of new collaborations, new unexpected partnerships across fields – because that’s been my consistent experience with past hacklabs, as with the spatial sound project we did last month in Amsterdam or the 2014 CTM collaboration, which I’ll document a little later this month.

Leslie is a great example of that. She initiated a collaboration with Stefano Testa at the hacklab in January/February. They produced, in a few short days, “Symphony for Small Machine” – exactly the sort of irreverent project we hoped would spring out of the week. Have a look:

http://lessnullvoid.cc/content/2014/02/symphony-for-small-machine-2/

And check out lots of other work talking to plants and bacteria and harnessing free and open source software:

http://lessnullvoid.cc/content/projects/

LeslieGarcia2

Apply to this year’s open call:

http://www.ctm-festival.de/festival-2015/transfer/musicmakers-hacklab/

December 12 is the deadline – yep, I know what a big part of my Christmas season will be about this year, and I couldn’t be more pleased.

And hope to see some of you in Berlin.

Follow the MusicMakers project on Facebook – I’ve been lax about updating this page, but will do so again!

The post Hack Biology, Body, and Music: Open Call for MusicMakers Hacklab appeared first on Create Digital Music.

Spatial Sound, in Play: Watch What Hackers Did in One Weekend with 4DSOUND

The impressive, futuristic physical form of the 4DSOUND system. Photo: George Schroll.

You can’t really hear the results of the Spatial Audio Hacklab sitting at your computer – by definition, you had to be there to take in the experience of sounds projected in space. But you’ll probably feel the enthusiasm and imagination of its participants.

And that’s why it’s a pleasure to share the video documentation, produced for 4DSOUND by a team from FIBER – the Dutch audiovisual events and art platform – at Amsterdam Dance Event last month. In unleashing a diverse group of artist-experimenters on 4DSOUND’s unique speaker installation, we got a chance to create a sonic playground, a laboratory experiment in what people could do. It’s tough to overstate just how much those participants brought to the table – or just how little time they had. Time actually working on the system was measured in minutes, forcing artists to improvise quickly under reality-television levels of pressure. (Only, unlike TV show challenges, everyone kept their nerves and wits.)

4DSOUND Spatial Sound Hack Lab at ADE 2014 from FIBER on Vimeo.

To get through it, these artists focused on collaboration, finding ways of connecting essential skills. In the days and weeks leading up to Amsterdam, many of them fired missives back and forth wondering how best to exploit the spatial sound system. They then worked intensively to devise something they could try quickly, forming spontaneous teams to combine resources. They did in minutes what resident artists had done in days. With input from Nicholas Bougaïeff from Liine and a whole lot of guidance and assistance from the entire 4DSOUND team, in particular founder Paul Oomen, the gathered hackers managed to get a whole lot up and running. No project went silent; with tweaks, everything worked.

This wasn’t merely a show of coding prowess or engineering. Each project found some way to involve musical practice and sound; each was a “jam” as well as a “hack.” That’s something different from the typical shape of hack days; these projects weren’t just demos. They were given a voice – sometimes literally singing, rather beautifully.

It was what we hoped for, and more. The Spatial Audio Hacklab was a cooperation between myself and CDM, FIBER, the 4DSOUND team who built the system and its software, and Liine (makers of the Lemur app), with support from Ableton (and their talented developer relations liaison, Céline Dedaj). It followed a week of intensive artist projects on the 4D from Max Cooper, Vladislav Delay, Stimming, and even myself with Robert Lippok (on a bill with raster-noton labelmates Grischa Lichtenberger, Senking, and Frank Bretschneider). But it was also a kind of contrast to those performances and their accompanying master classes, one where any “what if?” question was game.

If days of programming hadn’t already convinced you, by the rapid-fire hacklab it was clear: a spatial sound system can be more than just a clever effect. It can feel like a venue, as unlimited in possibilities as the stage of a concert hall. It’s an empty box to be filled, in the best possible way.

Photo (during the raster-noton showcase) by Fanni Fazakas.

Hearing fully-formed improvised music was especially gratifying. But for me, perhaps the most promising result was the Processing-powered game of Pong whipped up by a coder team, because it validated the accuracy of listeners’ perception. Using sensors that had previously tracked singers, it involved players scurrying around to bang a sonic ball back and forth. (That, too, makes it a nice three-dimensional counterpart of InvisiBall, a similar sonic game by Hakan Lidbo – and, tellingly, that game has been played even by blind people. Thinking out of the box can mean inventing things that aren’t limited to the usual population and audience.)

Developers from Ableton have hacked a game of pong using spatial sound. #4dsound #ade14

A video posted by Peter Kirn (@pkirn)

Participant Will Copps also shared some of his thoughts after the experience, alongside lots of positive feedback we got from the hacklabbers:

The documentary does a good job of capturing the event, but is also incredibly impressive in capturing the various hack lab ideas and distilling them into a thirteen-minute piece… I’m very impressed. I know we could have spent more than thirteen minutes talking about each individual project.

As for additional thoughts, I’d stress that there was a clear benefit to just being there and hearing the system. While many of the ideas we had were specifically for the 4D system, hearing sounds through it challenges us to incorporate as many of the benefits of spatialization as we can into our current practices. The most obvious takeaway for me, at least to implement imminently, is exploring the possibilities of binaural recording. I may not be able to easily create immersion by placing firework recordings all around the listener like we did at 4D, but now it just seems foolish not to explore ways I can try.

It’s incredible when exposure to a technological development like this alters your perception of your practice in such a specific way. We’ve been fortunate to take lifetimes of those developments for granted in our art: the ability to record, to mix in stereo, to capture colors in video. I recognize that this distinct spatialization of sound is still a ways away from being as widely implemented as the developments I just mentioned. But to be one of the first to experience something like it is unreal. Comparing 4D to those developments may sound like (and may be) hyperbole… but I don’t think any of us knows for sure. And that’s perhaps what is most exciting.

Ana Laura Rincon, aka the DJ Hyperaktivist, echoed similar sentiments:

Some experiences in life must be lived so you really understand; in this case, the 4DSOUND is an experience that must be heard so you can have a real idea of the possibilities the system offers. And so we did in the Hacklab. The 4DSOUND is a very forward-thinking idea that allows you to control sound and its dimensions in the space, giving you the possibility of creating natural environments and also of making music with the space. The system gives you the possibility of listening to sound without being confined to a speaker or a stereo system; you hear sound as it is generated outside in everyday life. Having had the opportunity to participate in the HackLab, sharing ideas with amazing musicians, developers, and just great people, and being able to use the system and really understand how it works gave me a really good insight into the vast possibilities it has, and how the exploration of these is just starting. There is a new window, a very big door, just opened for the future exploration of music and sound and how it can be experienced and perceived – look out.

My feeling, looking through the applications, was that something would happen just by getting people together in the same room. Spatial audio is a nascent but evolving field, with a scattering of different systems. I’ll talk more about that soon, but the short version: they’re all rather different, from wave field synthesis to Dolby Atmos to the unique sound and physical presence of 4DSOUND. Amsterdam was a chance to reach a human critical mass thinking about the problems, accelerating progress by connecting people. It brought into one arena some of the most passionate researchers and artists, from those who have written doctoral dissertations on the topic to those curious to explore spatialization for the first time.

And that alone was transformative – as if everyone began dreaming in color for the first time, and then we all got to share the dream.

It’ll be terrific to see what’s next.

In the meanwhile, we can reveal the next playground: we’ll be in Berlin at CTM Festival making Tuning Machines in our third hacklab collaboration with that event in as many years, no doubt with a new group of collaborative spirits.

You can also have a listen to some audio impressions of the event, recorded in binaural sound.

Recording by Sero (one of the participants)
https://m.soundcloud.com/bassik

More on the film:

fiber-space.nl
codedmatters.nl
Credits:
Production – Jessica Dreu
Camera / editing – Tanja Busking
Interviews – Dayna Casey
FIBER Facebook: facebook.com/pages/Fiber-Festival/169577819730388
Music:
Frank Bretschneider – Phased Out, Oscillation, Funkalogic, Monoplex, Multiplex, Panback, Blue, Prussian
Moskitoo – Wham & Whammy (Frank Bretschneider remix)
Robert Lippok & Peter Kirn – Live performance at 4DSOUND during ADE 16-10-2014
All other sound was recorded during the Hacklab itself.
Coded Matter(s) is supported by the Creative Industry Fund NL

The post Spatial Sound, in Play: Watch What Hackers Did in One Weekend with 4DSOUND appeared first on Create Digital Music.

Lemur is Now on Android, Supports Cabled Connections; You Want This Touch App

lemurlemur

Before there even was an iPad or iPhone, there was Lemur. The touch-based controller device was arguably the first-ever consumer multi-touch hardware. Early adopters connected the pricey smart display via Ethernet to a computer, and wowed friends with flying faders and bouncing balls and new ways of doing everything from manipulating spatial audio to playing instruments.

Then, the iPad arrived, and Lemur had a new life as an iOS-only app. For many of us, it alone is reason enough to own an Apple tablet.

But Apple tablets are pricey. Android tablets are cheap. And Android tablets are increasingly available in more sizes. So, maybe you want to run Lemur on Android. Maybe it’s your only tablet. Or maybe you’re just worried that now your live performance set depends on an iPad mini, and if it dies, you’re out hundreds more – so Android is an appealing backup.

Well, now, Lemur has come to Android. It wasn’t easy; it required lots of additional testing because of the variety of devices out there and weird peculiarities of making Android development work properly. (Disclosure: I was one of Lemur’s testers, and was gratified when it suddenly started working on my Nexus 7, which is a fairly excellent low-cost device.)

But now it’s here. And it’s fantastic. Nick from Liine came to our monthly mobile music app meetup in Berlin and showed us just how easy it is to code your own custom objects using the canvas – more on that soon. But combine that with a stable app for hosting your own creations, and Lemur is simply indispensable. It’s US$24.99 on the Google Play store.

Oh, and one more thing: wires.

Yes, sure, part of the appeal of tablets is wireless control. That allows you to walk around a performance venue, for instance, whilst controlling sounds and mixing. But in live situations, it sure is nice to avoid wifi connection problems and depend on a conventional wire. On both Android and iOS, this requires a special driver – at least if you want to connect directly via USB. But there’s already a free and open source Mac driver for Android, and it works really nicely with Lemur:

http://joshuawise.com/horndis
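Once a wired network link is up, controlling Lemur (or listening to it) is just a matter of sending OSC messages over UDP to the tablet. Here’s a minimal sketch using only Python’s standard library – the destination address, port, and control path are placeholders, not defaults you can rely on; use whatever the USB interface and your Lemur OSC settings actually give you.

```python
import socket
import struct

def osc_message(address: str, value: float) -> bytes:
    """Build a minimal OSC message carrying one float argument.
    OSC strings are null-terminated and padded to 4-byte boundaries;
    floats are 32-bit big-endian."""
    def pad(raw: bytes) -> bytes:
        return raw + b"\x00" * (4 - len(raw) % 4)
    return pad(address.encode("ascii")) + pad(b",f") + struct.pack(">f", value)

# Send a fader value over the wired link. The address, port, and
# control path are placeholders -- substitute your tablet's IP and the
# OSC input port configured in Lemur's settings.
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.sendto(osc_message("/Fader/x", 0.75), ("127.0.0.1", 8000))
```

A dedicated OSC library does the same thing with less ceremony, but the packet really is this simple – which is part of why wired OSC is so dependable on stage.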

I am absolutely going to start carrying both my Nexus 7 and my iPad mini – I now never have to worry that one tablet will die or the iPad WiFi will decide to stop working in the middle of a show. I might even put them in different bags. You know – redundancy. And for Android lovers, this is great news. (They’ve been getting a handful of excellent apps lately, which, while nowhere near the iOS ecosystem, still mean you can get a lot of use out of an Android tablet. But that’s a story for another day.)

More on Lemur:

Lemur

And grab it from the Google Play store:

Lemur @ Google Play

The post Lemur is Now on Android, Supports Cabled Connections; You Want This Touch App appeared first on Create Digital Music.

DIY Tool Max 7 Arrives; Here Are The Best New Features

max7collage

Being “software about nothing” isn’t easy.

Max has for years been a favored choice of musicians and artists who want to make their own tools for their work. But in recent years, it’s been on a journey to make that environment ever more accessible to a wider audience.

The aim: let beginners and advanced users alike work faster, producing tools that work better. Okay, those are easy goals to set – a bit like all of us declaring we’re going to “get in better shape” a few weeks from now, on New Year’s Eve. But Max 7 somehow brings together a range of plotlines from years of development and evolution.

This is very quickly looking like the visual toolkit for media that Max has always longed to be.

What’s new?

Too long/ didn’t read? Here’s the quick version:

  • Patch faster and prettier with a new UI, styles, new browser, and loads of shortcuts.
  • Elastic, pitch/tempo-independent audio everywhere, syncable everywhere.
  • Loads of pitch correction and harmonization and pitch effects, straight out of the box.
  • Use Max for Live patches directly – even without a copy of Ableton.
  • Use video and audio media directly, without having to make your own player.
  • Use VST and AU plug-ins seamlessly.
  • Make video and 3D patches more quickly, with physics and easy playback, all taking advantage of hardware acceleration on your GPU.

And what’s new in detail, as well as why it matters:

There’s a new UI. You’ll notice this first – gray toolbars ring the window. Somehow, they do so without looking drab. Objects are on the top, where they’ve been since the beginning, but now media files (like audio) are on the left, view options are on the bottom, and the inspector, help, and other contextual information are on the right. (That’s all customizable, but so far everyone I’ve talked to has been happy with the default.) Max also recalls your work environment, so you can pick up where you left off – and it recovers from crashes, too.

My favorite feature: you can theme UIs with consistent styles.

You can browse and collect files easily. Clearly inspired by browsers like the one in Ableton Live, there’s a file browser for quick access to your content, and you can collect files in folders and the like from anywhere and drop them in. This isn’t the first version of Max with such a feature, but it’s the first one that makes managing files effortless. And you can tag and search in detail.

Reuse your patches as snippets. Got a set of objects you reuse a lot – like, for instance, one that plays back audio or manages a list? Select it, save it as a snippet, and then find it in that new browser. There are lots of example snippets, too, interestingly pulled directly from the Max help documentation – so no more will you need to head to the help documentation and recreate what’s there.

Elastic audio – manipulate audio in pitch and time, separately. The Max and Pd family has been able to manipulate pitch and time independently for as long as it has had audio capabilities – provided you do the patching to make that work yourself. What’s changed is that it’s built in. Audio objects now support these features out of the box, without patching. There’s a new higher-quality “stretch~” object that sounds the way we expect software to sound out of the box. And all of this interacts with a global transport.

This is of course useful to those making Max for Live creations for Ableton Live, as it means you can build in audio manipulation and everything will sync to a Live set. But it means something else: you might wind up building your own performance tool without even touching Ableton Live.
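Cycling ’74 hasn’t published what stretch~ does internally, but the basic trick of changing duration without changing pitch is decades old. Here’s a naive overlap-add sketch in Python – illustrative only; real stretchers (stretch~ presumably included) add phase alignment and transient handling on top of this idea.

```python
import math

def ola_stretch(samples, factor, win=512):
    """Naive overlap-add time stretch: read analysis windows at one hop
    size, write them out at another. factor=2.0 doubles the duration
    while leaving pitch alone. Not Max's algorithm -- just the core
    idea behind pitch/time-independent playback."""
    hop_out = win // 4
    hop_in = max(1, int(hop_out / factor))
    hann = [0.5 - 0.5 * math.cos(2 * math.pi * n / win) for n in range(win)]
    out = [0.0] * (int(len(samples) * factor) + win)
    read = write = 0
    while read + win <= len(samples) and write + win <= len(out):
        for n in range(win):
            out[write + n] += samples[read + n] * hann[n]
        read += hop_in
        write += hop_out
    return out[:int(len(samples) * factor)]
```

Run on a sine wave, this produces a longer signal at the same frequency – exactly the behavior you now get from Max objects for free, synced to the transport.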

There’s a bunch of modular stuff included. Can’t afford a big rack of modules? No room for hardware and cables? The BEAP modules are now included, which let you combine software modules in much the same way you would physical ones.

And if you do use hardware modular gear, you can output the same signals via a compatible audio interface and control that hardware, too.

Use media. Media files now have their own players, with clip triggering and playlist creation. Making a VJ tool, for instance, should now be stunningly easy, and working with audio playback (in combination with elastic audio) ridiculously straightforward.

Use plug-ins. VST, AU, straight out of the box, with the ability to customize which parameters you see. Max is now a powerful plug-in host – made more so by its ability to save and recall patcher and plug-in parameters in “Snapshots.”

Use Max for Live devices directly. No copy-and-paste – you can now open Max for Live patches even without owning a copy of Ableton Live. That’s another reason patchers may wind up just building their own performance environment.

To get you started, a bunch of classic Max for Live devices are included (like Pluggo), plus a whole mess of pitch shifters and players, vocoders, and elastic audio instrument/effects.

AutoTune the patch. retune~ is an intonation / harmonization object – what’s known colloquially to the rest of the world as “AutoTuning” (apologies to Antares for abusing their trademark). T-Pain, Max/MSP edition? There’s also a correction/harmonization device for Max for Live.
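retune~’s analysis and resynthesis are the hard part, but the decision at its core – which note should this be? – is simple to state. Here’s a sketch of snapping a detected frequency to the nearest equal-tempered semitone; the function name and A4 reference are illustrative, not the object’s actual parameters.

```python
import math

def snap_to_semitone(freq_hz, a4=440.0):
    """Quantize a detected frequency to the nearest equal-tempered
    semitone -- the decision at the heart of any pitch corrector.
    A real corrector then has to shift the signal to the target
    without mangling formants or timing."""
    midi = 69 + 12 * math.log2(freq_hz / a4)    # Hz -> fractional MIDI note
    return a4 * 2 ** ((round(midi) - 69) / 12)  # nearest note -> Hz

print(snap_to_semitone(450.0))  # → 440.0: a slightly sharp A4 snaps to concert A
```

Constrain the `round()` step to a scale instead of the full chromatic set and you have the harmonization half of the story, too.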

Use the Web. You can now embed the open source Chromium browser engine inside your patches (the same engine that powers Google Chrome), and use data from the Internet.

This is the version of Max visual users have been waiting for. I’ve saved some of the best for last. Jitter has long been the somewhat ugly stepsister of the audio stuff in Max, and it’s lately been showing its age. No more. Max 7 looks like it’s worth the wait. This is at last a version of Max that’s fully hardware-accelerated for video and easy to use.

  • Video playback and capture are rendered directly to 3D hardware, rather than getting bottlenecked on the CPU – and you can decode on your GPU. (Mac only for now, but Windows support is coming.)
  • A single jit.world object consolidates the stuff you need to output to the display – complete with physics and OpenGL 3D/texture support.
  • Video input and output syncs automatically, rather than requiring separate metro objects.
  • Render shadows.
  • Make your own objects in gen.
  • Use a massively-enhanced collection of live video modules (which interface with those modular objects, too).

Patch faster. Keyboard shortcuts quickly create and connect objects (at last!), you can zoom around the cursor with the ‘z’ key, and quickly apply transforms to patches.

No more runtime. The unlicensed version of Max opens patches and edits them; it just doesn’t save.

Available now. 30-day free trial, upgrades are US$149, and you can now subscribe for $9.99 a month.

The only thing you’ll be waiting on for a little while is, unfortunately, full Ableton Live support; no timeline on when Ableton users will see Max 7. I know they’ll want it with all those elastic audio features.

Videos:

The competition?

I don’t think there’s any doubt: Max is now the patching environment to beat, by far. Nothing else comes close to this breadth – and now, nothing comes close to this usability.

That doesn’t mean I think other environments should try to be Max.

For most music users, the big rivals remain Native Instruments’ Reaktor and Max’s own cousin Pd, and there’s still room for them.

Reaktor may be a lot narrower than Max, but it’s also still a terrific tool if you just want to build an instrument or effect quickly. It also has some rather nice granular tools. It is looking long in the tooth, though, and I’d like to see Native Instruments treat it more seriously. It’s hard to put work and time into Reaktor patching knowing that Native Instruments won’t provide you any sort of runtime to share your work – anyone wanting to use it has to go buy Reaktor or Komplete. And good as those instrument/effects tools are, Reaktor’s media management for samples is appallingly bad. In fact, until Reaktor fixes that area, it’s hard not to imagine some people jumping ship for Max – especially with built-in plug-in support.

Pure Data (Pd) is a different animal. Max 7 is another reminder of why we need Pd. Even though both originally leapt from the mind of Miller Puckette, they’ve evolved into radically distinct beasts. It’s a bit like coming back to the Galapagos Islands after twenty years and finding one of your turtles has evolved into a space dragon while the other one became a washer/dryer. It isn’t just that Pd is free and open source software; it’s that it’s engineered in a way that makes that an advantage. Pd is tiny, even as Max is huge. That makes Pd well-suited to embedding in apps and games, mobile and desktop, software and hardware, where Max can do nothing of the sort. Max 7 is also, however, a painful reminder that Pd needs a new UI. Maybe it should also be minimal (a Web-powered UI would sure make sense). But the time is now. (And a desktop Pd really wants plug-in hosting, but that’s another story.) My dream at the moment would certainly be that each becomes effortless enough to use that I can spend some proper quality time in both.

There are, of course, many other ways to solve problems in code and patchers, so I won’t go into all of them. But then, we live in a wonderful time for DIY creative tools. It doesn’t have to be a time drain, and it doesn’t have to be painful.

It turns out that software about nothing can be for more or less anyone.

We’ll look more in detail at Max soon; I’ve got some interviews lined up for when I’m back around Berlin.

Happy patching, people.

https://cycling74.com/max7/

The post DIY Tool Max 7 Arrives; Here Are The Best New Features appeared first on Create Digital Music.