Automated techno: Eternal Flow generates dance music for you

Techno, without all those pesky human producers? Petr Serkin’s Eternal Flow is a generative radio station – and even a portable device – able to make endless techno and deep house variations automatically.

You can run a simple version of Eternal Flow right in your browser:

https://eternal-flow.ru/

Recorded sessions are available on a SoundCloud account, as well:

But maybe the most interesting way to run this is in a self-contained portable device. It’s like a never-ending iPod of … well, kind of generic-sounding techno and deep house, depending on mode. Here’s a look at how it works; there’s no voiceover, but you can turn on subtitles for additional explanation:

There are real-world applications here: apart from interesting live performance scenarios, think workout dance music that follows you as you run, for example.

I talked to Moscow-based artist Petr about how this works. (And yeah, he has his own deep house-tinged record label, too.)

“I used to make deep and techno for a long period of time,” he tells us, “so I have some production patterns.” Basically, take those existing patterns, add some randomization, and instead of linear playback, you get material generated over a longer duration with additional variation.

There was more work involved, too. While the first version used one-shot snippets, “later I coded my own synth engine,” Petr tells us. That means the synthesized sounds save on sample space in the mobile version.

It’s important to note this isn’t machine learning – it’s good, old-fashioned generative music. And in fact this is something you could apply to your own work: instead of just keeping loads and loads of fixed patterns for a live set, you can use randomization and other rules to create more variation on the fly, freeing you up to play other parts live or make your recorded music less repetitive.
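
To make that idea concrete, here’s a minimal sketch (plain C++, not Petr’s actual engine) of old-fashioned generative variation: a fixed production pattern that gets probabilistic changes on each pass instead of looping verbatim. The pattern and probabilities are invented for illustration.

```cpp
#include <cstdio>
#include <random>

int main() {
    const int steps = 16;
    // A fixed "production pattern": four-on-the-floor kick, off-beat hats.
    const bool kickTemplate[steps] = {1,0,0,0, 1,0,0,0, 1,0,0,0, 1,0,0,0};
    const bool hatTemplate[steps]  = {0,0,1,0, 0,0,1,0, 0,0,1,0, 0,0,1,1};

    std::mt19937 rng(std::random_device{}());
    std::bernoulli_distribution dropHat(0.15);   // occasionally drop a hat
    std::bernoulli_distribution addGhost(0.10);  // occasionally add a ghost hit

    // Each bar keeps the anchor (the kick) but varies the detail,
    // so playback never repeats exactly.
    for (int bar = 0; bar < 4; ++bar) {
        std::printf("bar %d: ", bar + 1);
        for (int s = 0; s < steps; ++s) {
            bool kick = kickTemplate[s];
            bool hat  = hatTemplate[s] ? !dropHat(rng) : addGhost(rng);
            std::printf("%c", kick ? 'K' : (hat ? 'h' : '.'));
        }
        std::printf("\n");
    }
}
```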

And this also points to a simple fact: machine learning doesn’t always generate the best results. We’ve had generative music algorithms for many years – ones that produce results from simple rules. Laurie Spiegel’s ground-breaking Music Mouse, considered by many to be the first-ever software synth, worked on this concept. So, too, did the Brian Eno – Peter Chilvers “Bloom,” which applied the same notion to ambient generative software and became the first major generative/never-ending iPhone music app.

By contrast, the death metal radio station I talked about last week works well partly because its results sound so raunchy and chaotic. But it doesn’t necessarily suit dance music as well. Just because neural network-based machine learning algorithms are in vogue right now doesn’t mean they will generate convincing musical results.

I suspect that generative music will freely mix these approaches, particularly as developers become more familiar with them.

But from the perspective of a human composer, this is an interesting exercise not because it puts you out of a job, but because it helps you experiment with thinking about the structures and rules of your own musical ideas.

And, hey, if you’re tired of having to stay in the studio or DJ booth and not get to dance, this could solve that, too.

More:

http://eternal-flow.ru/

Now ‘AI’ takes on writing death metal, country music hits, more

Thanks to new media artist and researcher Helena Nikonole for the tip!


Max TV: go inside Max 8’s wonders with these videos

Max 8 – and by extension the latest Max for Live – offers some serious powers to build your own sonic and visual stuff. So let’s tune in some videos to learn more.

The major revolution in Max 8 – and a reason to look again at Max even if you’ve lapsed for some years – is really MC. It’s “multichannel,” so it has significance in things like multichannel speaker arrays and spatial audio. But even that doesn’t do it justice. By transforming the architecture of how Max treats multiple, well, things, you get a freedom in sketching new sonic and instrumental ideas that’s unprecedented in almost any environment. (SuperCollider’s bus and instance system is capable of some feats, for example, but it isn’t as broad or intuitive as this.)

The best way to have a look at that is via a video from Ableton Loop, where the creators of the tech talk through how it works and why it’s significant.

Description [via C74’s blog]:

In this presentation, Cycling ’74’s CEO and founder David Zicarelli and Content Specialist Tom Hall introduce us to MC – a new multi-channel audio programming system in Max 8.

MC unlocks immense sonic complexity with simple patching. David and Tom demonstrate techniques for generating rich and interesting soundscapes that they discovered during MC’s development. The video presentation touches on the psychoacoustics behind our recognition of multiple sources in an audio stream, and demonstrates how to use these insights in both musical and sound design work.

The patches aren’t all ready for download (hmm, some cleanup work being done?), but watch this space.

If that’s got you in the learning mood, there are now a number of great video tutorials up for Max 8 to get you started. (That said, I also recommend the newly expanded documentation in Max 8 for more at-your-own-pace learning, though this is nice for some feature highlights.)

dude837 has an aptly-titled “delicious” tutorial series covering both musical and visual techniques – and the dude abides, skipping directly to the coolest sound stuff and best eye candy.

Yes to all of these:

There’s a more step-by-step set of tutorials by dearjohnreed (including the basics of installation, so really hand-holding from step one):

For developers, the best thing about Max 8 is likely the new Node features. And this means the possibility of wiring musical inventions into the Internet as well as applying some JavaScript and Node.js chops to anything else you want to build. Our friends at C74 have the hook-up on that:

Suffice to say that also could mean some interesting creations running inside Ableton Live.

It’s not a tutorial, but on the visual side, Vizzie is also a major breakthrough in the software:

That’s a lot of looking at screens, so let’s close out with some musical inspiration – and a reminder of why doing this learning can pay off later. Here’s Second Woman, a favorite of mine, at LA’s excellent Bl__K Noise series:


Exploring machine learning for music, live: Gamma_LAB AI

AI in music is as big a buzzword as in other fields. So now’s the time to put it to the test – to reconnect to history, human practice, and context, and see what holds up. That’s the goal of the Gamma_LAB AI in St. Petersburg next month. An open call is running now.

Machine learning has trended so fast that there are disconnects between genres and specializations. Mathematicians or coders may get going on ideas without checking whether they work with musicians or composers or musicologists – and the other way around.

I’m excited to join as one of the hosts with Gamma_LAB AI partly because it brings together all those possible disciplines, puts international participants in an intensive laboratory, and then shares the results in one of the summer’s biggest festivals for new electronic music and media. We’ll make some of those connections because those people will finally be together in one room, and eventually on one live stage. That investigation can be critical, skeptical, and can diverge from clichéd techniques – the environment is wide open and packed with skills from an array of disciplines.

Natalia Fuchs, co-producer of GAMMA Festival, founder of ARTYPICAL and media art historian, is curating Gamma_LAB AI. The lab will run in May in St. Petersburg, with an open call due this Monday April 8 (hurry!), and then there will be a full AI stage as part of Gamma Festival.

Image: Helena Nikonole.

Invited participants will delve into three genres – baroque, jazz, and techno. The idea is not just a bunch of mangled generative compositions, but a broad look at how machine learning could analyze deep international archives of material in these fields, and how the work might be used creatively as an instrument or improviser. We expect participants with backgrounds in musicianship and composition as well as in coding, mathematics, and engineering – plus people in between, along with researchers and theorists.

To guide that work, we’re setting up collaboration and confrontation between historical approaches and today’s bleeding-edge computational work. Media artist Helena Nikonole is the Lab’s conceptual artist. She brings her interest in connecting AI with new aesthetics and media, having exhibited everywhere from ZKM to CTM to Garage Museum of Contemporary Art. Dr. Konstantin Yakovlev joins as one of Russia’s leading mathematicians and computer scientists working at the forefront of AI, machine learning, and smart robotics – meaning we’re guaranteed some of the top technical talent. (Warning: crash course likely.)

Russia has an extraordinarily rich culture of artistic and engineering exploration, in AI as elsewhere. Some of that work was seen recently at Berlin’s CTM Festival exhibition. Helena for her part has created work that, among others, applies machine learning to unraveling the structure of birdsong (with a bird-human translator perhaps on the horizon), and hacked into Internet-connected CCTV cameras and voice synthesis to meld machine learning-generated sacred texts with … well, some guys trapped in an elevator. See below:

Bird Language

deus X mchn

I’m humbled to get to work with them and in one of the world’s great musical cities, because I hope we also get to see how these new models relate to older ones, and where gaps lie in music theory and computation. (We’re including some musicians/composers with serious background in these fields, and some rich archives that haven’t been approached like this ever before.)

I came from a musicology background, so I see in so-called “AI” a chance to take musicology and theory closer to the music, not further away. Google recently presented a Bach “doodle” – more on that soon, in fact – with the goal of replicating some details of Bach’s composition. To those of us with a music theory background, some of the challenges of doing that are familiar: analyzing music is different from composing it, even for the human mind. To me, part of why it’s important to attempt working in this field is that there’s a lot to learn from mistakes and failures.

It’s not so much that you’re making a robo-Bach – any more than your baroque theory class will turn all the students into honorary members of the extended Bach family. (Send your CV to your local Lutheran church!) It’s a chance to find new possibilities in this history we might not have seen before. And it lets us test (and break) our ideas about how music works with larger sets of data – say, all of Bach’s cantatas at once, or a set of jazz transcriptions, or a library full of nothing but different kick drums, if you like. This isn’t so much about testing “AI,” whatever you want that to mean – it’s a way to push our human understanding to its limits.

Oh yes, and we’ll definitely be pushing our own human limits – in a fun way, I’m sure.

A small group of participants will be involved in the heart of St. Petersburg from May 11-22, with time to investigate and collaborate, plus inputs (including at the massive Planetarium No. 1).

But where this gets really interesting – and do expect to follow along here on CDM – is that we will wind up in July with an AI mainstage at the globally celebrated Gamma Festival. Artist participants will create their own AI-inspired audiovisual performances and improvisations, acoustic and electronic hybrids, and new live scenarios. The finalists will be invited to the festival and fully covered in terms of expenses.

So just as I’ve gotten to do with partners at CTM Festival (and recently with southeast Asia’s Nusasonic), we’re making the ultimate laboratory experiment in front of a live audience. Research, make, rave, repeat.

The open call deadline is fast approaching if you think you might want to participate.

Facebook event
http://gammafestival.ru/english

To apply:
Participation at GAMMA_LAB AI is free for the selected candidates. Send a letter of intent and portfolio to aiworkshop@artypical.com by end of day April 8, 2019. Participants have to bring personal computers of sufficient capacity to work on their projects during the Laboratory. Transportation and living expenses during the Laboratory are paid by the participants themselves. The organizers provide visa support, as well as the travel of the best Lab participants to GAMMA festival in July.


dadamachines doppler is a new platform for open music hardware

The new doppler board promises to meld the power of FPGA brains with microcontrollers and the accessibility of environments like Arduino. And the founder is so confident that could lead to new stuff, he’s making a “label” to help share your ideas.

doppler is a small, 39EUR development board packing both an ARM microcontroller and an FPGA. It could be the basis of music controllers, effects, synths – anything you can make run on those chips.

If this appeals to you, we’ve even got a CDM-exclusive giveaway for inventors with ideas. (Now, end users, this may all go over your head but … rest assured the upshot for you should be, down the road, more cool toys to play with. Tinkerers, developers, and people with a dangerous appetite for building things – read on.)

But first – why include an FPGA on a development board for music?

The pitch for FPGA

The FPGA is a powerful but rarified circuit. The idea is irresistible: imagine a circuit that could be anything you want to be, rewired as easily as software. That’s kind of what an FPGA is – it’s a big bundle of programmable logic blocks and memory blocks. You get all of that computational power at comparatively low cost, with the flexibility to adapt to a number of tasks. The upshot of this is, you get something that performs like dedicated, custom-designed hardware, but that can be configured on the fly – and with terrific real-time performance.

This works well for music and audio applications, because FPGAs do work in “close to the metal” high performance contexts. And we’ve even seen them used in some music gear. (Teenage Engineering was an early FPGA adopter, with the OP-1.) The challenge has always been configuring this hardware for use, which could easily scare off even some hardware developers.

For more on why open FPGA development is cool, here’s a (nerdy) slide deck: https://fpga.dev/oshug.pdf

Now, all of what I’ve just said is a little hard to envision. Wouldn’t it be great if instead of that abstract description, you could fire up the Arduino development environment, upload some cool audio code, and have it running on an FPGA?

doppler, on a breadboard connected to other stuff so it starts to get more musically useful. Future modules could also make this easier.

doppler: easier audio FPGA

doppler takes that FPGA power, and combines it with the ease of working with environments like Arduino. It’s a chewing gum-sized board with both a familiar ARM microcontroller and an FPGA. This board is bare-bones – you just get USB – but the development tools have been set up for you, and you can slap this on a breadboard and add your own additions (MIDI, audio I/O).

The project is led by Johannes Lohbihler, dadamachines founder, along with engineer and artist Sven Braun.

dadamachines also plan some other modules to make it easier to add other stuff us music folks might like. Want audio in and out? A mic preamp? MIDI connections? A display? Controls? Those could be breakout boards, and it seems Johannes and dadamachines are open to ideas for what you most want. (In the meantime, of course, you can lay out your own stuff, but these premade modules could save time when prototyping.)

Full specs of the tiny, core starter board:

– 120 MHz ARM Cortex-M4F MCU with FPU, 512KB Flash (Microchip ATSAMD51G19A)
– FPGA: 5000 LUT, 1Mbit RAM, 6 DSP cores, OSC, PLL (Lattice ICE40UP5K)
– Arduino IDE compatible
– Breadboard friendly (DIL48)
– Micro USB
– Power over USB or external via pin headers
– VCC 3.5V … 5.5V
– All GPIO pins have 3.3V logic level
– 1 LED connected to SAMD51
– 4 x 4 LED matrix (connected to FPGA)
– 2 user buttons (connected to FPGA)
– AREF solder jumper
– I2C (needs external pullup), SPI, QSPI pins
– 2 DAC pins, 10 ADC pins
– Full open source toolchain
– SWD programming pin headers
– Double press reset to enter the bootloader
– UF2 bootloader with firmware upload via simple USB-stick mode

See also the quickstart PDF.

I’ve focused on the FPGA powers here, because those are the new ones, but the microcontroller side brings compatibility with existing libraries that let you combine some very useful features.

So, for instance, there’s USB host capability, which allows connecting all sorts of input devices, USB MIDI gadgets, and gaming controllers. See:

https://github.com/gdsports/USB_Host_Library_SAMD

That frees up the FPGA to do audio only. Flip it around the other way, and you can use the microcontroller for audio, while the FPGA does … something else. The Teensy audio library will work on this chip, too – meaning a bunch of Adafruit instructional content will be useful here:

https://learn.adafruit.com/synthesizer-design-tool?view=all

https://github.com/adafruit/Audio/
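
As a rough idea of what that Teensy-style audio approach looks like in code, here’s a minimal sketch: a sine oscillator routed to the on-chip DAC. The object names follow the published Teensy Audio Library API; whether the DAC output object works unchanged on doppler’s SAMD51 is an assumption to verify against the Adafruit fork.

```cpp
#include <Audio.h>

AudioSynthWaveform   osc;                       // basic waveform generator
AudioOutputAnalog    dac;                       // on-chip DAC output (availability on SAMD51 assumed)
AudioConnection      patchCord(osc, 0, dac, 0); // route osc output 0 -> DAC input 0

void setup() {
  AudioMemory(12);              // reserve audio processing buffers
  osc.begin(WAVEFORM_SINE);     // sine wave
  osc.frequency(220);           // A3
  osc.amplitude(0.5);
}

void loop() {
  // audio runs in the background; nothing needed here
}
```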

doppler is fully open source hardware, with open firmware and code samples, so it’s designed to be easy to integrate into a finished product – even one you might sell commercially.

The software examples for now are mainly limited to configuring and using the board, so you’ll still need to bring your own code for doing something useful. But you can add the doppler as an Arduino library and access even the FPGA from inside the Arduino environment, which expands this to a far wider range of developers.

Look, ma, Arduino!

In a few steps, you can get up and running with the development environment, on any OS. You’ll be blinking lights and even using a 4×4 matrix of lights to show characters, just as easily as you would on an Arduino board – only you’re using an FPGA.
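
The very first step really is the classic Arduino blink – something like the sketch below, which assumes the board’s SAMD51-connected LED is exposed as LED_BUILTIN (the actual pin mapping may differ; check the doppler docs).

```cpp
// Minimal blink sketch for the SAMD51 side of the board.
// LED_BUILTIN is an assumption here, not taken from the doppler documentation.
void setup() {
  pinMode(LED_BUILTIN, OUTPUT);     // configure the LED pin as an output
}

void loop() {
  digitalWrite(LED_BUILTIN, HIGH);  // LED on
  delay(500);                       // wait half a second
  digitalWrite(LED_BUILTIN, LOW);   // LED off
  delay(500);
}
```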

Getting to that stage is lunch break stuff if you’ve at least worked with Arduino before:

https://github.com/dadamachines/doppler

Dig into the firmware, and you can see, for instance, some I/O and a synth working. (This is in progress, it seems, but you get the idea.)

https://github.com/dadamachines/doppler-FPGA-firmware

And lest you think this is going to be something esoteric for experienced embedded hardware developers, part of the reason it’s so accessible is that Johannes is working with Sven Braun. Sven is among other things the developer of iOS apps zmors synth and modular – so you get something that’s friendly to app developers.

doppler in production…

A label for hardware, platform for working together

Johannes tells us there’s more to this than just tossing an open source board out into the world – dadamachines is also inviting collaborations. They’ve made doppler a kind of calling card for working together, as well as a starting point for building new hardware ideas, and are suggesting Berlin-based dadamachines as a “label” – a platform to develop and release those ideas as products.

There are already some cool, familiar faces playing with these boards – think Meng Qi, Tom Whitwell of Music Thing, and Ornament & Crime.

Johannes and his dadamachines already have a proven hardware track record, bringing a product from Kickstarter funding to manufacturing with the automat. It’s an affordable device that makes it easy to connect physical, “robotic” outputs (like solenoids and motors). (New hardware, a software update and more are planned for that, too, by the way.) And of course, part of what you get in doing that kind of hardware is a lot of amassed experience.

We’ve seen fertile open platforms before – Arduino and Raspberry Pi have each created their own ecosystems of both hardware and education. But this suggests working even more closely – pooling space, time, manufacturing, distribution, and knowledge together.

This might be skipping a few steps – even relatively experienced developers may want to see what they can do with this dev board first. But it’s an interesting long-range goal that Johannes has in mind.

Want your own doppler; got ideas?

We have five doppler boards to give away to interested CDM readers.

Just tell dadamachines what you want to make, or connect, or use, and email that idea to:

cdm@dadamachines.com

dadamachines will pick five winners to get a free board sent to them. (Johannes says he wants to do this by lottery, but I’ve said if there are five really especially good ideas or submissions, he should… override the randomness.)

And stay tuned here, as I hope to bring you some more stuff with this soon.

For more:

https://forum.dadamachines.com/

https://dadamachines.com/product/doppler/


Azure Kinect promises new motion, tracking for art

Gamers’ interest may come and go, but artists are always exploring the potential of computer vision for expression. Microsoft this month has resurrected the Kinect, albeit in pricey, limited form. Let’s fit it to the family tree.

Time flies: musicians and electronic artists have now had access to readily available computer vision since the turn of this century. That initially looked like webcams, paired with libraries like the free OpenCV (still a viable option), and later repurposed gaming devices from Sony and Microsoft platforms.

And then came Kinect. Kinect was a darling of live visual projects and art installations, because of its relatively sophisticated skeletal tracking and various artist-friendly developer tools.

History time

A full ten years ago, I was writing about the Microsoft project and its interactions, in its first iteration as the pre-release Project Natal. Xbox 360 support followed in 2010, Windows support in 2012 – while digital artists quickly hacked in Mac (and rudimentary Linux) support. Artists in music and digital media quickly followed.

For those of you just joining us, Kinect shines infrared light at a scene, and takes an infrared image (so it can work irrespective of other lighting) which it converts into a 3D depth map of the scene. From that depth image, Microsoft’s software can also track the skeleton image of one or two people, which lets you respond to the movement of bodies. Microsoft and partner PrimeSense weren’t the only ones to try this scheme, but they were the ones to ship the most units and attract the most developers.
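
In code terms, the raw material artists work with is essentially a frame of per-pixel depth values. A rough, self-contained sketch of the typical first step – find the nearest object and turn it into a normalized control value – might look like this (the frame here is synthetic; a real app would pull it from a Kinect SDK or a libfreenect/OpenNI-style driver):

```cpp
#include <cstdint>
#include <cstdio>
#include <vector>

int main() {
    const int width = 640, height = 480;
    // Synthetic depth frame in millimeters: everything 4 m away...
    std::vector<std::uint16_t> depthMM(width * height, 4000);
    depthMM[320 + 240 * width] = 1200;            // ...except one "hand" at 1.2 m

    std::uint16_t nearest = UINT16_MAX;
    for (std::uint16_t d : depthMM)
        if (d > 0 && d < nearest) nearest = d;    // 0 usually means "no reading"

    // Map 0.5 m .. 4.0 m onto a 0..1 control value (closer = higher).
    const float minMM = 500.0f, maxMM = 4000.0f;
    float clamped = nearest < minMM ? minMM : (nearest > maxMM ? maxMM : nearest);
    float control = 1.0f - (clamped - minMM) / (maxMM - minMM);
    std::printf("nearest point: %u mm -> control %.2f\n", nearest, control);
}
```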

We’re now on the third major revision of the camera hardware.

2010: Original Kinect for Xbox 360. The original. Proprietary connector with breakout to USB and power. These devices are far more common, as they were cheaper and shipped more widely. Despite the name, they do work with desktop systems via open drivers.

2012: Kinect for Windows. Looks and works almost identically to Kinect for 360, with some minor differences (near mode).

Raw use of depth maps and the like yielded countless music videos, and the skeletal tracking yielded even more numerous – and typically awkward – “wave your hands around to play the music” examples.

Here’s me with a quick demo for the TED organization, preceded by some discussion of why I think gesture matters. It’s… slightly embarrassing, only in that it was produced on an extremely tight schedule, and I think the creative exploration of what I was saying about gesture just wasn’t ready yet. (Not only had I not quite caught up, but camera tech like what Microsoft is shipping this year is far better suited to the task than the original Kinect camera was.) But the points I’m making here have some fresh meaning for me now.

2013: Kinect for Xbox One. Here’s where things got more interesting – because of a major hardware upgrade, these cameras are far more effective at tracking and yield greater performance.

  • Active IR tracking in the dark
  • Wider field of vision
  • 6 skeletons (people) instead of two
  • More tracking features, with additional joints and creepier features like heart rate and facial expression
  • 1080p color camera
  • Faster performance/throughput (which was key to more expressive results)

Kinect One, the second camera (confusing!), definitely allowed more expressive applications. One high point for me was the simple but utterly effective work of Chris Milk and team, “The Treachery of Sanctuary.”

And then it ended. Microsoft unbundled the camera from Xbox One, meaning developers couldn’t count on gamers owning the hardware, and quietly discontinued the last camera at the end of October 2017.

Everything old is new again

I have mixed feelings – as I’m sure you do – about these cameras, even with the later results on Kinect One. For gaming, the devices were abandoned – by gamers, by developers, and by Microsoft as the company ditched the Xbox strategy. (Parallel work at Sony didn’t fare much better.)

It’s hard to keep up with consumer expectations. By implying “computer vision,” any such technology has to compete with your own brain – and your own brain is really, really, really good. “Sensors” and “computation” are all merged in organic harmony, allowing you to rapidly detect the tiniest nuance. You can read a poker player’s tell in an instant, while Kinect will lose the ability to recognize that your leg is attached to your body. Microsoft launched Project Natal talking about seeing a ball and kicking a ball, but… you can do that with a real ball, and you really can’t do that with a camera, so they quite literally got off on the wrong foot.

It’s not just gaming, either. On the art side, the very potential of these cameras to make the same demos over and over again – yet another magic mirror – might well be their downfall.

So why am I even bothering to write this?

Simple: the existing, state-of-the-art Kinect One camera is now available on the used market for well under a hundred bucks – for less than the cost of a mid-range computer mouse. Microsoft’s gaming business whims are your budget buy. The computers to process that data are faster and cheaper. And the software is more mature.

So while digital art has long been driven by novelty … who cares? Actual music and art making requires practice and maturity of both tools and artist. It takes time. So, oddly, while creative specialists were ahead of the curve on these sorts of devices, those same communities may well do their real innovating in the lagging, mature phase of the same technology.

And oh yeah – the next generation looks very powerful.

Kinect: The Next Generation

Let’s get the bad news out of the way first: the new Kinect is both more expensive ($400) and less available (launching only in the US and China… in June). Ugh. And that continues Microsoft’s trend here of starting with general purpose hardware for mass audiences and working up to … wait, working up to increasingly expensive hardware for smaller and smaller groups of developers.

That is definitely backwards from how this is normally meant to work.

But the good news here is unexpected. Kinect was lost, and now is found.

The safe bet was that Microsoft would just abandon Kinect after the gaming failure. But to the company’s credit, they’ve pressed on, with some clear interest in letting developers, researchers, and artists decide what this thing is really for. Smart move: those folks often come up with inspiration that doesn’t fit the demands of the gaming industry.

So now Kinect is back, dubbed Azure Kinect – Microsoft is also hell-bent on turning Azure “cloud services” into a catch-all solution for all things, everywhere.

And the hardware looks … well, kind of amazing. It might be described as a first post-smartphone device. Say what? Well, now that smartphones have largely finalized their sensing capabilities, they’ve oddly left the arena open to other tech defining new areas.

For a really good write-up, you’ll want to read this great run-down:


All you need to know on Azure Kinect
[The Ghost Howls, a VR/tech blog, see also a detailed run-down of HoloLens 2 which also just came out]

Here are the highlights, though. Azure Kinect is the child of Kinect and HoloLens. It’s a VR-era sensor, but standalone – which is perfect for performance and art.

Fundamentally, the formula is the same – depth camera, conventional RGB camera, some microphones, additional sensors. But now you get more sensing capabilities and substantially beefed-up image processing.

  • 1MP depth camera (not 640×480) – straight off of HoloLens 2, Microsoft’s augmented reality platform
  • Two modes: wide and narrow field of view
  • 4K RGB camera (with standard USB camera operation)
  • 7-microphone array
  • Gyroscope + accelerometer

And it connects either by USB-C (which can also be used for power) or as a standalone camera with a “cloud connection.” (You know, I’m pretty sure that means it has a wifi radio, but oddly all the tech reporters who talked to Microsoft bought the “cloud” buzzword and no one says so outright. I’ll double-check.)

Also, now Microsoft supports both Windows and Linux. (Ubuntu 18.04 + OpenGL v 4.4).

Downers: 30 fps operation, limited range.

Something something, hospitals or assembly lines, Azure services, something that looks like an IBM / Cisco ad:

That in itself is interesting. Artists using the same thing as gamers sort of … didn’t work well. But artists using the same tool as an assembly line is something new.

And here’s the best part for live performance and interaction design – you can freely combine as many cameras as you want, and sync them without any weird tricks.

All in all, this looks like it might be the best networked camera, full stop, let alone best for tracking, depth sensing, and other applications. And Microsoft are planning special SDKs for the sensor, body tracking, vision, and speech.

Also, the fact that it doesn’t plug into an Xbox is a feature, not a bug to me – it means Microsoft are finally focusing on the more innovative, experimental uses of these cameras.

So don’t write off Kinect now. In fact, with Kinect One so cheap, it might be worth picking one up and trying Microsoft’s own SDK just for practice.

Azure Kinect DK preorder / product page

aka.ms/kinectdocs


Turbocharge KORG’s Prologue synth with Sinevibes

Synth hot-rodding? Earlier this year, KORG introduced the notion of their synth as an extensible platform, by adding an SDK for their Prologue polysynth. The only question was what developers would do with it – and now we get one answer.

Sinevibes, the small shop of Ukrainian developer Artemiy Pavlov, has been known for clever, elegant Mac plug-ins (even if there’s a lot more to the catalog). But Artemiy has decided to embrace hardware as one of the first developers for KORG’s Prologue synth. And the results are unique and lovely, letting you outfit KORG’s instrument with new, spectrally satisfying waveshaping oscillators.

Basically, it’s a plug-in for your hardware.

Here’s what you get:

Juicy, edgy wavetables are the order of the day. Specs from Sinevibes:

  • Two sine oscillators with variable balance, frequency ratio and beating frequency.
  • Five different waveshaping algorithms with continuously variable curve complexity.
  • Built-in lag filters for noise-free, ultra-smooth parameter adjustment and modulation (see the sketch after this list).
  • Built-in envelope generator with widely adjustable attack and decay times (1 ms to 10 s).
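
For the curious, a “lag filter” in this sense is just a one-pole smoother on parameter values, so abrupt knob or modulation jumps don’t produce zipper noise. A generic sketch of the idea (standard DSP, not Sinevibes’ actual implementation):

```cpp
#include <cstdio>

struct LagFilter {
    float state = 0.0f;
    float coeff = 0.0f;

    // timeMs: roughly how long the smoothed value takes to settle.
    void setTime(float timeMs, float sampleRate) {
        // classic one-pole coefficient: larger time -> slower response
        coeff = 1.0f - 1.0f / (0.001f * timeMs * sampleRate + 1.0f);
    }
    float process(float target) {
        state = target + coeff * (state - target);  // glide toward the target value
        return state;
    }
};

int main() {
    LagFilter lag;
    lag.setTime(20.0f, 48000.0f);      // ~20 ms smoothing at 48 kHz
    for (int i = 0; i < 5; ++i)        // parameter jumps from 0 to 1; output ramps smoothly
        std::printf("%f\n", lag.process(1.0f));
}
```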

Check it out, or buy it for just US$29, with full manual and example patches:

http://www.sinevibes.com/korgturbo/

It’s interesting – we live in a music tech industry that benefits from small scale and diversity. Now, this model is well known from Apple’s App Store, but it hasn’t necessarily been a no-brainer for independent music developers. So, instead, we see creative music engineers developing for Eurorack (which frees them of the burden of making complete enclosures and power supplies, and lets them interoperate with an ecology of a bunch of manufacturers). Or we see them continuing to treat plug-in development as a way to pay for their time – especially with new opportunities like those afforded by software modular environments. And now KORG are in the game with hardware plug-ins.

What’s changed in part is the expectation of reducing development overhead but targeting more varied platforms. So you might make a plug-in for a software modular (VCV Rack, Cherry Audio Voltage Modular), and port it as a Rack Extension for Reason, and then ship the same algorithm for use on KORG’s hardware – or some other combination.

It’s encouraging, though, that in a world where consolidation rules, music tech remains weird and fragmented. A company like KORG will ship a lot of synths – but it’s great that they might also support a tiny or one-person developer, by letting their users customize their instrument to their liking. And it means you get a Prologue that might be different than someone else’s.

Previously:

KORG has a polyphonic Prologue synth – and it’s programmable

KORG are about to unveil their DIY Prologue boards for synth hacking


Reason 10.3 will improve VST performance – here’s how

VST brings more choice to Reason, but more support demands, too. Here’s an update on how Propellerhead are optimizing Reason to bring plug-in performance in line with what users expect.

For years, Reason was a walled-off garden. Propellerhead resisted supporting third-party plug-ins, and when they did, introduced their own native Rack Extensions technology for supporting them. That enables more integrated workflows, better user experience, greater stability, and easier installation and updates than a format like VST or AU allows.

But hey, we have a lot of VSTs we want to run inside Reason, engineering arguments be damned. And so Propellerhead finally listened to users, delivering support for VST effects and instruments on Mac and Windows in Reason 9.5. (Currently only VST2 plug-ins are supported, not VST3.)

Propellerhead have been working on improving stability and performance continuously since then. Reason 10.3 is a much-anticipated update, because it addresses a significant performance issue with VST plug-ins – without disrupting one of the things that makes Reason’s native devices work well.

The bad news is, 10.3 is delayed.

The good news is, it works really well. It puts Reason on par with other DAWs as far as VST performance goes. That’s a big deal to Reason users, simply because in many other ways Reason is unlike other DAWs.

I met with Propellerhead engineers yesterday in Stockholm, including Mattias Häggström Gerdt (product manager for Reason). We got to discuss the issue, their whole development effort, and get hands-on with their alpha version.

Why this took a while

Okay, first, some technical discussion. “Real time” is actually not a thing in digital hardware and software. The illusion of a system working in real time is created by buffering – using very small windows of time to pass audio information, so small that the results seem instantaneous to the user.

There’s a buffer size you set for your audio interface – this one you may already know about. But software also has internal buffers for processing, hidden from the user. In a modular environment, you really want this buffer to be as small as possible, so that patching and processing feel responsive – just as they would if you were using analog hardware. Reason accordingly has an internal buffer of 64 frames to do just that (at 44.1 kHz, that’s roughly 1.5 ms). That means without any interruptions to your audio stream, you can patch and repatch and tweak and play to your heart’s content.

Here’s the catch: some plug-in developers prefer larger buffers (higher latency) by design, in order to reduce CPU consumption – even though their plug-ins technically work in Reason’s small-buffer environment. This is common in plug-ins where ultra-low-latency internal processing isn’t as important. But running inside Reason, that approach adds strain to your CPU. Some users won’t notice anything, because they don’t use these plug-ins or use fewer instances of them. But some will see their machine run out of CPU resources faster in Reason than in other DAWs. The result: the same plug-in setup you used in another DAW will make Reason sputter, which is of course not what you want.

Another catch: if you have ever tried adjusting the audio buffer size on your interface to reduce CPU usage, in this case, that won’t help. So users encountering this issue are left frustrated.

This is a fixable problem. You give those plug-ins larger buffers when they demand them, while Reason and its devices continue to work as they always have. It’s just there’s a lot of work going back through all the rest of Reason’s code to adjust for the change. And like a lot of coding work, that takes time. Adding more people doesn’t necessarily even speed this up, either. (Ever tried adding more people to a kitchen to “speed up” cooking dinner? Like that.)
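
Conceptually – and this is only an illustrative sketch of the general technique, not Propellerhead’s code – the “big batch” approach looks like an adapter that keeps collecting the host’s small 64-frame blocks and only calls the plug-in once a larger block has accumulated, trading a little latency for fewer, bigger processing calls:

```cpp
#include <cstddef>
#include <vector>

// Hypothetical plug-in interface for the sketch: processes n frames in place.
struct PluginLike {
    virtual void process(float* buffer, std::size_t n) = 0;
    virtual ~PluginLike() = default;
};

class BigBatchAdapter {
public:
    // bigBlock must be a multiple of the host's small block size in this sketch.
    BigBatchAdapter(PluginLike& p, std::size_t bigBlock)
        : plugin(p), big(bigBlock), inBuf(bigBlock, 0.0f), outBuf(bigBlock, 0.0f) {}

    // Called by the host once per small (e.g. 64-frame) internal block.
    void processSmallBlock(float* io, std::size_t n) {
        for (std::size_t i = 0; i < n; ++i) {
            float in = io[i];
            io[i] = outBuf[fill + i];   // hand back audio rendered in the previous big batch
            inBuf[fill + i] = in;       // queue new input for the next big batch
        }
        fill += n;
        if (fill == big) {              // enough input collected: run the plug-in once
            plugin.process(inBuf.data(), big);
            outBuf.swap(inBuf);
            fill = 0;
        }
    }

private:
    PluginLike& plugin;
    std::size_t big;
    std::size_t fill = 0;
    std::vector<float> inBuf, outBuf;
};
```

The cost is that the plug-in’s output arrives one big block late – which is why this targets plug-ins where ultra-low latency isn’t critical, while Reason’s own devices keep running at the small block size.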

When it’s done, existing Reason users won’t notice anything. But users of the affected plug-ins will see big performance gains.

What to expect when it ships

I sat with the engineers looking at an alpha and we measured CPU usage. The results by plug-in are what you might expect.

We worked with three plug-ins by way of example – charts are here. With Izotope Ozone 7, there’s a massive gain in the new build. That makes sense – a mastering plug-in isn’t so concerned about low latency performance. With Xfer Records Serum, there’s almost none. Native Instruments’ Massive is somewhere in between. These are just typical examples – many other plug-ins will also fall along this range.

Native Instruments’ Massive gets a marginal but significant performance boost. Left: before. Right: after.

iZotope’s Ozone is a more dramatic example. Stack some instances of this mastering-focused plug-in, and you can max out the CPU quickly in Reason. (left) But in the Reason 10.3 alpha, you can see the “big batch” approach resolves that performance issue. (right)

Those graphs are from the Mac version, but the OS in this case doesn’t really matter.

The fix is coming to the public. The alpha is not something you want to run; it’s already in the hands of testers who don’t mind working with prerelease software. A public beta won’t happen in the couple of weeks we have left in 2018, but it is coming soon – as soon as it’s done. And of course 10.3 will be a free upgrade for Reason 10 users.

When it ships, Reason 10.3 will give you performance on par with other DAWs. That is, your performance will depend on your CPU and which plug-ins you’re using, but Reason will be more or less the same as other hosts beyond that.

So this isn’t really exciting stuff, but it will make your life easier. We’ll let you know how it comes along and try to test the final version.

Official announcement:

Update on Reason and VST performance

For more on Reason and VST support, see their support section:

Propellerhead Software Rack Extensions, ReFills and VSTs


Cherry Audio Voltage Modular: a full synth platform, open to developers

Hey, hardware modular – the computer is back. Cherry Audio’s Voltage Modular is another software modular platform. Its angle: be better for users — and now, easier and more open to developers, with a new free tool.

Voltage Modular was shown at the beginning of the year, but its official release came in September – and now is when it’s really hitting its stride. Cherry Audio’s take certainly isn’t alone; see also, in particular, Softube Modular, the open source VCV Rack, and Reason’s Rack Extensions. Each of these supports live patching of audio and control signal, hardware-style interfaces, and has rich third-party support for modules with a store for add-ons. But they’re all also finding their own particular take on the category. That means now is suddenly a really nice time for people interested in modular on computers, whether for the computer’s flexibility, as a supplement to hardware modular, or even just because physical modular is bulky and/or out of budget.

So, what’s special about Voltage Modular?

Easy patching. Audio and control signals can be freely mixed, and there’s even a six-way pop-up multi on every jack, so each jack has tons of routing options. (This is a computer, after all.)

Each jack can pop up to reveal a multi.

It’s polyphonic. This one’s huge – you get true polyphony via patch cables and poly-equipped modules. Again, you know, like a computer.

It’s open to development. There’s now a free Module Designer app (commercial licenses available), and it’s impressively easy to code for. You write DSP in Java, and Cherry Audio say they’ve made it easy to port existing code. The app also looks like it reduces a lot of friction in this regard.

There’s an online store for modules – and already some strong early contenders. You can buy modules, bundles, and presets right inside the app. The mighty PSP Audioware, as well as Vult (who make some of my favorite VCV stuff) are already available in the store.

The store includes both free and paid add-ons – modules and presets. And right now, a hundred bucks gets you started with a bunch of stuff right out of the gate.

Voltage Modular is a VST/AU/AAX plug-in and runs standalone. And it supports 64-bit double-precision math with zero-latency module processes – but, impressively in our tests, isn’t so hard on your CPU as some of its rivals.

Right now, Voltage Modular Core + Electro Drums are on sale for just US$99.

Real knobs and patch cords are fun, but … let’s be honest, this is a hell of a lot of fun, too.

For developers

So what about that development side, if that interests you? Well, Apple-style, there’s a 70/30 split in developers’ favor. And it looks really easy to develop on their platform:

Java may be something of a bad word to developers these days, but I talked to Cherry Audio about why they chose it, and it definitely makes some sense here. Apart from being a reasonably friendly language, and having unparalleled support (particularly on the Internet connectivity side), Java solves some of the pitfalls that might make a modular environment full of third-party code unstable. You don’t have to worry about memory management, for one. I can also imagine some wackier, creative applications using Java libraries. (Want to code a MetaSynth-style image-to-sound module, and even pull those images from online APIs? Java makes it easy.)

Just don’t think of “Java” as in legacy Java applications. Here, DSP code runs on a Hotspot virtual machine, so your DSP is actually running as machine language by the time it’s in an end user patch. It seems Cherry have also thought through GUI: the UI is coded natively in C++, while you can create custom graphics like oscilloscopes (again, using just Java on your side). This is similar to the models chosen by VCV and Propellerhead for their own environments, and it suggests a direction for plug-ins that involves far less extra work and greater portability. It’s no stretch to imagine experienced developers porting for multiple modular platforms reasonably easily. Vult of course is already in that category … and their stuff is so good I might almost buy it twice.

Or to put that in fewer words: the VM can match or even best native environments, while saving developers time and trouble.

Cherry also tell us that iOS, Linux, and Android could theoretically be supported in the future using their architecture.

Of course, the big question here is installed user base and whether it’ll justify effort by developers, but at least by reducing friction and work and getting things rolling fairly aggressively, Cherry Audio have a shot at bypassing the chicken-and-egg dangers of trying to launch your own module store. Plus, while this may sound counterintuitive, I actually think that having multiple players in the market may call more attention to the idea of computers as modular tools. And since porting between platforms isn’t so hard (in comparison to VST and AU plug-in architectures), some interested developers may jump on board.

Well, that and there’s the simple matter that in music, us synth nerds love to toy around with this stuff both as end users and as developers. It’s fun and stuff. On that note:

Modulars gone soft

Stay tuned; I’ve got this for testing and will let you know how it goes.

https://cherryaudio.com/voltage-modular

https://cherryaudio.com/voltage-module-designer


Inside Cypher2, and what could be a more expressive future for synths

For all the great sounds they can make, software synths eventually fit a repetitive mold: lots of knobs onscreen, simplistic keyboard controls when you actually play. ROLI’s Cypher2 could change that. Lead developer Angus chats with us about why.

Angus Hewlett has been in the plug-in synth game a while, having founded his own FXpansion, maker of various wonderful software instruments and drums. That London company is now part of another London company, fast-paced ROLI, and thus has a unique charge to make instruments that can exploit the additional control potential of ROLI’s controllers. The old MIDI model – note on, note off, and wheels and aftertouch that impact all notes at once – gives way to something that maps more of the synth’s sounds to the gestures you make with your hands.

So let’s nerd out with Angus a bit about what they’ve done with Cypher2, the new instrument. Background:

A soft synth that’s made to be played with futuristic, expressive control

Peter: Okay, Cypher2 is sounding terrific! Who made the demos and so on?

Angus: Demos – Rafael Szaban, Heen-Wah Wai, Rory Dow. Sound Design – Rory Dow, Mayur Maha, Lawrence King & Rafael Szaban

Can you tell us a little bit about what architecture lies under the hood here?

Sure – think of it as a multi-oscillator subtractive synth. Three oscillators with audio-rate intermodulation (FM, S&H, waveshape modulation and ring mod), each switchable between Saw and Sin cores. Then you’ve got two waveshapers (each with a selection of analogue circuit models and tone controls, and a couple of digital wavefolders), and two filters, each with a choice of five different analogue filter circuit models – two variations on the diode ladder type, OTA ladder, state variable, Sallen-Key – and a digital comb filter. Finally, you’ve got a polyphonic, twin stereo output amp stage which gives you a lot of control over how the signal hits the effects chain – for example, you can send just the attack of every note to the “A” chain and the sustain/release phase to the “B” chain, all manner of possibilities there.

Controlling all of that, you’ve got our most powerful TransMod yet. 16 assignable modulation slots, each with over a hundred possible sources to choose from, everything from basics like Velocity and LFO through to function processors, step sequencers, paraphonic mod sources and other exotics. Then there’s eight fixed-function mod slots to support the five dimensions of MPE control and the three performance macros. So 24 TransMods in total, three times as many as v1.
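
Outside the interview for a moment: the oscillators-into-waveshaper flow Angus describes can be pictured with a deliberately tiny toy – one sine oscillator frequency-modulating another at audio rate, then a tanh waveshaping stage. This is an editorial illustration, not ROLI’s engine; the frequencies, FM depth, and drive are arbitrary.

```cpp
#include <cmath>
#include <cstdio>

int main() {
    const float PI = 3.14159265f;
    const float sr = 48000.0f;
    const float carrierHz = 220.0f, modHz = 110.0f;
    const float fmDepthHz = 180.0f;    // how far the modulator pushes the carrier
    const float drive = 3.0f;          // waveshaper drive

    float carrierPhase = 0.0f, modPhase = 0.0f;
    for (int n = 0; n < 8; ++n) {      // just a few samples, to show the flow
        float mod  = std::sin(2.0f * PI * modPhase);        // modulator oscillator
        float inst = carrierHz + fmDepthHz * mod;           // FM'd carrier frequency
        carrierPhase += inst / sr;
        modPhase     += modHz / sr;
        float osc    = std::sin(2.0f * PI * carrierPhase);  // carrier oscillator
        float shaped = std::tanh(drive * osc);              // waveshaping stage
        std::printf("%f\n", shaped);
    }
}
```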

Okay, so Cypher2 is built around MPE, or MIDI Polyphonic Expression. For those readers just joining us, this is a development of the existing MIDI specification that standardizes additional control around polyphonic inputs – that is, instead of adding expression to the whole sound all at once, you can get control under each finger, which makes way more sense and is more fun to play. What does it mean to build a synth around MPE control? How did you think about that in designing it?

It’s all about giving the sound designers maximum possibility to create expressive sound, and to manage how their sound behaves across the instrument’s range. When you’re patching for a conventional synth, you really only need to think about pitch and velocity: does the sound play nicely across the keyboard? With 5D MPE sounds, sound designers start having to think more like a software engineer or a game world designer – there are so many possibilities for how the player might interact with the sound, and they’ve got to have the tools to make it sound musical and believable across the whole range.

What this translates to in the specific case of Cypher2 is adapting our TransMod system (which is, at its heart, a sophisticated modulation matrix) to make it easy for sound designers to map the various MPE control inputs, via dynamically controllable transfer function curves, on to any and every parameter on the synth.
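
Again as an editorial aside: that kind of mapping – a normalized per-note control dimension sent through an adjustable transfer curve onto a parameter range – can be sketched in a few lines. The curve exponents and cutoff range below are arbitrary illustrations, not TransMod’s actual math.

```cpp
#include <cmath>
#include <cstdio>

// Map a normalized per-note expression value (0..1) through a curve onto a parameter range.
float mapExpression(float value01, float curve, float outMin, float outMax) {
    float shaped = std::pow(value01, curve);   // curve < 1: fast early response; curve > 1: slow early response
    return outMin + shaped * (outMax - outMin);
}

int main() {
    const float pressures[] = {0.1f, 0.5f, 0.9f};
    for (float p : pressures) {
        float gentle = mapExpression(p, 0.5f, 200.0f, 8000.0f);   // cutoff in Hz
        float steep  = mapExpression(p, 2.0f, 200.0f, 8000.0f);
        std::printf("pressure %.1f -> %.0f Hz (gentle curve) / %.0f Hz (steep curve)\n",
                    p, gentle, steep);
    }
}
```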

How does this relate to your past line of instruments?

Clearly, Cypher2 is a successor to the original Cypher which was one of the DCAM Synth Squad synths; it inherits many of the same functional upgrades that Strobe 2 gained over its predecessor a couple of years ago – the extended TransMod system, the effects engine, the Retina-friendly, scalable, skinnable GUI – but goes further, and builds on a lot of user and sound-designer feedback we had from Strobe2. So the modulation system is friendlier, the effects engine is more powerful, and it’s got a brand new and much more powerful step-sequencer and arpeggiator. In terms of its relationship to the original Cypher – the overall layout is similar, but the oscillator section has been upgraded with the sine cores and additional FM paths; the shaper section gains wavefolders and tone controls; the filters have six circuits to chose from, up from two in the original, so there’s a much wider range of tones available there; the envelopes give you more choice of curve responses; the LFOs each have a sub oscillator and quadrature outputs; and obviously there’s MPE as described above.

Of course, ROLI hope that folks will use this with their hardware, naturally. But since part of the beauty is that this is open on MPE, any interesting applications working with some other MPE hardware; have you tried it out on non-ROLI stuff (or with testers, etc.)?

Yes, we’ve tried it (with Linnstrument, mainly), and yes, it very much works – although with one caveat. Namely, MPE, as with MIDI, is a protocol which specifies how devices should talk to one another – but it doesn’t specify, at a higher level, what the interaction between the musician and their sound should feel like.

That’s a problem that I actually first encountered during the development of BFD2 in the mid-2000s: “MIDI Velocity 0-127” is adequate to specify the interaction between a basic keyboard and a sound module, and some of the more sophisticated stage controller boards (Kurzweil, etc.) have had velocity curves at least since the 90s. But as you increase the realism and resolution of the sounds – and BFD2 was the first time we really did so in software to the extent that it became a problem – it becomes apparent that MIDI doesn’t specify how velocity should map on to dB, or foot-pounds-per-second force equivalent, or any real-world units.

That’s tolerable for a keyboard, where a discerning user can set one range for the whole instrument, but when you’re dealing with a V-Drums kit with, potentially, ten or twelve pads, of different types, to set up, and little in the way of a standard curve to aim for, the process becomes cumbersome and off-putting for the end-user. What does “Velocity 72” actually mean from Manufacturer A’s snare drum controller, at a sensitivity setting B, via drum brain C triggering sample D?

Essentially, you run into something of an Uncanny Valley effect (a term from the world of movies / games where, as computer generated graphics moved from obviously artificial 8-bit pixel art to today’s motion-captured, super-sampled cinematic epics, paradoxically audiences would in some cases be less satisfied with the result). So it’s certainly a necessary step to get expressive hardware and software talking to one another – and MPE accomplishes that very nicely indeed – but it’s not sufficient to guarantee that a patch will result in a satisfactory, believable playing experience OOTB.

Some sound-synth-controller-player combinations will be fine, others may not quite live up to expectations, but right now I think it’s natural to expect that it may be a bit hit-and-miss. Feedback on this is something I’d like to actively encourage; we have a great dialogue with the other hardware vendors and are keen to achieve a high standard of interoperation, but it’s a learning process for all involved.
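
To make that point concrete (our example, not Angus’s): the same MIDI velocity value lands at very different levels depending on which curve the receiving instrument happens to choose, because the spec doesn’t define the mapping. Both curves below are common but arbitrary choices.

```cpp
#include <cmath>
#include <cstdio>

int main() {
    const int velocity = 72;                 // the "Velocity 72" from the example above
    float v = velocity / 127.0f;

    // Curve A: linear velocity-to-gain, expressed in dB.
    float linearDb  = 20.0f * std::log10(v);
    // Curve B: squared ("harder") velocity curve.
    float squaredDb = 20.0f * std::log10(v * v);

    std::printf("velocity %d: %.1f dB on a linear curve, %.1f dB on a squared curve\n",
                velocity, linearDb, squaredDb);
}
```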

Thanks, Angus! I’ll be playing with Cypher2 and seeing what I can do with it – but fascinating to hear this take on synths and control mapping. More food for thought.

https://fxpansion.com/products/cypher2/

http://roli.com/


These fanciful new apps weave virtual music worlds in VR and AR

Virtual reality and augmented reality promise new horizons for music. But one studio is delivering apps you’ll actually want to use – including collaborations with artists like Matmos, Safety Scissors, Robert Lippok, Patrick Russell, Ami Yamasaki, and Patrick Higgins (of Zs).

Consumer-accessible graphics hardware and computation – particularly on mobile – is finally keeping up with the demands of immersive 3D visuals and sound. That includes virtual reality (when you completely block out the outside world, most often using goggles), and mixed reality or augmented reality, which blends views of the world around you with 3D imagery. (Microsoft seems to prefer “mixed reality,” and still has you wearing some goggles; Apple likes “augmented reality,” even if that harkens back to some old apps that did weird things with markers and tags. I think I’ve got that right.)

And indeed, we’ve seen this stuff highlighted a lot recently, from game and PC companies talking VR (including via Steam), Facebook showing off Oculus (the Kickstarter-funded project it acquired), and this week Apple making augmented reality a major selling point of its coming iOS releases and developer tools.

But what is this stuff actually for?

That question is still open to creative interpretation. What New York City-based studio Planeta is doing is showing off something artful, not just a tech demo.

They’ve got two apps now, one for VR, and one for AR.

Fields is intended both for listening and creation. Sounds form spatial “sculptures,” which you can build up on your own by assembling loops or recording sounds, then mix with the environment around you – as viewed through the display of your iOS device. There’s a lovely, poetic trailer:

Unlike the sound toys we saw just after the release of the original iPhone App Store, though, they’re partnering with composers and musicians to make sure Fields gets used creatively. It’s a bit like turning it into a (mobile) venue. So in addition to Matmos, you get creations by the likes of a Ryuichi Sakamoto collaborator and Robert Lippok (of Raster Media, née Raster-Noton).

But if you think you have something to say, too, and you aren’t one of those artists, you can also share your own creations as videos, constructed from original sounds and motion captured with your device’s camera and mic.

The developers of Fields are also partnering with the Guggenheim to showcase the app. And they’re helping Berlin’s Monom space, which is powered by the 4DSOUND spatial audio system, to deliver sounds that otherwise would have to get squashed into a bland stereo mix. The ability to appreciate spatial works outside of limited installation venues may help listeners get deeper with the music, and take the experience home.

The results can be totally crazy. Here’s one example:

Pitchfork go into some detail as to how this app came about:

Fields Wants to Be The Augmented Reality App for Experimental Music Fans and Creators Alike

More on the app, including a download, on its site:

http://fields.planeta.cc/

And then there’s Drops – a “rhythm garden.”

We’ve seen some clumsy attempts at VR for music before. Generally, they involve taking an interface that already works perfectly well in hardware controllers or onscreen with a mouse, and “reimagining” it in a way that … makes it slightly stupid to use.

It seems this is far better. I’ve yet to give this a try myself – you need Oculus Rift or HTC Vive hardware – but at the very least, the concept is right. The instrument begins as a kind of 3D physics game involving percussion, with elaborate floating clockwork worlds, and builds a kind of surreal ambient music around those Escher-Magritte fantasies. So the music emerges from the interface, instead of bending an existing musical paradigm to awkward VR gimmicks.

And it’s just five bucks, meaning if you’ve bought the hardware, I guess you’ll just go get it!

And it’s really, as it should be, about composition and architecture. Designer Dan Brewster tells the Oculus Blog about inspiration found in Japan:

One space in particular, created by Tadao Ando for Benesse House and consisting of an enclosed circle of water beneath a circle of open sky, felt perfectly suited to VR and inspired the environment of Drops.

VR Visionaries: Planeta

Brewster and team paired with experimental composers – Patrick Russell and Patrick Higgins – to construct a world that is musically composed. I always recoil a bit when people separate technology from music, or engineering from other dimensions of tech projects. But here, we get at what it is they’re really missing – form and composition. You wouldn’t take the engineering out of a building – that’d hurt your head a lot when it collapses on you – but at the same time, you wouldn’t judge a building based on engineering alone. And maybe that’s what’s needed in the VR/AR field.

Clot magazine goes into some more detail about where Drops and this studio fit into the bigger picture, including talking to composer Robert Lippok. (Robert also, unprompted by me, name drops our own collaboration on 4DSOUND.)

Robert based this piece, he says, on an experiment he did with students. (He’s done a series of workshops and the like looking at music as an isolated element, and connecting it to architecture and memory.)

“We were talking about imagining sound. Sounds from memories, sounds from everyday life, and unheard sounds. Later on we started to create sonic events just with words, which we translated into some tracks. ‘Drawing from Memory’ is a sonic interpretation of one of those sound / word pieces. FIELDS now makes it possible to unfold the individual parts of this composition and at the same time frees it from its one-directional existence as a track on a CD. I should do this with all of my pieces. I see a snowstorm of possibilities.”

Check out that whole article, as it’s also a great read:

Launch: Planeta, addressing the future of interface-sound composition

Find the apps:

http://fields.planeta.cc
http://drops.garden

And let us know if you have any questions or comments for the developers, or on this topic in general – or if you’ve got a creation of your own using these technologies.
