Turn your iPad or iPhone into a scriptable MIDI tool with Mozaic

Its creator describes it as a “workshop in a plug-in.” Mozaic lets you turn your iOS device into a MIDI filter/controller that does whatever you want – a toolkit for making your own MIDI gadgets.

The beauty of this, of course, is that you can have whatever tools you want without having to wait for someone else to make them for you. Developer Bram Bos has been an innovator in music software for years – he created one of the first software drum machines, along with some ground-breaking (and sometimes weird) plug-ins, and is now one of the more accomplished iOS developers. So I can vouch for the quality of this one. It might move my iPad Pro back into must-have territory.

Bram writes to CDM that he thought this kind of DIY plug-in could let you make what you need:

“I noticed there is a lot of demand for MIDI filters and plugins (such as Rozeta) in the mobile music world,” he says, “especially with the rising popularity of DAW-less, modular plugin-based jamming and music making. Much of this demand is highly specific and difficult to satisfy with general purpose apps. So I decided to make it easier for people to create such plugins themselves.”

You get ready-to-use LFOs, graphic interface layouts, musical scales, random generators, and “a very easy-to-learn, easy-to-read script language.” And yeah, don’t be afraid, first-time programmers, Bram says: “I’ve designed the language from the ground up to be as accessible and readable as possible.”
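
To make that concrete: Mozaic scripts look different (see the programming manual for the real syntax), but the kind of logic a few lines of such a script typically encapsulate – snap incoming notes to a scale, humanize velocities with the built-in random generators – can be sketched in plain C++ like this. The scale choice and function names here are my own illustration, not Mozaic’s API.

```cpp
// Illustrative only: the sort of note filter a Mozaic script might implement,
// written here as plain C++ rather than Mozaic's own script language.
#include <algorithm>
#include <array>
#include <cstdio>
#include <cstdlib>
#include <random>

// Snap a MIDI note to the nearest pitch in C natural minor
// (within the note's own octave, as a simplification).
int snapToScale(int note) {
    static const std::array<int, 7> scale = {0, 2, 3, 5, 7, 8, 10};
    int octave = note / 12;
    int best = note, bestDist = 128;
    for (int degree : scale) {
        int candidate = octave * 12 + degree;
        int dist = std::abs(candidate - note);
        if (dist < bestDist) { bestDist = dist; best = candidate; }
    }
    return best;
}

int main() {
    std::mt19937 rng{std::random_device{}()};
    std::uniform_int_distribution<int> humanize(-10, 10);

    // Pretend these arrived as note-on events from the host.
    const int notes[] = {60, 61, 66, 70};
    for (int note : notes) {
        int out = snapToScale(note);
        int velocity = std::clamp(100 + humanize(rng), 1, 127);
        std::printf("in %3d -> out %3d  velocity %3d\n", note, out, velocity);
    }
}
```

In Mozaic itself, that whole program collapses into a handful of script lines, because the scale tables, random generators, and MIDI plumbing are already provided for you.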

To get you started, you’ll find example scripts and modular-style filters, and a big preset collection – with more coming, in response to your requests, Bram tells us. There’s a programming manual, meant both to get beginners going in as friendly a way as possible, and to give more advanced scripters an in-depth guide. And you get plenty of real-world examples.

There are some things you can do with your iOS gadget that you can’t do with most MIDI gadgets, too – like map your tilt sensors to MIDI.

This is an AUv3-compatible plug-in so you can use it in hosts like AUM, ApeMatrix, Cubasis, Nanostudio 2, Audiobus 3, and the like.

Full description/specs:

Mozaic runs inside your favorite AU MIDI host, and gives you practical building blocks such as LFOs, pre-fab GUI layouts, musical scales, AUv3 support (with AU Parameters, transport events, tempo syncing, etc.), random generators and a super-simple yet powerful script language. Mozaic even offers quick access to your device’s Tilt Sensors for expressive interaction concepts!

The Mozaic Script language is designed from the ground up to be the easiest and most flexible MIDI language on iOS. A language by creatives, for creatives. You’ll only need to write a few lines of script to achieve impressive things – or to create that uber-specific thing that was missing from your MIDI setup.

Check out the Programming Manual on Ruismaker.com to learn about the script language and to get inspiration for awesome scripts of your own.

Mozaic comes with a sizable collection of tutorials and pre-made scripts which you can use out of the box, or which can be a starting point for your own plugin adventures.

Features in a nutshell:

– Easy to learn Mozaic Script language: easy to learn, easy to read
– Sample-accurate-everything: the tightest MIDI timing possible
– Built-in script editor with code-completion, syntax hints, etc.
– 5 immediately usable GUI layouts, with knobs, sliders, pads, etc.
– In-depth, helpful programming manual available on Ruismaker.com
– Easy access to LFOs, scales, MIDI I/O, AU parameters, timers
– AUv3; so you’ll get multi-instance, state-saving, tempo sync and resource efficiency out of the box

Mozaic opens up the world of creative MIDI plugins to anyone willing to put in a few hours and a hot beverage or two.

Practical notes:
– Mozaic requires a plugin host with support for AUv3 MIDI plugins (AUM, ApeMatrix, Cubasis, Auria, Audiobus 3, etc.)
– The standalone mode of Mozaic lets you edit, test and export projects, but for MIDI connections you need to run it inside an AUv3 MIDI host
– MIDI is not sound; Mozaic on its own does not make noise… so bring your own synths, drum machines and other instruments!
– AUv3 MIDI requires iOS11 or higher

With some other MIDI controllers looking long in the tooth, and Liine’s Lemur also getting up in years, I wonder if this might not be the foundation for a universal controller/utility for music. So, yeah, I’d love to see some more touch-savvy widgets, OSC, and even Android support if this catches on. Now go forth, readers, and help it catch on!

Mozaic on the iTunes App Store

http://ruismaker.com/


Automated techno: Eternal Flow generates dance music for you

Techno, without all those pesky human producers? Petr Serkin’s Eternal Flow is a generative radio station – and even a portable device – able to make endless techno and deep house variations automatically.

You can run a simple version of Eternal Flow right in your browser:

https://eternal-flow.ru/

Recorded sessions are available on a SoundCloud account, as well:

But maybe the most interesting way to run this is in a self-contained portable device. It’s like a never-ending iPod of … well, kind of generic-sounding techno and deep house, depending on mode. Here’s a look at how it works; there’s no voiceover, but you can turn on subtitles for additional explanation:

There are real-world applications here: apart from interesting live performance scenarios, think workout dance music that follows you as you run, for example.

I talked to Moscow-based artist Petr about how this works. (And yeah, he has his own deep house-tinged record label, too.)

“I used to make deep and techno for a long period of time,” he tells us, “so I have some production patterns.” Basically, take those existing patterns, add some randomization, and instead of linear playback, you get material generated over a longer duration with additional variation.

There was more work involved, too. While the first version used one-shot snippets, “later I coded my own synth engine,” Petr tells us. That means the synthesized sounds save on sample space in the mobile version.

It’s important to note this isn’t machine learning – it’s good, old-fashioned generative music. And in fact this is something you could apply to your own work: instead of just keeping loads and loads of fixed patterns for a live set, you can use randomization and other rules to create more variation on the fly, freeing you up to play other parts live or make your recorded music less repetitive.
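
As a toy illustration of that idea – not Petr’s actual engine, just the general “seed pattern plus rules and randomization” approach he describes – here’s a short C++ sketch that keeps a fixed 16-step house pattern and rolls dice every bar to decide which hats and ghost kicks get added or dropped:

```cpp
// Toy generative drum pattern: a fixed seed pattern plus per-bar random mutations.
// Purely illustrative; Eternal Flow's engine is Petr Serkin's own code.
#include <array>
#include <cstdio>
#include <random>

int main() {
    // 16-step seed patterns: four-on-the-floor kick, claps on beats 2 and 4.
    const std::array<int, 16> kick = {1,0,0,0, 1,0,0,0, 1,0,0,0, 1,0,0,0};
    const std::array<int, 16> clap = {0,0,0,0, 1,0,0,0, 0,0,0,0, 1,0,0,0};

    std::mt19937 rng{std::random_device{}()};
    std::bernoulli_distribution offbeatHat(0.9); // the offbeat hat is usually there
    std::bernoulli_distribution fillHat(0.2);    // occasional extra 16th-note hats
    std::bernoulli_distribution ghostKick(0.1);  // rare syncopated kick

    for (int bar = 0; bar < 4; ++bar) {
        std::printf("bar %d:\n", bar + 1);
        for (int step = 0; step < 16; ++step) {
            bool hat   = (step % 4 == 2) ? offbeatHat(rng) : fillHat(rng);
            bool extra = (step % 4 == 3) && ghostKick(rng);
            std::printf("  %2d %s%s%s\n", step + 1,
                        (kick[step] || extra) ? "KICK " : "",
                        clap[step] ? "CLAP " : "",
                        hat ? "HAT" : "");
        }
    }
}
```

Every run prints a slightly different four-bar sequence, which is exactly the point: the rules stay fixed, the details never quite repeat.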

And this also points to a simple fact: machine learning doesn’t always generate the best results. We’ve had generative music algorithms for many years, which simply produce results based on simple rules. Laurie Spiegel’s ground-breaking Music Mouse, considered by many to be the first-ever software synth, worked on this concept. So, too, did Brian Eno and Peter Chilvers’ “Bloom,” which applied the same notion to ambient generative software and became the first major generative/never-ending iPhone music app.

By contrast, the death metal radio station I talked about last week works well partly because its results sound so raunchy and chaotic. But it doesn’t necessarily suit dance music as well. Just because neural network-based machine learning algorithms are in vogue right now doesn’t mean they will generate convincing musical results.

I suspect that generative music will freely mix these approaches, particularly as developers become more familiar with them.

But from the perspective of a human composer, this is an interesting exercise not necessarily because it puts you out of a job, but because it helps you experiment with thinking about the structures and rules of your own musical ideas.

And, hey, if you’re tired of having to stay in the studio or DJ booth and not get to dance, this could solve that, too.

More:

http://eternal-flow.ru/

Now ‘AI’ takes on writing death metal, country music hits, more

Thanks to new media artist and researcher Helena Nikonole for the tip!


Now ‘AI’ takes on writing death metal, country music hits, more

Machine learning is synthesizing death metal. It might make your death metal radio DJ nervous – but it could also mean music software works with timbre and time in new ways. That news – plus some comical abuse of neural networks for writing lyrics in genres like country – next.

Okay, first: whether this makes you urgently want to hear machine learning death metal or drives you into a rage, either way you’ll want the death metal stream. And yes, it’s a totally live stream – you know, generative style. Tune in, bot out:

Now, it’s important to say that the whole point of this is you need data sets to train on. That is, machines aren’t composing music so much as creatively regurgitating existing samples based on fairly clever predictive mathematical models. In the case of the death metal example, this is SampleRNN – a recurrent neural network that uses sample material, repurposed from its original intended application of working with speech. (Check the original project, though it’s been forked for the results here.)

This is a big, big point – if this sounds a lot like existing music, it’s partly because it is literally sampling that content. The particular death metal example is nice in that the creators have published an academic article. And they’re open about saying they intend “overfitting” – that is, little bits of the samples are actually playing back. Machines aren’t learning to generate this content from scratch; they’re piecing together those samples in interesting ways.

That’s relevant on two levels. One, because once you understand that’s what’s happening, you’ll recognize that machines aren’t magically replacing humans. (This works well for death metal partly because, to non-connoisseurs of the genre, the way angry guitar riffs and undecipherable shouting are plugged together already sounds quite random.)

But two, the fact that sample content is being re-stitched in time like this suggests a very different kind of future sampler. Instead of playing the same 3-second audio on repeat or loop, for instance, you might pour hours or days of singing bowls into your sampler and then adjust dials that recreate those sounds in more organic ways. It might make for new instruments and production software.

Here’s what the creators say:

Thus, we want the output to overfit short timescale patterns (timbres, instruments, singers, percussion) and underfit long timescale patterns (rhythms, riffs, sections, transitions, compositions) so that it sounds like a recording of the original musicians playing new musical compositions in their style.

Sure enough, you can go check their code:

https://github.com/ZVK/sampleRNNICLR2017

Or read the full article:

Generating Albums with SampleRNN to Imitate Metal, Rock, and Punk Bands

The reason I’m belaboring this is simple. Big corporations like Spotify might use this sort of research to develop, well, crappy, mediocre channels of background music that make vaguely coherent workout soundtracks, or faux Brian Eno, or something that sounds like Erik Satie got caught in an opium den and re-composed his piano repertoire in a half daze. And that would, well, sort of suck.

Alternatively, though, you could make something like a sampler or DAW more human and less conventionally predictable. You know, instead of applying a sample slice to a pad and then having the same snippet repeat every eighth note. (Guilty as charged, your honor.)

It should also be understood that, perversely, this may all be raising the value of music rather than lowering it. Given the amount of recorded music currently available, and given that it can already often be licensed or played for mere cents, the machine learning re-generation of these same genres actually requires more machine computation and more human intervention – because of the amount of human work required to even select datasets and set parameters and choose results.

DADABOTS, for their part, have made an entire channel of this stuff. The funny thing is, even when they’re training on The Beatles, what you get sounds like … well, some of the sort of experimental sound you might expect on your low-power college radio station. You know, in a good way – weird, digital drones, of exactly the sort we enjoy. I think there’s a layperson impression that these processes will magically improve. That may misunderstand the nature of the mathematics involved – on the contrary, it may be that these sorts of predictive models always produce these sorts of aesthetic results. (The same team uses Markov chains to generate track names for their Bandcamp label. Markov chains work as well as they did a century ago; they didn’t just start working better.)

I enjoy listening to The Beatles as though an alien civilization has had to digitally reconstruct their oeuvre from some fallout-shrouded, nuclear-singed remains of the number-one hits box set post apocalypse. (“Help! I need somebody! Help! The human race is dead!” You know, like that.)

As the training material moves to black metal and death metal, their Bandcamp label progresses in surreal coherence:

This album gets especially interesting, as you get weird rhythmic patterns in the samples. And there’s nothing saying this couldn’t in turn inspire new human efforts. (I once met Stewart Copeland, who talked about how surreal it was hearing human drummers learn to play the rhythms, unplugged, that he could only achieve with The Police using delay pedals.)

I’m really digging this one:

So, these SampleRNN processes mostly generate angry, angular experimental sounds – in a good way. That’s certainly true now, and it could stay true in the future.

What’s up in other genres?

SONGULARITY is making a pop album. They’re focusing on lyrics (and a very funny faux generated Coachella poster). In this case, though, the work is constrained to text – far easier to produce convincingly than sound. Even a Markov Chain can give you interesting or amusing results; with machine learning applied character-by-character to text, what you get is a hilarious sort of futuristic Mad Libs. (It’s also clear humans are cherry-picking the best results, so these are really humans working with the algorithms much as you might use chance operations in music or poetry.)
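
For comparison, here’s what the old-school end of that spectrum looks like: a minimal word-level Markov chain in C++, which picks each next word at random from the words that followed the current one in the training text. Feed it a pile of lyrics and it will happily spit out the same sort of semi-coherent lines, no neural network required. (The tiny corpus here is a placeholder of my own; Botnik’s actual tools work differently, with humans in the loop.)

```cpp
// Minimal word-level Markov chain text generator.
#include <iostream>
#include <map>
#include <random>
#include <sstream>
#include <string>
#include <vector>

int main() {
    // Placeholder corpus; in practice you'd feed in a large pile of lyrics.
    const std::string corpus =
        "my heart is a door and my door is a road "
        "my road is a river and my river is a fire "
        "take my hand and take my door and take my heart";

    // Transition table: word -> list of words observed to follow it.
    std::map<std::string, std::vector<std::string>> next;
    std::istringstream in(corpus);
    std::string prev, word;
    in >> prev;
    while (in >> word) {
        next[prev].push_back(word);
        prev = word;
    }

    // Walk the chain to generate a new line.
    std::mt19937 rng{std::random_device{}()};
    std::string current = "my";
    for (int i = 0; i < 12 && next.count(current); ++i) {
        std::cout << current << ' ';
        const auto& options = next[current];
        current = options[std::uniform_int_distribution<size_t>(0, options.size() - 1)(rng)];
    }
    std::cout << current << '\n';
}
```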

Whether or not this says anything about the future of machines, though, the dadaist results make for genuinely funny parody.

And that gives us results like You Can’t Take My Door:

Barbed whiskey good and whiskey straight.

These projects work because lyrics are already slightly surreal and nonsensical. Machines chart directly into the uncanny valley instead of away from it, creating the element of surprise and exaggerated un-realness that is fundamental to why we laugh at a lot of humor in the first place.

The same approach produced the Morrissey-styled “Bored With This Desire To Get Ripped” – thanks to the ingenious idea of training the dataset not just on Morrissey lyrics, but also on Amazon customer reviews of the P90X home workout DVD system. (Like I said – human genius wins, every time.)

Or there’s Dylan mixed with negative Yelp reviews from Manhattan:

And maybe in this limited sense, the machines are telling us something about how we learn. Part of the poetic flow is about drawing on all our wetware neural connections between everything we’ve heard before – as in the half-awake state of creative vibrations. That is, we follow our own predictive logic without doing the usual censoring that keeps our language rational. Thinking this way, it’s not that we would use machine learning to replace the lyricist. Rather, just as with chance operations in the past, we can use this surreal nonsense to free ourselves from the constraints that normal behavior requires.

We shouldn’t underestimate, though, human intervention in using these lyrics. The neural nets are good at stringing together short bits of words, but the normal act of composition – deciding the larger-scale structure, choosing funnier bits over weaker ones, recognizing patterns – remains human.

Recurrent neural networks probably won’t be playing Coachella any time soon, but if you need a band name, they’re your go-to. More funny text mangling from the Botnik crew.

My guess is, once the hype dies down, these particular approaches will wind up joining the pantheon of drunkard’s walks and Markov chains and fractals and other pseudo-random or generative algorithmic techniques. I sincerely hope that we don’t wait for that to happen, but use the hype to seize the opportunity to better educate ourselves about the math underneath (or collaborate with mathematicians), and see these more hardware-intensive processes in the context of some of these older ideas.

If you want to know why there’s so much hype and popular interest, though, the human brain may itself hold the answer. We are all of us hard-wired to delight in patterns, which means arguably there’s nothing more human than being endlessly entertained by what these algorithms produce.

But you know, I’m a marathon runner in my sorry way.


Exploring machine learning for music, live: Gamma_LAB AI

AI in music is as big a buzzword as in other fields. So now’s the time to put it to the test – to reconnect to history, human practice, and context, and see what holds up. That’s the goal of the Gamma_LAB AI in St. Petersburg next month. An open call is running now.

Machine learning has trended so fast that there are disconnects between genres and specializations. Mathematicians or coders may get going on ideas without checking whether they work with musicians or composers or musicologists – and the other way around.

I’m excited to join as one of the hosts of Gamma_LAB AI partly because it brings together all those possible disciplines, puts international participants in an intensive laboratory, and then shares the results in one of the summer’s biggest festivals for new electronic music and media. We’ll make some of those connections because those people will finally be together in one room, and eventually on one live stage. That investigation can be critical, skeptical, and can diverge from clichéd techniques – the environment is wide open and packed with skills from an array of disciplines.

Natalia Fuchs, co-producer of GAMMA Festival, founder of ARTYPICAL and media art historian, is curating Gamma_LAB AI. The lab will run in May in St. Petersburg, with an open call due this Monday, April 8 (hurry!), and then there will be a full AI stage as part of Gamma Festival.

Image: Helena Nikonole.

Invited participants will delve into three genres – baroque, jazz, and techno. The idea is not just a bunch of mangled generative compositions, but a broad look at how machine learning could analyze deep international archives of material in these fields, and how the work might be used creatively as an instrument or improviser. We expect participants with backgrounds in musicianship and composition as well as in coding, mathematics, and engineering – and people in between – plus researchers and theorists.

To guide that work, we’re setting up both collaboration and confrontation between historical approaches and today’s bleeding-edge computational work. Media artist Helena Nikonole has joined as the Lab’s conceptual artist. She will bring her interests in connecting AI with new aesthetics and media, as she has exhibited from ZKM to CTM to Garage Museum of Contemporary Art. Dr. Konstantin Yakovlev joins as one of Russia’s leading mathematicians and computer scientists working at the forefront of AI, machine learning, and smart robotics – meaning we’re guaranteed some of the top technical talent. (Warning: crash course likely.)

Russia has an extraordinarily rich culture of artistic and engineering exploration, in AI as elsewhere. Some of that work was seen recently at Berlin’s CTM Festival exhibition. Helena, for her part, has created work that, among other things, applies machine learning to unraveling the structure of birdsong (with a bird-human translator perhaps on the horizon), and has hacked into Internet-connected CCTV cameras and voice synthesis to meld machine learning-generated sacred texts with … well, some guys trapped in an elevator. See below:

Bird Language

deus X mchn

I’m humbled to get to work with them and in one of the world’s great musical cities, because I hope we also get to see how these new models relate to older ones, and where gaps lie in music theory and computation. (We’re including some musicians/composers with serious background in these fields, and some rich archives that haven’t been approached like this ever before.)

I came from a musicology background, so I see in so-called “AI” a chance to take musicology and theory closer to the music, not further away. Google recently presented a Bach “doodle” – more on that soon, in fact – with the goal of replicating some details of Bach’s composition. To those of us with a music theory background, some of the challenges of doing that are familiar: analyzing music is different from composing it, even for the human mind. To me, part of why it’s important to attempt working in this field is that there’s a lot to learn from mistakes and failures.

It’s not so much that you’re making a robo-Bach – any more than your baroque theory class will turn all the students into honorary members of the extended Bach family. (Send your CV to your local Lutheran church!) It’s a chance to find new possibilities in this history we might not have seen before. And it lets us test (and break) our ideas about how music works with larger sets of data – say, all of Bach’s cantatas at once, or a set of jazz transcriptions, or a library full of nothing but different kick drums, if you like. This isn’t so much about testing “AI,” whatever you want that to mean – it’s a way to push our human understanding to its limits.

Oh yes, and we’ll definitely be pushing our own human limits – in a fun way, I’m sure.

A small group of participants will be involved in the heart of St. Petersburg from May 11-22, with time to investigate and collaborate, plus inputs (including at the massive Planetarium No. 1).

But where this gets really interesting – and do expect to follow along here on CDM – is that we will wind up in July with an AI mainstage at the globally celebrated Gamma Festival. Artist participants will create their own AI-inspired audiovisual performances and improvisations, acoustic and electronic hybrids, and new live scenarios. The finalists will be invited to the festival and fully covered in terms of expenses.

So just as I’ve gotten to do with partners at CTM Festival (and recently with southeast Asia’s Nusasonic), we’re making the ultimate laboratory experiment in front of a live audience. Research, make, rave, repeat.

The open call deadline is fast approaching if you think you might want to participate.

Facebook event
http://gammafestival.ru/english

To apply:
Participation at GAMMA_LAB AI is free for the selected candidates. Send a letter of intent and portfolio to aiworkshop@artypical.com by end of day April 8, 2019. Participants have to bring personal computers of sufficient capacity to work on their projects during the Laboratory. Transportation and living expenses during the Laboratory are paid by the participants themselves. The organizers provide visa support, as well as the travel of the best Lab participants to GAMMA festival in July.


dadamachines doppler is a new platform for open music hardware

The new doppler board promises to meld the power of FPGA brains with microcontrollers and the accessibility of environments like Arduino. And the founder is so confident this could lead to new stuff that he’s making a “label” to help share your ideas.

doppler is a small, €39 development board packing both an ARM microcontroller and an FPGA. It could be the basis of music controllers, effects, synths – anything you can make run on those chips.

If this appeals to you, we’ve even got a CDM-exclusive giveaway for inventors with ideas. (Now, end users, this may all go over your head but … rest assured the upshot for you should be, down the road, more cool toys to play with. Tinkerers, developers, and people with a dangerous appetite for building things – read on.)

But first – why include an FPGA on a development board for music?

The pitch for FPGA

The FPGA is a powerful but rarefied circuit. The idea is irresistible: imagine a circuit that could be anything you want it to be, rewired as easily as software. That’s kind of what an FPGA is – it’s a big bundle of programmable logic blocks and memory blocks. You get all of that computational power at comparatively low cost, with the flexibility to adapt to a number of tasks. The upshot of this is, you get something that performs like dedicated, custom-designed hardware, but that can be configured on the fly – and with terrific real-time performance.

This works well for music and audio applications, because FPGAs do work in “close to the metal” high-performance contexts. And we’ve even seen them used in some music gear. (Teenage Engineering was an early FPGA adopter, with the OP-1.) The challenge has always been configuring this hardware for use, which could easily scare off even some hardware developers.

For more on why open FPGA development is cool, here’s a (nerdy) slide deck: https://fpga.dev/oshug.pdf

Now, all of what I’ve just said is a little hard to envision. Wouldn’t it be great if, instead of that abstract description, you could fire up the Arduino development environment, upload some cool audio code, and have it running on an FPGA?

doppler, on a breadboard connected to other stuff so it starts to get more musically useful. Future modules could also make this easier.

doppler: easier audio FPGA

doppler takes that FPGA power, and combines it with the ease of working with environments like Arduino. It’s a chewing gum-sized board with both a familiar ARM microcontroller and an FPGA. This board is bare-bones – you just get USB – but the development tools have been set up for you, and you can slap this on a breadboard and add your own additions (MIDI, audio I/O).

The project is led by Johannes Lohbihler, dadamachines founder, along with engineer and artist Sven Braun.

dadamachines also plan some other modules to make it easier to add other stuff us music folks might like. Want audio in and out? A mic preamp? MIDI connections? A display? Controls? Those could be breakout boards, and it seems Johannes and dadamachines are open to ideas for what you most want. (In the meantime, of course, you can lay out your own stuff, but these premade modules could save time when prototyping.)

Full specs of the tiny, core starter board:

– 120MHz ARM Cortex-M4F MCU, 512KB Flash (Microchip ATSAMD51G19A) with FPU
– FPGA: 5000 LUT, 1MBit RAM, 6 DSP cores, OSC, PLL (Lattice ICE40UP5K)
– Arduino IDE compatible
– Breadboard friendly (DIL48)
– Micro USB
– Power over USB or external via pin headers
– VCC 3.5V … 5.5V
– All GPIO Pins have 3.3V Logic Level
– 1 LED connected to SAMD51
– 4 x 4 LED Matrix (connected to FPGA)
– 2 User Buttons (connected to FPGA)
– AREF Solder Jumper
– I2C (need external pullup), SPI, QSPI Pins
– 2 DAC pins, 10 ADC pins
– Full open source toolchain
– SWD programming pin headers
– Double press reset to enter the bootloader
– UF2 Bootloader with Firmware upload via simple USB stick mode

See also the quickstart PDF.

I’ve focused on the FPGA powers here, because those are the new ones, but the microcontroller side brings compatibility with existing libraries that allow you to combine some very useful features.

So, for instance, there’s USB host capability, which allows connecting all sorts of input devices, USB MIDI gadgets, and gaming controllers. See:

https://github.com/gdsports/USB_Host_Library_SAMD

That frees up the FPGA to do audio only. Flip it around the other way, and you can use the microcontroller for audio, while the FPGA does … something else. The Teensy audio library will work on this chip, too – meaning a bunch of Adafruit instructional content will be useful here:

https://learn.adafruit.com/synthesizer-design-tool?view=all

https://github.com/adafruit/Audio/

doppler is fully open source hardware, with open firmware and code samples, so it’s designed to be easy to integrate into a finished product – even one you might sell commercially.

The software examples for now are mainly limited to configuring and using the board, so you’ll still need to bring your own code for doing something useful. But you can add the doppler as an Arduino library and access even the FPGA from inside the Arduino environment, which expands this to a far wider range of developers.

Look, ma, Arduino!

In a few steps, you can get up and running with the development environment, on any OS. You’ll be blinking lights and even using a 4×4 matrix of lights to show characters, just as easily as you would on an Arduino board – only you’re using an FPGA.
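
If you want a feel for what that first step looks like, it really is classic Arduino territory. Here’s a minimal blink sketch for the SAMD51 side – standard Arduino C++. Whether the doppler core maps its user LED to LED_BUILTIN is my assumption, so check the quickstart PDF for the actual pin; the 4×4 matrix hangs off the FPGA, so driving it presumably goes through the doppler library rather than plain digitalWrite calls.

```cpp
// Classic Arduino-style blink, as a first smoke test for the SAMD51 side of doppler.
// Assumption: the board's Arduino core maps its user LED to LED_BUILTIN;
// if not, substitute the pin number given in the doppler quickstart.
#include <Arduino.h>

void setup() {
  pinMode(LED_BUILTIN, OUTPUT);   // the single LED wired to the SAMD51
}

void loop() {
  digitalWrite(LED_BUILTIN, HIGH);
  delay(250);                     // on for a quarter second
  digitalWrite(LED_BUILTIN, LOW);
  delay(250);                     // off for a quarter second
}
```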

Getting to that stage is lunch break stuff if you’ve at least worked with Arduino before:

https://github.com/dadamachines/doppler

Dig into the firmware, and you can see, for instance, some I/O and a synth working. (This is in progress, it seems, but you get the idea.)

https://github.com/dadamachines/doppler-FPGA-firmware

And lest you think this is going to be something esoteric for experienced embedded hardware developers, part of the reason it’s so accessible is that Johannes is working with Sven Braun. Sven is among other things the developer of iOS apps zmors synth and modular – so you get something that’s friendly to app developers.

doppler in production…

A label for hardware, platform for working together

Johannes tells us there’s more to this than just tossing an open source board out into the world – dadamachines is also inviting collaborations. They’ve made doppler a kind of calling card for working together, as well as a starting point for building new hardware ideas, and are suggesting Berlin-based dadamachines as a “label” – a platform to develop and release those ideas as products.

There are already some cool, familiar faces playing with these boards – think Meng Qi, Tom Whitwell of Music Thing, and Ornament & Crime.

Johannes and dadamachines already have a proven hardware track record, having brought a product from Kickstarter funding to manufacturing with the automat. It’s an affordable device that makes it easy to connect physical, “robotic” outputs (like solenoids and motors). (New hardware, a software update and more are planned for that, too, by the way.) And of course, part of what you get in doing that kind of hardware is a lot of amassed experience.

We’ve seen fertile open platforms before – Arduino and Raspberry Pi have each created their own ecosystems of both hardware and education. But this suggests working even more closely – pooling space, time, manufacturing, distribution, and knowledge together.

This might be skipping a few steps – even relatively experienced developers may want to see what they can do with this dev board first. But it’s an interesting long-range goal that Johannes has in mind.

Want your own doppler; got ideas?

We have five doppler boards to give away to interested CDM readers.

Just tell dadamachines what you want to make, or connect, or use, and email that idea to:

cdm@dadamachines.com

dadamachines will pick five winners to get a free board sent to them. (Johannes says he wants to do this by lottery, but I’ve said if there are five really especially good ideas or submissions, he should… override the randomness.)

And stay tuned here, as I hope to bring you some more stuff with this soon.

For more:

https://forum.dadamachines.com/

https://dadamachines.com/product/doppler/


Flash Sale: Save 40% off Syntorial training software & synth plugin


Plugin Boutique has launched a sale on Syntorial, the video game-like training software by Audible Genius that teaches you how to program synth patches by ear. Syntorial includes lesson packs for popular synths such as Xfer Serum, LennarDigital Sylenth1 and Native Instruments Massive. With almost 200 lessons, combining video demonstrations with interactive challenges, you’ll get […]


TidalCycles, free live coding environment for music, turns 1.0

Live coding environments are free, run on the cheapest hardware as well as the latest laptops, and offer new ways of thinking about music and sound that are leading a global movement. And one of the leading tools of that movement just hit a big milestone.

This isn’t just about a nerdy way of making music. TidalCycles is free, and tribes of people form around using it. Just as important as how impressive the tool may be, the results are spectacular and varied.

There are some people who take on live coding as their primary instrument – some who haven’t had experience using even computers or electronic music production tools before, let alone whole coding environments. But I think they’re worth a look even if you don’t envision yourself projecting code onstage as you type live. TidalCycles in particular had its origins not in computer science, but in creator Alex McLean’s research into rhythm and cycle. It’s a way of experiencing a musical idea as much as it is a particular tool.

TidalCycles has been one of the more popular tools, because it’s really easy to learn and musical. The one downside is a slightly convoluted install process, since it’s built on SuperCollider, as opposed to tools that now run in a Web browser. On the other hand, the payoff for that added work is you’ll never outgrow TidalCycles itself – because you can move to SuperCollider’s wider range of tools if you choose.

New in version 1.0 is a whole bunch of architectural improvements that really make the environment feel mature. And there’s one major addition: controller input means you can play TidalCycles like an instrument, even without coding as you perform:
New functions
Updated innards
New ways of combining patterns
Input from live controllers
The ability to set tempo with patterns

Maybe just as important as the plumbing improvements, you also get expanded documentation and an all-new website.

Check out the full list of changes:

https://tidalcycles.org/index.php/Changes_in_Tidal_1.0.0

You’ll need to update some of your code as there’s been some renaming and so on.

But the ability to input OSC and MIDI is especially cool, not least because you can now “play” all the musical, rhythmic stuff TidalCycles does with patterns.

There’s enough musicality and sonic power in TidalCycles that it’s easy to imagine some people will take advantage of the live coding feedback as they create a patch, but play more in a conventional sense with controllers. I’ll be honest; I couldn’t quite wrap my head around typing code as the performance element in front of an audience. And that makes some sense; some people who aren’t comfortable playing actually find themselves more comfortable coding – and those people aren’t always programmers. Sometimes they’re non-programmers who find this an easier way to express themselves musically. Now, you can choose, or even combine the two approaches.

Also worth saying – TidalCycles has happened partly because of community contributions, but it’s also the work primarily of Alex himself. You can keep him doing this by “sending a coffee” – TidalCycles works on the old donationware model, even as the code itself is licensed free and open source. Do that here:

http://ko-fi.com/yaxulive#

While we’ve got your attention, let’s look at what you can actually do with TidalCycles. Here’s our friend Miri Kat with her new single out this week, the sounds developed in that environment. It’s an ethereal, organic trip (the single is also on Bandcamp):

We put out Miri’s album Pursuit last year – not really because it was made in a livecoding environment, so much as because I was in love with the music. And a lot of listeners responded the same way:

For an extended live set, here’s Alex himself playing in November in Tokyo:

And Alexandra Cardenas, one of the more active members of the TidalCycles scene, played what looked like a mind-blowing set in Bogota recently. On visuals is Olivia Jack, who created vibrant, eye-searing goodness in the live coding visual environment of her own invention, Hydra. (Hydra works in the browser, so you can try it right now.)

Unfortunately there are only clips – you had to be there – but here’s a taste of what we’re all missing out on:

See also the longer history of Tidal

It’ll be great to see where people go next. If you haven’t tried it yet, you can dive in now:

https://tidalcycles.org/

Image at top: Alex, performing as part of our workshop/party Encoded in Berlin in June.


Jlin, Holly Herndon, and ‘Spawn’ find beauty in AI’s flaws

Musicians don’t just endure technology when it breaks. They embrace the broken. So it’s fitting that Holly Herndon’s team have produced a demonic spawn of machine learning algorithms – and that the results are wonderful.

The new music video for the Holly Herndon + Jlin collaboration has been making the rounds online, so you may have seen it already:


But let’s talk about what’s going on here. Holly is continuing a long-running collaboration with producer Jlin, here joined by technologist Mat Dryhurst and coder Jules LaPlace. (The music video itself is directed by Daniel Costa Neves with software developer Leif Ryge, employing still more machine learning techniques to merge the two artists’ faces.)

Machine learning processes are being explored in different media in parallel – characters and text, images, and sound, voice, and music. But the results can be all over the place. And ultimately, there are humans as the last stage. We judge the results of the algorithms, project our own desires and fears on what they produce, and imagine anthropomorphic intents and characteristics.

Sometimes errors like over-fitting then take on a personality all their own – even as mathematically sophisticated results fail to inspire.

But that’s not to say these reactions aren’t just as real. And part of what may make the video “Godmother” compelling is not just the buzzword of AI, but the fact that it genuinely sounds different.

The software ‘Spawn,’ developed by Ryge working with the team, is a machine learning-powered encoder. Herndon and company have anthropomorphized that code in their description, but that itself is also fair – not least because the track is composed in such a way as to suggest a distinct vocalist.

I love Holly’s poetic description below, but I think it’s also important to be precise about what we’re hearing. That is, we can talk about the evocative qualities of an oboe, but we should definitely still call an oboe an oboe.

So in this case, I confirmed with Dryhurst what it was I was hearing. The analysis stage employs neural network style transfers – some links on that below, though LaPlace and the artists here did make their own special code brew. And then they merged that with a unique vocoder – the high-quality WORLD vocoder. That is, they feed a bunch of sounds into the encoder, and get some really wild results.

And all of that in turn makes heavy use of the unique qualities of Jlin’s voice, Holly’s own particular compositional approach and the arresting percussive take on these fragmented sounds, Mat’s technological sensibilities, LaPlace’s code, and a whole lot of time spent on parameters and training and adaptation…

Forget automation in this instance. All of this involves more human input and more combined human effort than any conventionally produced track would.

Is it worth it? Well, aesthetically, you could make comparisons to artists like Autechre, but then you could do that with anything with mangled sample content in it. And on a literal level, the result is the equivalent of a mangled sample. The results retain recognizable spectral components of the original samples, and they add a whole bunch of sonic artifacts which sound (correctly, really) ‘digital’ and computer-based to our ears.

But it’s also worth noting that what you hear is particular to this vocoder technique, and especially to audio texture synthesis and neural network-based style transfer of sound. It’s a commentary on 2018 machine learning not just conceptually, but literally: what you hear sounds the way it does because of the state of that tech.

And that’s always been the spirit of music. The peculiar sound and behavior of a Theremin says a lot about how radios and circuits respond to a human presence. Vocoders have ultimately proven culturally significant for their aesthetic peculiarities even if their original intention was encoding speech. We respond to broken circuits and broken code on an emotional and cultural level, just as we do acoustic instruments.

In a blog post that’s now a couple of years old – ancient history in machine learning terms, perhaps – Dmitry Ulyanov and Vadim Lebedev acknowledged that some of the techniques they used for “audio texture synthesis and style transfer” were intended for something else. And they implied that the results didn’t work – that they had “stylistic” interest more than functional value.

Dmitry even calls this a partial failure: “I see a slow but consistent interest increase in music/audio by the community, for sure amazing things are just yet to come. I bet in 2017 already we will find a way to make WaveNet practical but my attempts failed so far :)”

Spoiler – that hasn’t really happened in 2017 or 2018. But “failure” to be practical isn’t necessarily a failure. The rising interest has been partly in producing strange results – again, recalling that the vocoder, Theremin, FM synthesis, and many other techniques evolved largely because musicians thought the sounds were cool.

But this also suggests that musicians may uniquely be able to cut through the hype around so-called AI techniques. And that’s important, because these techniques are assigned mystical powers, Wizard of Oz-style.

Big corporations can only hype machine learning when it seems to be magical. But musicians can hype up machine learning even when it breaks – and knowing how and when it breaks is more important than ever. Here’s Holly’s official statement on the release:

For the past two years, we have been building an ensemble in Berlin.

One member is a nascent machine intelligence we have named Spawn. She is being raised by listening to and learning from her parents, and those people close to us who come through our home or participate at our performances.

Spawn can already do quite a few wonderful things. ‘Godmother’ was generated from her listening to the artworks of her godmother Jlin, and attempting to reimagine them in her mother’s voice.

This piece of music was generated from silence with no samples, edits, or overdubs, and trained with the guidance of Spawn’s godfather Jules LaPlace.

In nurturing collaboration with the enhanced capacities of Spawn, I am able to create music with my voice that far surpass the physical limitations of my body.

Going through this process has brought about interesting questions about the future of music. The advent of sampling raised many concerns about the ethical use of material created by others, but the era of machine legible culture accelerates and abstracts that conversation. Simply through witnessing music, Spawn is already pretty good at learning to recreate signature composition styles or vocal characters, and will only get better, sufficient that anyone collaborating with her might be able to mimic the work of, or communicate through the voice of, another.

Are we to recoil from these developments, and place limitations on the ability for non-human entities like Spawn to witness things that we want to protect? Is permission-less mimicry the logical end point of a data-driven new musical ecosystem surgically tailored to give people more of what they like, with less and less emphasis on the provenance, or identity, of an idea? Or is there a more beautiful, symbiotic, path of machine/human collaboration, owing to the legacies of pioneers like George Lewis, that view these developments as an opportunity to reconsider who we are, and dream up new ways of creating and organizing accordingly.

I find something hopeful about the roughness of this piece of music. Amidst a lot of misleading AI hype, it communicates something honest about the state of this technology; it is still a baby. It is important to be cautious that we are not raising a monster.

– Holly Herndon

Some interesting code:
https://github.com/DmitryUlyanov/neural-style-audio-tf

https://github.com/JeremyCCHsu/Python-Wrapper-for-World-Vocoder

Go hear the music:

http://smarturl.it/Godmother

Previously, from the hacklab program I direct, talks and a performance lab with CTM Festival:

What culture, ritual will be like in the age of AI, as imagined by a Hacklab

A look at AI’s strange and dystopian future for art, music, and society

I also wrote about machine learning:

Minds, machines, and centralization: AI and music


The guts of Tracktion are now open source for devs to make new stuff

Game developers have Unreal Engine and Unity Engine. Well, now it’s audio’s turn. Tracktion Engine is an open source engine based on the guts of a major DAW, but created as a building block developers can use for all sorts of new music and audio tools.

You can build new music apps not only for Windows, Mac, and Linux (including embedded platforms like Raspberry Pi), but for iOS and Android, too. And while developers might go off and create their own DAW, they might also build other creative tools for performance and production.

The tutorials section already includes examples for simple playback, independent manipulation of pitch and time (meaning you could conceivably turn this into your own DJ deck), and a step sequencer.

We’ve had an open source DAW for years – Ardour. But this is something different – it’s clear the developers have created this with the intention of producing a reusable engine for other things, rather than just dumping the whole codebase for an entire DAW.

Okay, my Unreal and Unity examples are a little optimistic – those are friendly to hobbyists and first-time game designers. Tracktion Engine definitely needs you to be a competent C++ programmer.

But the entire engine is delivered as a JUCE module, meaning you can drop it into an existing project. JUCE has rapidly become the go-to for reasonably painless C++ development of audio tools across plug-ins and operating systems and mobile devices. It’s huge that this is available in JUCE.

Even if you’re not a developer, you should still care about this news. It could be a sign that we’ll see more rapid development that allows music-loving developers to try out new ideas, both in software and in hardware with JUCE-powered software under the hood. And I think with this idea out there, if it doesn’t deliver, it may spur someone else to try the same notion.

I’ll be really interested to hear if developers find this is practical in use, but here’s what they’re promising developers will be able to use from their engine:

A wide range of supported platforms (Windows, macOS, Linux, Raspberry Pi, iOS and Android)
Tempo, key and time-signature curves
Fast audio file playback via memory mapping
Audio editing including time-stretching and pitch shifting
MIDI with quantisation, groove, MPE and pattern generation
Built-in and external plugin support for all the major formats
Parameter adjustments with automation curves or algorithmic modifiers
Modular plugin patching Racks
Recording with punch, overdub and loop modes along with comp editing
External control surface support
Fully customizable rendering of arrangements

The licensing is also stunningly generous. The code is under a GPLv3 license – meaning if you’re making a GPLv3 project (including artists doing that), you can freely use the open source license.

But even commercial licensing is wide open. Educational projects get forum support and have no revenue limit whatsoever. (I hope that’s a cue to academic institutions to open up some of their licensing, too.)

Personal projects are free, too, with revenue up to US$50k. (Not to burst anyone’s bubble, but many small developers are below that threshold.)

For $35/mo, with a minimum 12 month commitment, “indie” developers can make up to $200k. Enterprise licensing requires getting in touch, and then offers premium support and the ability to remove branding. They promise paid licenses by next month.

Check out their code and the Tracktion Engine page:

https://www.tracktion.com/develop/tracktion-engine

https://github.com/Tracktion/tracktion_engine/

I think a lot of people will be excited about this, enough so that … well, it’s been a long time. Let’s Ballmer this.


Cycling ’74 releases Max 8 incl. multi-channel MC & performance improvements

Cycling ’74 has announced the release of Max 8, a major upgrade to the visual programming software. Max 8 includes MC, allowing for objects and patch cords to contain multiple audio channels by simply typing mc. before the name of any MSP object. It also comes with speed improvements of up to 2x on Mac […]