Eye see: visual live-programming vvvv comes together online

Visual development environment vvvv is at it again, with a worldwide meetup of leading artists – and a ground-up new release, too.

The post Eye see: visual live-programming vvvv comes together online appeared first on CDM Create Digital Music.

Numerical Audio releases MM-1 Mute Master programmable mixer for iPad


Numerical Audio has announced availability of the MM-1 Mute Master app for iOS, a unique AUv3 effect plugin designed to create automated or generative arrangements by automatically mixing multiple tracks inside a DAW. MM-1 allows you to create modular style arrangements where individual tracks are brought in and out based on a set number of […]

The post Numerical Audio releases MM-1 Mute Master programmable mixer for iPad appeared first on rekkerd.org.

Asteroids advanced sequencer for Ableton Live FREE for limited time


Isotonik Studios has announced that Mark Towers’ Asteroids is available as a free download for a limited time. The Max for Live generative sequencer can be used on its own or integrated with Ableton Live-supported controllers. Designed and created by Ableton Certified Trainer Mark Towers, the device takes its inspiration from the hours […]

The post Asteroids advanced sequencer for Ableton Live FREE for limited time appeared first on rekkerd.org.

Save 60% on Sugar Bytes Obscurium synthesizer, on sale for $39.99 USD


Audio Plugin Deals has launched a sale on the Obscurium software synthesizer by Sugar Bytes, offering a 60% discount on the instrument that features a dazzling array of organic and lively sounds, delivering spherical pads, bubbly arpeggios and deadly percussion attacks. OBSCURIUM is a productive synthesis tool with VST Hosting & Generative Pitch engine. It […]

The post Save 60% on Sugar Bytes Obscurium synthesizer, on sale for $39.99 USD appeared first on rekkerd.org.

Amazon’s AWS DeepComposer is peak not-not-knowing-what-AI-is-for

AI can be cool. AI can be strange. AI can be promising, or frightening. Here’s AI being totally uncool and not frightening at all – bundled with a crappy MIDI keyboard, for … some … reason.

Okay, so TL;DR – Amazon published some kinda me-too algorithms for music generation that are what we’ve seen for years from Google, Sony, Microsoft, and hundreds of data scientists, bundled a crap MIDI keyboard for $99, and it’s the future! AI! I mean, it definitely doesn’t just sound like a 90s General MIDI keyboard with some bad MIDI patterns. “The machine has the power of literally all of music composition ever. Now anyone can make musiER:Jfds;kjsfj l; jks

Oops, sorry, I might have briefly started banging my head against my computer keyboard. I’m back.

This is worth talking about because machine learning does have potential – and this neither represents that potential nor accurately represents what machine learning even is.

Game changer.

If at this point you’re unsure what AI is, how you should feel about it, or even if you should care – don’t worry, you’re seriously not alone. “AI” is now largely shorthand for “machine learning.” And that, in turn, now most often refers to a very specific set of techniques currently in vogue that can analyze data and generate predictions by deriving patterns from that data, rather than by using rules. That’s a big deal in music, because traditionally both computer models and even paper models of theory have relied on rules more than on probability. You can think of AI in music as related to a dice roll – a very, very well-informed, data-driven, weighted dice roll – and less like a theory manual or a robotic composer or whatever people have in mind.
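
To make that dice roll less abstract, here’s a deliberately tiny sketch – nothing to do with Amazon’s actual models, and the transition weights are made up for illustration – of what data-derived, weighted prediction looks like at its absolute simplest: count which notes tend to follow which in some training material, then roll weighted dice.

```typescript
// Hypothetical "weighted dice roll" melody generator.
// The transition counts below are invented for illustration; a real system
// would derive them from a corpus of training data.
const transitions: Record<string, Record<string, number>> = {
  C: { D: 5, E: 3, G: 2 },
  D: { E: 4, C: 3, F: 1 },
  E: { F: 4, D: 2, C: 2 },
  F: { G: 5, E: 2 },
  G: { C: 6, A: 2 },
  A: { G: 3, F: 2 },
};

// Pick the next note in proportion to how often it followed the current one.
function nextNote(current: string): string {
  const options = transitions[current] ?? { C: 1 };
  const total = Object.values(options).reduce((a, b) => a + b, 0);
  let roll = Math.random() * total; // the weighted dice roll
  for (const [note, weight] of Object.entries(options)) {
    roll -= weight;
    if (roll <= 0) return note;
  }
  return Object.keys(options)[0];
}

// Generate an eight-note phrase starting on C.
let note = "C";
const phrase = [note];
for (let i = 0; i < 7; i++) {
  note = nextNote(note);
  phrase.push(note);
}
console.log(phrase.join(" "));
```

That’s it – no rules of counterpoint, no theory manual, just probabilities learned from what came before.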

Wait a minute – that doesn’t sound like AI at all. Ah, yes. About that.

So, what I’ve just described counts as AI to data scientists, even though it isn’t really related very much to AI in science fiction and popular understanding. The problem is, clarifying that distinction is hard, whereas exploiting that misunderstanding is lucrative. Misrepresenting it makes the tech sound more advanced than arguably it really is, which could be useful if you’re in the business of selling that tech. Ruh-roh.

With that in mind, what Amazon just did is either very dangerous or – weirdly, actually, very useful, because it’s such total, obvious bulls*** that it hopefully makes clear to even laypeople that what they claim they’re doing isn’t what they’re demonstrating. So we get post-curtain-reveal Oz – here, in the form of Amazon AI chief Dr. Matt Wood, pulling off a bad clone of Steve Jobs (even black-and-denim, of course).

Dr. Matt Wood does really have a doctorate in bioinformatics, says LinkedIn. He knows his stuff. That makes this even more maddening.

Let’s imagine his original research, which was predicting protein structures. You know what most of us who aren’t microbiologists wouldn’t do? Stand in front of a packed auditorium and pretend to understand protein structures. And we certainly wouldn’t go on to claim that predicting protein structures meant we could create life – and also that we’re God now.

But that is essentially what this is, with music – and it is exceedingly weird, from the moment Amazon’s VP of AI is introduced by… I want to say a voiceover by a cowboy?

Summary of his talk: AI can navigate moon rovers and fix teeth. So therefore, it should replace composers – right? (I can do long division in my head. Ergo, next I will try time travel.) We need a product, so give us a hundred bucks, and we’ll give you a developer kit that has a MIDI keyboard and that’s the future of music. We’ll also claim this is an industry first, because we bundled a MIDI keyboard.

At 7 minutes, 57 seconds, Dr. Wood murders Beethoven’s ghost, followed at 8:30 by a sort-of-bad machine learning example augmented with GarageBand visuals and some floating particles that I guess are the neural net “thinking”?

Then you get Jonathan Coulton (why, JoCo, why?) attempting to sing over something that sounds like a stuck-MIDI-note Band-in-a-Box that just crashed.

Even by AI tech demo standards, it’s this:

Deeper question: I’m not totally certain what we in music have done to earn the expectation from the rest of society that not only is what we do already not worth paying for, but that everyone should be able to do it without expending any effort. I don’t have this expectation of neuroscience or basketball, for instance.

But this isn’t even about that. This doesn’t even hold up to student AI examples from three years ago.

It’s “the world’s first” because they give you a MIDI keyboard. But great news – we can beat them. The AWS DeepComposer isn’t shipping yet, so you can actually be the world’s first right now – just grab a USB cable, a MIDI keyboard, connect to one of a half-dozen tools that do the same thing, and you’re done. I’ll give you an extra five minutes to map the MIDI keys.

Or just skip the AI, plug in a MIDI keyboard, and let your cat walk over it.

Translating the specs then:

  1. A s***ty MIDI keyboard with some buttons on it, and no “AI.”
  2. Some machine learning software, with pre-trained generative models for “rock, pop, jazz, and classical.” (aka, and saying this as a white person with a musicology background, “white, white, black-but-white people version, really old white.”)
  3. “Share your creations by publishing your tracks to SoundCloud in just a few clicks from the AWS DeepComposer console.”*

Technically *1 has been available in some form since the mid-80s and *3 is true of any music software connected to the Internet, but … *2, AI! (Please, please say I’m wrong and there’s custom silicon in there for training. Something. Anything to make this make any sense at all.)

I would love to hear I’m wrong and there’s some specialized machine learning silicon embedded in the keyboard but… uh, guessing that’s a no.

Watch the trainwreck now, soon to join the annals of “terrible ideas in tech” history with Microsoft Bob and Google Glass:

https://aws.amazon.com/deepcomposer/

By the way, don’t forget that AWS is being actively targeted right now by the music community with a boycott. Maybe they were hoping for a Springtime for Hitler-style turn-around, like if this is bad enough, we’d love them again? Dunno.

Anyway, if you do want to try this “AI” stuff out – and it can really be interesting – here is a far more comprehensive and musically interesting set of tools from rival Google:

https://magenta.tensorflow.org
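
If you want a sense of what that looks like in practice, a minimal browser sketch with Magenta’s @magenta/music library goes roughly like this – treat the checkpoint URL and API details as approximate and check the Magenta docs, since they do shift:

```typescript
import * as mm from "@magenta/music";

// A short seed melody as a NoteSequence (pitches are MIDI note numbers).
const seed = {
  notes: [
    { pitch: 60, startTime: 0.0, endTime: 0.5 },
    { pitch: 62, startTime: 0.5, endTime: 1.0 },
    { pitch: 64, startTime: 1.0, endTime: 1.5 },
    { pitch: 65, startTime: 1.5, endTime: 2.0 },
  ],
  totalTime: 2.0,
};

async function continueMelody() {
  // Checkpoint URL as of this writing; verify against Magenta's documentation.
  const rnn = new mm.MusicRNN(
    "https://storage.googleapis.com/magenta-js/checkpoints/music_rnn/basic_rnn"
  );
  await rnn.initialize();

  // The RNN works on quantized sequences (here, 4 steps per quarter note).
  const quantized = mm.sequences.quantizeNoteSequence(seed, 4);

  // Ask for 32 more steps; temperature controls how adventurous the "dice" are.
  const continuation = await rnn.continueSequence(quantized, 32, 1.1);
  console.log(continuation.notes);
}

continueMelody();
```

No hundred-dollar keyboard required.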

Now back to our regularly scheduled programming of anything but this.

AI: I am the button.

The post Amazon’s AWS DeepComposer is peak not-not-knowing-what-AI-is-for appeared first on CDM Create Digital Music.

Devious Machines Texture multi-fx plugin on sale for $59 USD


Plugin Boutique has launched a sale on the Texture effect plugin by Devious Machines, offering a 40% discount for a limited time. Texture comes with over 340 sampled, granular and generative sound sources to enhance, shape and transform your sounds. Part FX plug-in, part-synth; Texture synthesises new layers which track the dynamics of your sound. […]

The post Devious Machines Texture multi-fx plugin on sale for $59 USD appeared first on rekkerd.org.

#I-Am-the-world: Open call for musicians to take part in the world’s biggest collaboration in A minor


Mubert has reached out to its creative community and musicians all over the globe to invite them on board for a grand generative music collaboration. Up until November 15th, everyone interested in making a contribution to an everlasting, seamless music stream in A minor is encouraged to send one sample by following a simple instruction. If […]

The post #I-Am-the-world: Open call for musicians to take part in the world’s biggest collaboration in A minor appeared first on rekkerd.org.

In Session Audio releases Riff Generation: Outside In Edition for Kontakt


In Session Audio has announced the release of the Riff Generation: Outside In Edition, a Kontakt Player library that creates song parts by combining acoustic, electric and synthetic sounds with effects. Riff Generation: Outside In Edition features all new sampled material. Based around a set of musical parameters that you control, Riff Generation: Outside In […]

The post In Session Audio releases Riff Generation: Outside In Edition for Kontakt appeared first on rekkerd.org.

How to patch 3D visuals in browser from Ableton Live, more with cables.gl

Now, even your browser can produce elaborate, production-grade eye candy using just some Ableton Live MIDI clock. The question of how to generate visuals to go with music starts to get more and more interesting answers.

And really, why not? In that moment of inspiration, how many of us see elaborate, fantastic imagery as we listen to (or dream about) music? It’s just that past generative solutions were based on limited rules, producing overly predictable results. (That’s the infamous “screensaver” complaint.) But quietly, even non-gaming machines have been adding powerful 3D visualization – and browsers now have access to hardware acceleration through a uniform interface.

cables.gl remains in invite-only beta, though if you go request an invite (assuming this article doesn’t overwhelm them), you can find your way in. And for now, it’s also totally free, making this a great way to play around. (Get famous, get paid, buy licenses for this stuff – done.)

MIDI clock can run straight into the browser, so you can sync visuals easily with Ableton Live. (Ableton Link is overkill for that application, given that visuals run at framerate.) That will work with other software, hardware, modular, whatever you have, too.
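
If you’re curious what’s happening under the hood, the raw mechanics are small. Here’s a generic sketch using the standard Web MIDI API – not cables.gl’s own code – that listens for the 0xF8 clock bytes (24 per quarter note) and derives a tempo you could use to drive visuals:

```typescript
// Minimal Web MIDI clock listener (generic browser code, not cables.gl itself).
// Requires a browser with Web MIDI support (e.g. Chrome).
const CLOCK = 0xf8; // sent 24 times per quarter note
const START = 0xfa;
const STOP = 0xfc;

let lastTick = 0;
let bpm = 0;

navigator.requestMIDIAccess().then((access) => {
  for (const input of access.inputs.values()) {
    input.onmidimessage = (event) => {
      const data = event.data;
      if (!data) return;
      const status = data[0];
      if (status === CLOCK) {
        const now = performance.now();
        if (lastTick > 0) {
          // One tick = 1/24 of a beat; convert the tick interval to BPM.
          const msPerBeat = (now - lastTick) * 24;
          bpm = 60000 / msPerBeat;
        }
        lastTick = now;
      } else if (status === START) {
        console.log("transport started");
      } else if (status === STOP) {
        console.log("transport stopped", bpm.toFixed(1), "BPM");
      }
    };
  }
});
```

On the Live side, it’s just a matter of enabling clock (Sync) output for whatever MIDI port the browser sees in Preferences.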

For a MIDI/DJ example, here’s a tutorial for TRAKTOR PRO. Obviously this can be adapted to other tools, as well. (Maybe some day Pioneer will even decide to put MIDI clock on the CDJ. One can dream.)

They’ve been doing some beautiful work in tutorials, too, including WeaveArray and ColorArray, since I last checked in.

Check out the full project and request an invite:
https://cables.gl/

By the way, note those cool visuals at the top. That’s not video – that’s cables.gl actually running in your browser right now.

Previously, our introduction:

The post How to patch 3D visuals in browser from Ableton Live, more with cables.gl appeared first on CDM Create Digital Music.

An injury left Olafur Arnalds unable to play, so he turned to machines

Following nerve damage, the Icelandic composer/producer/musician Olafur Arnalds was unable to play the piano. With his ‘Ghost Pianos’, he gets that ability back, through intelligent custom software and mechanical pianos.

It’s moving to hear him tell the story (to the CNN viral video series) – with, naturally, the obligatory shots of Icelandic mountains and close-up images of mechanical pianos working. No complaints:

This frames accessibility in terms any of us can understand. Our bodies are fragile, and indeed piano history is replete with musicians who lost the original use of their two hands and had to adapt. Here, an accident caused him to lose left hand dexterity, so he needed a way to connect one hand to more parts.

And in the end, as so often is the case with accessibility stories and music technology, he created something that was more than what he had before.

With all the focus on machine learning, a lot of generative algorithmic music continues to work more traditionally. That appears to be the case here – the software analyzes incoming streams and follows rules and music theory to accompany the work. (As I learn more about machine learning, though, I suspect the combination of these newer techniques with the older ones may slowly yield even sharper algorithms – and challenge us to hone our own compositional focus and thinking.)

I’ll try to reach out to the developers, but meanwhile it’s fun squinting at screenshots, as you can tell a lot. There’s a polyphonic step sequencer / pattern sequencer of sorts in there, with some variable chance. You can also tell in the screenshots that the pattern lengths are set to be irregular, so that you get these lovely polymetric echoes of what Olafur is playing.
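
To make that concrete – and to be clear, this is a guess at the general shape, not the actual Ghost Pianos software – a chance-based polymetric sequencer fits in a few lines: give each voice its own pattern length and its own probability, tick them from a shared clock, and the patterns drift against each other:

```typescript
// Sketch of a chance-based polymetric sequencer (illustrative only).
interface Track {
  pitches: number[]; // MIDI notes for this voice
  length: number;    // steps before the pattern wraps (deliberately irregular)
  chance: number;    // probability a step actually fires
}

const tracks: Track[] = [
  { pitches: [60, 64, 67], length: 5, chance: 0.8 },
  { pitches: [72, 71, 69, 67], length: 7, chance: 0.5 },
  { pitches: [48, 55], length: 11, chance: 0.35 },
];

function tick(step: number): number[] {
  const notes: number[] = [];
  for (const t of tracks) {
    const i = step % t.length; // each track wraps at its own length: polymeter
    if (Math.random() < t.chance) {
      notes.push(t.pitches[i % t.pitches.length]);
    }
  }
  return notes; // notes to send to the player piano / synth on this step
}

// Run a few steps to watch the drifting, chance-filtered patterns.
for (let step = 0; step < 16; step++) {
  console.log(step, tick(step));
}
```

The mismatched lengths (5, 7, 11) are what keep the echoes from lining up too soon – which looks like exactly the trick in those screenshots.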

Of course, what makes this most interesting is that Olafur responds to that machine – human echoes of the ‘ghost.’ I’m struck by how even a simple input can do this for you – like even a basic delay and feedback. We humans are extraordinarily sensitive to context and feedback.

The music itself is quite simple – familiar minimalist elements. If that isn’t your thing, you should definitely keep watching so you get to his trash punk stage. But it won’t surprise you at all that this is a guy who plays Clapping Music backstage – there’s some serious Reich influence.

You can hear the ‘ghost’ elements in the recent release ‘ekki hugsa’, which comes with some lovely joyful dancing in the music video:

re:member debuted the software:

There is a history here of adapting composition to injury. (That’s not even including Robert Schumann, who evidently destroyed his own hands in an attempt to increase dexterity.)

Paul Wittgenstein, who had his entire right arm amputated following a World War I injury, commissioned a number of works for just the left hand. (There’s a surprisingly extensive article on Wikipedia, which definitely retrieves more than I had lying around inside my brain.) Ravel’s Piano Concerto for the Left Hand is probably the best-known result, and there’s even a 1937 recording by Wittgenstein himself. It’s an ominous, brooding performance, made as Europe was plunging itself into violence a second time. But it’s notable in that the single hand is made even more virtuosic – it’s a new kind of piano idiom, made for this unique scenario.

I love Arnalds’ work, but listening to the Ravel – a composer known for being whimsical, crowd-pleasing even – I do lament a bit of what’s been lost in the push for cheery, comfortable concert music. It seems to me that some of that darkness and edge could come back to the music, and the circumstances of that piece’s composition ought to remind us how necessary those emotions are to our society.

I don’t say that to diss Mr. Arnalds. On the contrary, I would love to hear some of his punk side return. And his quite beautiful music aside, I also hope that these ideas about harnessing machines in concert music may also find new, punk, even discomforting conceptions among some readers here.

Here’s a more intimate performance, including a day without Internet:

And lastly, more detail on the software:

Meanwhile, whatever kind of music you make, you should endeavor to have a promo site that is complete, like this – also, sheet music!

olafurarnalds.com

Previously:

The KellyCaster reveals what accessibility means for instruments

The post An injury left Olafur Arnalds unable to play, so he turned to machines appeared first on CDM Create Digital Music.