Wavehole Approach to Granular Synthesis Using Xenakis Screens

For those interested, see here.

See here for previous posts featuring Xenakis and granular synthesis.

What, or rather who, is Xenakis? Via Wikipedia: Iannis Xenakis – “Xenakis pioneered the use of mathematical models in music such as applications of set theory, stochastic processes and game theory and was also an important influence on the development of electronic and computer music.”

Visualize pitch like John Coltrane with this mystical image

Some musicians see Islamic mysticism; some the metaphysics of Einstein. But whether spiritual, theoretical, or both, even one John Coltrane pitch wheel is full of musical inspiration.

One thing’s certain – if you want your approach to pitch to be as high-tech and experimental as your instruments, Coltrane’s sketchbook could easily keep you busy for a lifetime.

Unpacking the entirety of John Coltrane’s music-theoretical achievements could fill tomes – even this one picture has inspired a wide range of different interpretations. But let’s boil it down just to have a place to start. At its core, the Coltrane diagram is a circle of fifths – a way of representing the twelve tones of equal temperament in a continuous circle, commonly used in Western music theory (jazz, popular, and classical alike). And any jazz player has some basic grasp of this and uses it in everything from soloing to practicing scales and progressions.

What makes Coltrane’s version interesting is the additional layers of annotation – both for what’s immediately revealing, and what’s potentially mysterious.

Sax player and blogger Roel Hollander pulled together a friendly analysis of what’s going on here. And while he himself is quick to point out he’s not an expert Coltrane scholar, he’s done a nice job of compiling some different interpretations.

JOHN COLTRANE’S TONE CIRCLE

See also Corey Mwamba’s analysis, upon which a lot of that story draws.

Open Culture has commented a bit on the relations to metaphysics and the interpretations of various musicians, including, vitally, Yusef Lateef’s take:

John Coltrane Draws a Picture Illustrating the Mathematics of Music

Plus if you like this sort of thing, you owe it to yourself to find a copy of Yusef Lateef’s Repository of Scales and Melodic Patterns [Peter Spitzer blog review] – that’s related to what you (might) see here.

Take it with some grains of salt, since there doesn’t seem to be a clear story as to why Coltrane even drew this, but there are some compelling details in this picture. The two-ring arrangement gives you two whole tone scales – one on C, and one on B – in such a way that you get intervals of fourths and fifths if you move diagonally between them.

Scanned image of the mystical Coltrane tone doodle.

Corey Mwamba’s simplified diagram.

That’s already a useful way of visualizing relations of keys in a whole tone arrangement, which could have various applications for soloing or harmonies. Where this gets more esoteric is the circled bits, which highlight some particular chromaticism – connected further by a pentagram highlighting common tones.

Even reading that crudely, this can be a way of imagining diminished/double diminished melodic possibilities. Maybe the most suggestive take, though, is deriving North Indian-style modes from the circled pitches. Whether that was Coltrane’s intention or not, this isn’t a bad way of seeing those modal relationships.

You can also see some tritone substitutions and plenty of chromaticism and the all-interval tetrachord if you like. Really, what makes this fun is that like any such visualization, you can warp it to whatever you find useful – despite all the references to the nature of the universe, the essence of music is that you’re really free to make these decisions as the mood strikes you.

I’m not sure this will help you listen to Giant Steps or A Love Supreme with any new ears, but I am sure there are some ideas about music visualization or circular pitch layouts to try out. (Yeah, I might have to go sketch that on an iPad this week.)

(Can’t find a credit for this music video, but it’s an official one – more loosely interpretive and aesthetic than functional, released for Untitled Original 11383. Maybe someone knows more… UMG’s Verve imprint put out the previously unreleased Both Directions At Once: The Lost Album last year.)

How might you extend what’s essentially a (very pretty) theory doodle to connect Coltrane to General Relativity? Maybe it’s fairer to say that Coltrane’s approach to mentally freeing himself to find the inner meaning of the cosmos is connected, spiritually and creatively. Sax player and astrophysicist professor (nice combo) Stephon Alexander makes that cultural connection. I think it could be a template for imagining connections between music culture and physics, math, and cosmology.

Images (CC-BY-ND) Roel’s World / Roel Hollander

and get ready to get lost there:

https://roelhollander.eu/


Why is this Valentine’s song made by an AI app so awful?

Do you hate AI as a buzzword? Do you despise the millennial whoop? Do you cringe every time Valentine’s Day arrives? Well – get ready for all those things you hate in one place. But hang in there – there’s a moral to this story.

Now, really, the song is bad. Like laugh-out-loud bad. Here’s iOS app Amadeus Code “composing” a song for Valentine’s Day, which says love much in the way a half-melted milk chocolate heart does, but – well, I’ll let you listen, millennial pop cliches and all:

Fortunately this comes after yesterday’s quite stimulating ideas from a Google research team – proof that you might actually use machine learning for stuff you want, like improved groove quantization and rhythm humanization. In case you missed that:

Magenta Studio lets you use AI tools for inspiration in Ableton Live

Now, as a trained composer / musicologist, I do find this sort of exercise fascinating. And on reflection, I think the failure of this app tells us a lot – not just about machines, but about humans. Here’s what I mean.

Amadeus Code is an interesting idea – a “songwriting assistant” powered by machine learning, delivered as an app. And it seems machine learning could generate, for example, smarter auto accompaniment tools or harmonizers. Traditionally, those technologies have been driven by rigid heuristics that sound “off” to our ears, because they aren’t able to adequately follow harmonic changes in the way a human would. Machine learning could – well, theoretically, with the right dataset and interpretation – make those tools work more effectively. (I won’t re-hash an explanation of neural network machine learning, since I got into that in yesterday’s article on Magenta Studio.)

https://amadeuscode.com/

You might well find some usefulness from Amadeus, too.

This particular example does not sound useful, though. It sounds soulless and horrible.

Okay, so what happened here? Music theory at least cheers me up even when Valentine’s Day brings me down. Here’s what the developers sent CDM in a pre-packaged press release:

We wanted to create a song with a specific singer in mind, and for this demo, it was Taylor Swift. With that in mind, here are the parameters we set in the app.

Bpm set to slow to create a pop ballad
To give the verses a rhythmic feel, the note length settings were set to “short” and also since her vocals have great presence below C, the note range was also set from low~mid range.
For the chorus, to give contrast to the rhythmic verses, the note lengths were set longer and a wider note range was set to give a dynamic range overall.

After re-generating a few ideas in the app, the midi file was exported and handed to an arranger who made the track.

Wait – Taylor Swift is there just how, you say?

Taylor’s vocal range is somewhere in the range of C#3-G5. The key of the song created with Amadeus Code was raised a half step in order to accommodate this range making the song F3-D5.

From the exported midi, 90% of the topline was used. The rest of the 10% was edited by the human arranger/producer: The bass and harmony files are 100% from the AC midi files.

Now, first – these results are really impressive. I don’t think traditional melodic models – theoretical and mathematical in nature – are capable of generating anything like this. They’ll tend to fit melodic material into a continuous line, and as a result will come out fairly featureless.

No, what’s compelling here is not so much that this sounds like Taylor Swift, or that it sounds like a computer, as it sounds like one of those awful commercial music beds trying to be a faux Taylor Swift song. It’s gotten some of the repetition, some of the basic syncopation, and oh yeah, that awful overused millennial whoop. It sounds like a parody, perhaps because partly it is – the machine learning has repeated the most recognizable cliches from these melodic materials, strung together, and then that was further selected / arranged by humans who did the same. (If the machines had been left alone without as much human intervention, I suspect the results wouldn’t be as good.)

In fact, it picks up Swift’s tics – some of the funny syncopations and repetitions – but without stringing them together, like watching someone do a bad impression. (That’s still impressive, though, as it does represent one element of learning – if a crude one.)

To understand why this matters, we’re going to have to listen to a real Taylor Swift song. Let’s take this one, “Blank Space”:

Okay, first, the fact that the real Taylor Swift song has words is not a trivial detail. Adding words means adding prosody – so elements like intonation, tone, stress, and rhythm. To the extent those elements have resurfaced as musical elements in the machine learning-generated example, they’ve done so in a way that no longer is attached to meaning.

No amount of analysis, machine or human, can be generative of lyrical prosody for the simple reason that analysis alone doesn’t give you intention and play. A lyricist will make decisions based on past experience and on the desired effect of the song, and because there’s no real right or wrong to how to do that, they can play around with our expectations.

Part of the reason we should stop using AI as a term is that artificial intelligence implies decision making, and these kinds of models can’t make decisions. (I did say “AI” again because it fits into the headline. Or, uh, oops, I did it again. AI lyricists can’t yet hammer “oops” as an interjection or learn the playful setting of that line – again, sorry.)

Now, you can hate the Taylor Swift song if you like. But it’s catchy not because of a predictable set of pop music rules so much as its unpredictability and irregularity – the very things machine learning models of melodic space are trying to remove in order to create smooth interpolations. In fact, most of the melody of “Blank Space” is a repeated tonic note over the chord progression. Repetition and rhythm are also combined into repeated motives – something else these simple melodic models can’t generate, by design. (Well, you’ll hear basic repetition, but making a relationship between repeated motives again will require a human.)

It may sound like I’m dismissing computer analysis. I’m actually saying something more (maybe) radical – I’m saying part of the mistake here is assuming an analytical model will work as a generative model. Not just a machine model – any model.

This mistake is familiar, because almost everyone who has ever studied music theory has made the same mistake. (Theory teachers then have to listen to the results, which are often about as much fun as these AI results.)

Music theory analysis can lead you to a deeper understanding of how music works, and how the mechanical elements of music interrelate. But it’s tough to turn an analytical model into a generative model, because the “generating” process involves decisions based on intention. If the machine learning models sometimes sound like a first-year graduate composition student, that may be because, like that student, they’re steeped in the analysis but not in the experience of decision making. And that difference is important. The machine learning model won’t get better, because while it can keep learning, it can’t really make decisions. It can’t learn from what it’s learned, as you can.

Yes, yes, app developers – I can hear you aren’t sold yet.

For a sense of why this can go deep, let’s turn back to this same Taylor Swift song. The band Imagine Dragons picked it up and did a cover, and, well, the chord progression will sound more familiar than before.

As it happens, in a different live take I heard the lead singer comment (unironically) that he really loves Swift’s melodic writing.

But, oh yeah, even though pop music recycles elements like chord progressions and even groove (there’s the analytic part), the results take on singular personalities (there’s the human-generative side).

Which brings us to “Stand by Me,” the song built on that same harmonic structure. It dispenses with some of the tics of our current pop age – millennial whoops, I’m looking at you – and, at least as well as you can with the English language, hits some emotional meaning of the words in the way they’re set musically. It’s not a mathematical average of a bunch of tunes, either. It’s a reference to a particular song that meant something to its composer and singer, Ben E. King.

This is his voice, not just the emergent results of a model. It’s a singer recalling a spiritual that hit him with those same three words, which sets a particular psalm from the Bible. So yes, drum machines have no soul – at least until we give them one.

“Sure,” you say, “but couldn’t the machine learning eventually learn how to set the words ‘stand by me’ to music?” No, it can’t – because there are too many possibilities for exactly the same words in the same range in the same meter. Think about it: how many ways can you say these three words?

“Stand by me.”

Where do you put the emphasis, the pitch? There’s prosody. What melody do you use? Keep in mind just how different Taylor Swift and Ben E. King were, even with the same harmonic structure. “Stand,” the word, is repeated as a suspension – a dissonant note – above the tonic.

And even those observations still lie in the realm of analysis. The texture of this coming out of someone’s vocal cords, the nuances to their performance – that never happens the same way twice.

Analyzing this will not tell you how to write a song like this. But it will throw light on each decision, make you hear it that much more deeply – which is why we teach analysis, and why we don’t worry that it will rob music of its magic. It means you’ll really listen to this song and what it’s saying, listen to how mournful that song is.

And that’s what a love song really is:

If the sky that we look upon
Should tumble and fall
Or the mountain should crumble to the sea
I won’t cry, I won’t cry
No, I won’t shed a tear
Just as long as you stand
Stand by me

Stand by me.

Now that’s a love song.

So happy Valentine’s Day. And if you’re alone, well – make some music. People singing about heartbreak and longing have gotten us this far – and it seems if a machine does join in, it’ll happen when the machine’s heart can break, too.

PS – let’s give credit to the songwriters, and a gentle reminder that we each have something to sing that only we can:
Singer Ben E. King, Best Known For ‘Stand By Me,’ Dies At 76 [NPR]


Make Noise are turning a classic 1972 synthesis book into a video series

Even as modular synths make a comeback, the definitive work on the topic, first published in 1972, has long languished out of print. But now, one synth maker is translating its ideas to video.

The folks at Make Noise, who have been one of the key makers behind Eurorack’s growth (and a leader on the American side of the pond), have gone all the way back to 1972 to find a reference on the fundamentals behind modular synthesis.

“Where do I find a textbook on modular synthesis?” isn’t an easy question to answer. A lot of understanding modular comes from a weird combination of received knowledge, hearsay, various example patches (some of them also dating back to the 60s and 70s), and bits and pieces scattered around print and online.

But Allen Strange’s Electronic Music: Systems, Techniques, and Controls covers actual theory. It treats the notions of modular synthesis as a fundamental set of skills. The trouble is, it’s out of print, and a used copy could cost you $200-300 thanks to automated online pricing (whether or not anyone would actually pay that).

So it’s great to see Make Noise take this on – if nothing else, as a way to frame teaching their own modules.

And… uh, you might find a PDF of the original text. (I think most people read my own book in pirated form, especially in its Russian and Polish translations – seriously – so I’m looking at this myself as a writer and sometimes educator and pondering what the best way is to teach modular in 2018.)

I’m definitely watching and subscribing to this one, though – and this first video gives me an idea… excuse me, time to load up Pd, Reaktor, and VCV Rack again!

Allen Strange wrote the book on modular synthesizers in the 1970s. Electronic Music: Systems, Techniques, and Controls. Unfortunately since the expanded 1982 edition, it has never been reprinted, and in today’s landscape where more people have access to modular synths than ever before, very few have access to the knowledge contained within. This video series will explore patches both basic and advanced from Strange’s text. Even the simplest patches here yield kernels of knowledge that can be expanded upon in infinite ways. I have been heavily influenced by Strange since long before I became a modular synth educator. Please share this knowledge far and wide. The first video in the series covers one basic and one slightly less basic patch using envelopes.

http://www.makenoisemusic.com


Accusonus explain how they’re using AI to make tools for musicians

First, there was DSP (digital signal processing). Now, there’s AI. But what does that mean? Let’s find out from the people developing it.

We spoke to Accusonus, the developers of loop unmixer/remixer Regroover, to try to better understand what artificial intelligence will do for music making – beyond just the buzzwords. It’s a topic they presented recently at the Audio Engineering Society conference, alongside some other developers exploring machine learning.

At a time when a lot of music software retreads existing ground, machine learning is a relatively fresh frontier. One important distinction to make: machine learning involves training models on example data in advance, then applying those trained models on your computer. But that already opens up some new sound capabilities, as I wrote about in our preview of Regroover, and it can change how you work as a producer.

And the timing is great, too, as we take on the topic of AI and art with CTM Festival and the 2018 edition of our MusicMakers Hacklab. (That call is still open!)

CDM spoke with Accusonus’ co-founders, Alex Tsilfidis (CEO) and Elias Kokkinis (CTO). Elias explains the story from a behind-the-scenes perspective – but in a way that I think remains accessible to us non-mathematicians!

Elias (left) and Alex (right). As Elias is the CTO, he filled us in on the technical inside track.

How do you wind up getting into machine learning in the first place? What led this team to that place; what research background do they have?

Elias: Alex and I started out our academic work with audio enhancement, combining DSP with the study of human hearing. Toward the end of our studies, we realized that the convergence of machine learning and signal processing was the way to actually solve problems in real life. After the release of drumatom, the team started growing, and we brought people on board who had diverse backgrounds, from audio effect design to image processing. For me, audio is hard because it’s one of the most interdisciplinary fields out there, and we believe a successful team must reflect that.

It seems like there’s been movement in audio software from what had been pure electrical engineering or signal processing to, additionally, understanding how machines learn. Has that shifted somehow?

I think of this more as a convergence than a “shift.” Electrical engineering (EE) and signal processing (SP) are always at the heart of what we do, but when combined with machine learning (ML), they can lead to powerful solutions. We are far from understanding how machines learn. What we can actually do today is “teach” machines to perform specific tasks with very good accuracy and performance. In the case of audio, these tasks are always related to some underlying electrical engineering or signal processing concept. The convergence of these principles (EE, SP and ML) is what allows us to develop products that help people make music in new or better ways.

What does it mean when you can approach software with that background in machine learning? Does it change how you solve problems?

Machine learning is just another tool in our toolbox. It’s easy to get carried away, especially with all the hype surrounding it now, and use ML to solve any kind of problem, but sometimes it’s like using a bazooka to kill a mosquito. We approach our software products from various perspectives and use the best tools for the job.

What do we mean when we talk about machine learning? What is it, for someone who isn’t a researcher/developer?

The term “machine learning” describes a set of methods and principles engineers and scientists use to teach a computer to perform a specific task. An example would be the identification of the music genre of a given song. Let’s say we’d like to know if a song we’re currently listening to is an EDM song or not. The “traditional” approach would be to create a set of rules that say EDM songs are in this BPM range and have that tonal balance, etc. Then we’d have to implement specific algorithms that detect a song’s BPM value, a song’s tonal balance, etc. Then we’d have to analyze the results according to the rules we specified and decide if the song is EDM or not. You can see how this gets time-consuming and complicated, even for relatively simple tasks. The machine learning approach is to show the computer thousands of EDM songs and thousands of songs from other genres and train the computer to distinguish between EDM and other genres.
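Ed.: to make that contrast concrete, here’s a minimal sketch of the machine learning route in Python, using librosa for feature extraction and scikit-learn for the classifier. It is not Accusonus code; the file list is a hypothetical placeholder, and a real system would need thousands of labeled songs, as Elias says. -PK

```python
# Sketch only: "EDM or not?" posed as a supervised learning problem.
import numpy as np
import librosa                                    # audio loading + feature extraction
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

def song_features(path):
    """Summarize a song as a small feature vector (tempo + timbre statistics)."""
    y, sr = librosa.load(path, mono=True, duration=60.0)
    tempo, _ = librosa.beat.beat_track(y=y, sr=sr)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=20)
    return np.hstack([np.atleast_1d(tempo), mfcc.mean(axis=1), mfcc.std(axis=1)])

# (path, label) pairs: 1 = EDM, 0 = anything else. Hypothetical filenames;
# in reality you would feed in thousands of labeled songs.
labeled_songs = [("edm_0001.wav", 1), ("folk_0001.wav", 0), ("edm_0002.wav", 1)]

X = np.array([song_features(path) for path, _ in labeled_songs])
y = np.array([label for _, label in labeled_songs])

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)
print("held-out accuracy:", clf.score(X_test, y_test))
print("is 'mystery.wav' EDM?", bool(clf.predict([song_features("mystery.wav")])[0]))  # another placeholder file
```

Note that the rules never get written down explicitly; the classifier infers the boundary between genres from the labeled examples, which is exactly the shift Elias describes. -PK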

Computers can get very good at this sort of very specific task. But they don’t learn like humans do. Humans also learn by example, but don’t need thousands of examples. Sometimes a few or just one example can be enough. This is because humans can truly learn, reason and abstract information and create knowledge that helps them perform the same task in the future and also get better. If a computer could do this, it would be truly intelligent, and it would make sense to talk about Artificial Intelligence (A.I.), but we’re still far away from that. Ed.: lest the use of that term seem disingenuous, machine learning is still seen as a subset of AI. -PK

If a reader would like to read more on the subject, a great blog post by NVIDIA and a slightly more technical blog post by F. Chollet will shed more light on what machine learning actually is.

We talked a little bit on background about the math behind this. But in terms of what the effect of doing that number crunching is, how would you describe how the machine hears? What is it actually analyzing, in terms of rhythm, timbre?

I don’t think machines “hear,” at least not now, and not as we might think. I understand the need we all have to explain what’s going on and find some reference that makes sense, but what actually goes on behind the scenes is more mundane. For now, there’s no way for a machine to understand what it’s listening to, and hence start hearing in the sense a human does.

Inside Accusonus products, we have to choose what part of the audio file/data to “feed” the machine. We might send an audio track’s rhythm or pitch, along with instructions on what to look for in that data. The data we send are “representations” and are limited by our understanding of, for instance, rhythm or pitch. For example, Regroover analyses the energy of the audio loop across time and frequency. It then tries to identify patterns that are musically meaningful and extract them as individual layers.
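Ed.: Regroover’s actual algorithm is proprietary, but one standard, openly documented way to pull “layers” out of a time-frequency representation is non-negative matrix factorization (NMF) of the magnitude spectrogram. Purely as an illustration of the idea, here’s a sketch; the filename is a placeholder, and the four-component split is arbitrary. -PK

```python
# Illustration only - not Regroover's actual method.
# Factor a loop's magnitude spectrogram into a few non-negative "layers",
# then mask the original STFT and resynthesize each layer on its own.
import numpy as np
from scipy.io import wavfile
from scipy.signal import stft, istft
from sklearn.decomposition import NMF

sr, x = wavfile.read("loop.wav")          # placeholder filename
x = x.astype(np.float64)
if x.ndim == 2:
    x = x.mean(axis=1)                    # fold stereo to mono for simplicity

freqs, frames, Z = stft(x, fs=sr, nperseg=2048)
S = np.abs(Z)                             # energy across time and frequency

nmf = NMF(n_components=4, max_iter=400, random_state=0)
W = nmf.fit_transform(S)                  # spectral templates: (freq bins, layers)
H = nmf.components_                       # activations over time: (layers, frames)

for k in range(W.shape[1]):
    layer_energy = np.outer(W[:, k], H[k])
    mask = layer_energy / (W @ H + 1e-9)  # soft, Wiener-style mask per layer
    _, y_k = istft(Z * mask, fs=sr, nperseg=2048)
    y_k = y_k / (np.abs(y_k).max() + 1e-9)
    wavfile.write(f"layer_{k}.wav", sr, (y_k * 32767).astype(np.int16))
```

Each reconstructed layer keeps the loop’s original phase, weighted by how much of the energy the factorization attributed to that component at each point in time and frequency. -PK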

Is all that analysis done in advance, or does it also learn as I use it?

Most of the time, the analysis is done in advance, or just when the audio files are loaded. But it is possible to have products that get better with time – i.e., “learn” as you use them. There are several technical challenges to having our products learn as you use them, including significant processing load and having to run inside old-school DAW and plug-in platforms that were primarily developed for more “traditional” applications. As plug-in creators, we are forced to constantly fight our way around obstacles, and this comes at a cost for the user.


What’s different about this versus another approach – what does this let me do that maybe I wasn’t able to do before?

Sampled loops and beats have been around for many years and people have many ways to edit, slice and repurpose them. Before Regroover, everything happened in one dimension, time. Now people can edit and reshape loops and beats in both time and frequency. They can also go beyond the traditional multi-band approach by using our tech to extract musical layers and original sounds. The possibilities for unique beat production and sound design are practically endless. A simple loop can be a starting point for many musical ideas.

How would you compare this to other tools on the market – those performing these kind of analyses or solving these problems? (How particular is what you’re doing?)

The most important thing to keep in mind when developing products that rely on advanced technologies and machine learning is what the user wants to achieve. We try to “hide” as much of the complexity as possible from the user and provide a familiar and intuitive user interface that allows them to focus on the music and not the science. Our single knob noise and reverb removal plug-ins are very good examples of this. The number of parameters and options in the algorithms would be too confusing to expose to the end user, so we created a simple UI to deliver a quick result to the user.

If you take something as simple as being able to re-pitch samples, each time there’s some new audio process, various uses and abuses follow. Is there a chance to make new kinds of sounds here? Do you expect people to also abuse this to come up with creative uses? (Or has that happened already?)

Users are always the best “hackers” of our products. They come up with really interesting applications that push the boundaries of what we originally had in mind. And that’s the beauty of developing products that expand the sound processing horizons for music. Regroover is the best example of this. Stavros Gasparatos has used Regroover in an installation where he split industrial recordings, routing the layers to six speakers inside a big venue. He tried to push the algorithm to create all kinds of crazy splits and extract inspiring layers. The effect was that in the middle of the room you could hear the whole sound, and when you approached one of the speakers, crazy things happened. We even had some users who extracted inspiring layers from washing machine recordings! I’m sure the CDM audience can think of even more uses and abuses!

Regroover gets used in Gasparatos’ expanded piano project:

Looking at the larger scene, do you think machine learning techniques and other analyses will expand what digital software can do in music? Does it mean we get away from just modeling analog components and things like that?

I believe machine learning can be the driving force for a much-needed paradigm shift in our industry. The computational resources available today not only on our desktop computers but also on the cloud are tremendous and machine learning is a great way to utilize them to expand what software can do in music and audio. Essentially, the only limit is our imagination. And if we keep being haunted by the analog sounds of the past, we can never imagine the sound of the future. We hope accusonus can play its part and change this.

Where do you fit into that larger scene? Obviously, your particular work here is proprietary – but then, what’s shared? Is there larger AI and machine learning knowledge (inside or outside music) that’s advancing? Do you see other music developers going this direction? (Well, starting with those you shared an AES panel on?)

I think we fit among the forward-thinking companies that try to bring this paradigm shift by actually solving problems and providing new ways of processing audio and creating music. Think of iZotope with their newest Neutron release, Adobe Audition’s Sound Remover, and Apple Logic’s Drummer. What we need to share between us (and we already do with some of those companies) is the vision of moving things forward, beyond the analog world, and our experiences on designing great products using machine learning (here’s our CEO’s keynote in a recent workshop for this).

Can you talk a little bit about your respective backgrounds in music – not just in software, but your experiences as a musician?

Elias: I started out as a drummer in my teens. I played with several bands during high school and as a student in the university. At the same time, I started getting into sound engineering, where my studies really helped. I ended up working a lot of gigs, from small venues to stadiums, from cabling and PA setup to mixing the show and monitors. During this time I got interested in signal processing and acoustics and I focused my studies on these fields. Towards the end of university I spent a couple of years in a small recording studio, where I did some acoustic design for the control room, recording and mixing local bands. After graduating I started working on my PhD thesis on microphone bleed reduction and general audio enhancement. Funnily enough, Alex was the one who built the first version of the studio; he was the supervisor of my undergraduate thesis, and we spent most of our PhDs working together in the same research group. It was almost meant to be that we would start Accusonus together!

Alex: I studied classical piano and music composition as a kid, and turned to synthesizers and electronic music later. As many students do, I formed a band with some friends, and that band happened to be one of the few abstract electronic/trip hop bands in Greece. We started making music around an old Atari computer, an early MIDI-only version of Cubase that triggered some cheap synthesizers and recorded our first demo in a crappy 4-channel tape recorder in a friend’s bedroom. Fun days!

We then bought a PC and more fancy equipment and started making our living from writing soundtracks for theater and dance shows. During that period I practically lived as a professional musician/producer and had quit my studies. But after a couple of years, I realized that I was more and more fascinated by the technology aspect of music, so I returned to the university and focused on audio signal processing. After graduating from the Electrical and Computer Engineering Department, I studied acoustics in France and then started my PhD in de-reverberation and room acoustics at the same lab as Elias. We became friends, worked together as researchers for many years, and we realized that we share the same vision of how we want to create innovative products to help everyone make great music! That’s why we founded Accusonus!

So much of software development is just modeling what analog circuits or acoustic instruments do. Is there a chance for software based on machine learning to sound different, to go in different directions?

Yes, I think machine learning can help us create new inspiring sounds and lead us to different directions. Google Magenta’s NSynth is a great example of this, I think. While still mostly a research prototype, it shows the new directions that can be opened by these new techniques.

Can you recommend some resources showing the larger picture with machine learning? Where might people find more on this larger topic?

https://openai.com/

Siraj Raval’s YouTube channel:

Google Magenta’s blog for audio/music applications https://magenta.tensorflow.org/blog/

Machine learning for artists https://ml4a.github.io/

Thanks, Accusonus! Readers, if you have more questions for the developers – or about the machine learning field in general, in music industry developments and in art – do sound off. For more:

Regroover is the AI-powered loop unmixer, now with drag-and-drop clips

http://accusonus.com


Explore harmonies in your browser with this free arpeggiator

Ever wondered what it would be like if the spirit of Philip Glass inhabited one of your web browser tabs? Well, now he can. Sort of.

“Musical Chord Progression Arpeggiator” is a browser-based, JavaScript-powered harmonic exploration tool. Punch in a chord progression, then a root key and church mode, and go to town. The audio plays back in your browser with some fixed bpm choices.

The real gem here is the array of arpeggiator shapes, which are copious and endlessly amusing. Music! Theory! Nerds! Go!
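If you want a feel for what a tool like this is doing under the hood, here’s a rough sketch in Python: build diatonic triads from a root and a church mode, then run an “arpeggiator shape” (a pattern of chord-tone indices) over a progression. The mode table, shape names, and output format are my own simplifications, not taken from the actual CodePen.

```python
# Hypothetical, heavily simplified sketch. MIDI note 60 = C4.
MODES = {                                  # church modes as semitone patterns
    "ionian":  [0, 2, 4, 5, 7, 9, 11],
    "dorian":  [0, 2, 3, 5, 7, 9, 10],
    "aeolian": [0, 2, 3, 5, 7, 8, 10],
}
NOTE_NAMES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]

ARP_SHAPES = {                             # indices into a three-note chord
    "up":      [0, 1, 2],
    "down":    [2, 1, 0],
    "up-down": [0, 1, 2, 1],
    "pedal":   [2, 0, 2, 1],
}

def diatonic_triad(scale, degree):
    """Stack diatonic thirds on a 0-based scale degree, keeping octaves straight."""
    return [scale[(degree + i) % 7] + 12 * ((degree + i) // 7) for i in (0, 2, 4)]

def arpeggiate(root_midi, mode, progression, shape, repeats=2):
    scale = [root_midi + step for step in MODES[mode]]
    notes = []
    for degree in progression:             # e.g. [0, 5, 3, 4] = I-vi-IV-V
        chord = diatonic_triad(scale, degree)
        for _ in range(repeats):
            notes.extend(chord[i] for i in ARP_SHAPES[shape])
    return notes

melody = arpeggiate(60, "ionian", [0, 5, 3, 4], "up-down")
print([NOTE_NAMES[n % 12] + str(n // 12 - 1) for n in melody])
```

From there, writing the note list out as MIDI is a small step, which is exactly the kind of fork I’d love to see someone attempt.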


But it’s also fun that, this being in the browser, you can click the ‘view’ menu at top and hop into a JavaScript editor. The code is pretty readable, too, even if you’re not an expert.

I hope someone actually forks this and adds MIDI export, for instance. And I could imagine this idea of all these arpeggiator shapes in an app, too.

Plus, I get to say hat tip to Brian Eno. (Wait. It’s Eno. Deep bow to Brian Eno.)

Go go go:

https://codepen.io/jakealbaugh/full/qNrZyw/


iZotope Mobius and the crazy fun of Shepard Tones

I always figure the measure of a good plug-in is, you want to tell everyone about it, but you don’t want to tell everyone about it, because then they’ll know about it. iZotope’s Möbius is in that category for me – it’s essentially a moving filter effect. And it’s delicious, delicious candy.

iZotope have been on a bit of a tear lately. The company might be best known for mastering and restoration tools, but in 2016, they’ve had a series of stuff you might build new production ideas around. And I keep going to their folder in my sets. There’s the dynamic delay they built – an effect so good that you’ll overlook the fact that the UI is inexplicably washed out. (I just described it to a friend as looking like your license expired and the plug-in was disabled or something. And yet… I think there’s an instance of it on half the stuff I’ve made since I downloaded it.)

More recently, there was also a plug-in chock full of classic vocal effects.

iZotope Möbius brings an effect largely used in experimental sound design into prime time.

At its core is a perceptual trick called the “Shepard Tone” (named for cognitive scientist Roger Shepard). Like the visual illusion of stripes on a rotating barber pole, the sonic illusion of the Shepard Tone (or the continuously-gliding Shepard–Risset glissando) is such that you perceive endlessly rising motion.

Here, what you should do for your coworkers / family members / whatever is definitely to turn this on and let them listen to it for ten hours. They’ll thank you later, I’m sure.

The Shepard Tone describes synthesis – just producing the sound. The Möbius Filter applies the technique to a resonant filter, so you can process any existing signal.

Musical marketing logic is such that of course you’re then obligated to tell people they’ll want to use this effect for everything, all the time. EDM! Guitars! Vocals! Whether you play the flugelhorn or are the director of a Bulgarian throat singing ensemble, Möbius Filter adds the motion and excitement every track needs!

And, uh, sorry iZotope, but as a result I find the sound samples on the main page kind of unlistenable. Of course, taste is unpredictable, so have a listen. (I guess actually this isn’t a bad example of a riser for EDM so much as me hating those kinds of risers. But then, I like that ten hours of glissandi above, so you probably shouldn’t listen to me.)

https://www.izotope.com/en/products/create-and-design/mobius-filter/sounds.html

Anyway, I love the sound on percussion. Here’s me messing around with that, demonstrating the ability to change direction, resonance, and speed, with stereo spatialization turned on:

The ability to add sync effects (and hocketing, with triplet or dotted rhythms) for me is especially endearing. And while you’ll tire quickly of extreme effects, you can certainly make Möbius Filter rather subtle, by adjusting the filter and mix level.

Möbius Filter is US$49 for most every Mac and Windows plug-in format. A trial version is available.


https://www.izotope.com/en/products/create-and-design/mobius-filter.html

It’s worth learning more about the Shepard and Risset techniques in general, though – get ready for a very nice rabbit hole to climb down. Surprisingly, the Wikipedia article is a terrific resource:

Shepard tone

If you want to try coding your own Shepard tone synthesis, you can do so in the free and open source, multi-platform environment SuperCollider. In fact, SuperCollider is what powered the dizzying musical performance by Marcus Schmickler that CDM co-hosted with CTM Festival last month here in Berlin. Here’s a video tutorial that will guide you through the process (though there are lots of ways to accomplish this).
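If SuperCollider feels like too big a leap, the same idea fits in a few lines of Python with numpy: each tone is a stack of octave-spaced sine partials under a fixed, bell-shaped loudness curve in log frequency, so stepping the pitch class up by a semitone each time seems to rise forever. The specific constants below are arbitrary starting points, not anyone’s canonical recipe.

```python
# A minimal sketch of a discrete Shepard scale (not the continuous Risset glissando).
import numpy as np
from scipy.io import wavfile

SR = 44100
STEP_DUR = 0.25            # seconds per scale step
N_STEPS = 48               # four times around the circle of semitones
N_OCTAVES = 8              # simultaneous octave-spaced partials per tone
F_LOW = 20.0               # frequency of the lowest partial
CENTER = np.log2(440.0)    # center of the loudness bell, in octaves
WIDTH = 2.0                # width of the bell, in octaves

def shepard_step(semitone, dur=STEP_DUR, sr=SR):
    t = np.arange(int(dur * sr)) / sr
    tone = np.zeros_like(t)
    for k in range(N_OCTAVES):
        f = F_LOW * 2 ** (k + (semitone % 12) / 12.0)
        amp = np.exp(-0.5 * ((np.log2(f) - CENTER) / WIDTH) ** 2)
        tone += amp * np.sin(2 * np.pi * f * t)
    fade = int(0.005 * sr)                 # tiny fades to avoid clicks between steps
    tone[:fade] *= np.linspace(0.0, 1.0, fade)
    tone[-fade:] *= np.linspace(1.0, 0.0, fade)
    return tone

scale = np.concatenate([shepard_step(s) for s in range(N_STEPS)])
scale /= np.abs(scale).max()
wavfile.write("shepard_scale.wav", SR, (scale * 32767).astype(np.int16))
```

Loop the resulting file and you have your own ten hours of endless ascent, no SuperCollider required.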

The technique doesn’t stop in synthesis, though. Just as the same basic perceptual trick can be applied to rising visuals and rising sounds, it can also be used in rhythm and tempo – which sounds every bit as crazy as you imagine. Here’s a description of that, with yet more SuperCollider code and a sound example using breaks. Wow.

Risset rhythm – eternal accelerando

Finally, the 1969 rendition of this technique by composer James Tenney is absolutely stunning. I don’t know how Ann felt about this, but it’s titled “For Ann.” (“JAMES! MY EARS!” Okay, maybe not; maybe Ann was into this stuff. It was 1969, after all.) Thanks to Jos Smolders for the tip.

Good times.

So, between Möbius Filter and SuperCollider, you can pretty much annoy anyone. I’m game.

https://supercollider.github.io


Free Clapping Music App Teaches You Steve Reich – And Rhythm

What’s the sound of one person performing Clapping Music? This.

Before there was Rock Band and Guitar Hero, there was Steve Reich. His 1972 work Clapping Music is a rhythmic etude, and like all compositional etudes, it’s also something you can think of as a “game.”

Any musical score is a graphical representation that’s meant to help you understand something that’s normally heard, not seen. You can use traditional notation – and Clapping Music works well as that.

As an iPhone app, Clapping Music the work has some new tricks. The “score” – the app – can judge your rhythm. Fail to tap accurately, and it’s “game over” – start over and try again. And whereas the composition requires two people, now you can play along with your iPhone. You can also see a different visual representation, one that’s, incidentally, close to those used in some forms of ethnomusicology and that presents time in a more proportional way than classical Western scores do. (That is, whereas engraved scores arrange things to make them look visually neat, but squeezes and expands the representation of time in the process, this form of graphical notation displays time and spacing as one and the same.)

The app also has some extras to learn more about Reich’s music.


You can thank London-based developer Touchpress for this app and others that explore teaching music through software. They already built The Orchestra, Juilliard String Quartet, and Beethoven’s 9th Symphony, which offer interactive tours that let you experience those works in ways something like a book can’t provide. I’m really impressed by their apps – they’re accessible to total newcomers to music, and yet they’re still engaging to someone like me who’s been through years and years of classical music education. That’s no small challenge. And as a Clapping Music fan, it’s fun to see the work in a new way.

Part of why their apps work is that they pair talented developers and designers with the musical experts best able to cover the music. So in this case, we have Steve Reich, researchers from Queen Mary University of London, and the London Sinfonietta.

http://clappingmusicapp.com/

Two feature requests, though (a little unfair, given the app is free, but must be said):
1. Notation view.
2. Onset detection via the mic, so you can actually clap!

Because, oddly, all of this makes me think that maybe the age of apps and screens is the perfect time to rediscover making scores. There’s something really charming about this:

[Image: the notated score of Clapping Music]

And we live in an era that truly gives us the opportunity for “and” rather than “either/or.”


Get Inspired with Excerpts of Ableton’s Making Music Book


Following our interview with author Dennis DeSantis, we can start your weekend with some sage advice from his book Making Music. While published by Ableton, this isn’t an Ableton book. It lies at the boundary of software and music, at the contact points of creativity in the tool.

For a CDM exclusive excerpt, I wanted to highlight two chapters. One deals with the question of how to overcome default settings – this cries out as almost a public service announcement for people making 120 bpm 4/4 tunes because that’s what pops up when you start a new project in Live and many other DAWs. The other looks at programming drums by grounding patterns in the physical – it’s no accident that Dennis is himself a trained percussionist.

Even if you did land a copy of the printed edition already, this seems a perfect “book club” affair for us to share. Thanks to Dennis and Ableton for making them available; I hope it lights a spark for you and people you know. -Ed.

The Tyranny of the Default

Problem:
Every time you’re inspired to start a new song, you open your DAW and are immediately terrified by the blank project. Maybe you have a simple melody, bass line, or drum part in your head. But in order to hear it, you first have to load the appropriate instruments, set the tempo, maybe connect a MIDI controller, etc. By the time you’ve gotten your DAW to a state where you can actually record, you’ve either lost the motivation or you’ve forgotten your original musical idea.

Because DAWs have to cater to a wide range of users, they are often designed to work out of the box with a collection of default options and a basic screen layout that will be offensive to no one but probably also not optimal for anyone. This inevitably leads to a phenomenon that software developers call “the tyranny of the default”: Since most users will never change their default software options, the seemingly small decisions made by developers may have a profound effect on the way users will experience the software every day.

Here’s how to overcome the tyranny of the default in your own studio.

Solution:
Rather than allowing your DAW to dictate the environment in which you’ll start each track, take the time to build your own default or template project. People often think of templates as blank slates containing a bare minimum of elements, and most default templates provided by DAWs are exactly that; maybe one or two empty tracks, perhaps a return track containing a single effect. But if you regularly start work in a similar way (and even if you don’t), building a template that’s unique to your musical preferences and working style can save you lots of time when you start a new song, allowing you to more quickly capture an initial musical idea from your head into your DAW.

For example, many DAWs set a default tempo of 120 bpm for new projects. If you tend to work in a genre that is generally in a different range of tempos, save yourself time by saving your template with a more appropriate tempo. Additionally, your DAW’s default project likely makes a lot of assumptions about how many output channels you’ll be using (usually two), as well as more esoteric settings like sample rate, bit depth, and even the interface’s color scheme. If you prefer different settings, don’t change them every time you start a new song. Instead, make these changes once and save them in your own template.

Additionally, if you regularly use a particular collection of instruments and/or effects, try pre-loading them into tracks in your DAW and saving them into your template. If you have a go-to sound that you use for sketching out ideas (maybe a sampled piano or a particular preset), preload that preset in your template and even arm the track for recording. This way you can be ready to play and record as soon as the project is loaded.

Some DAWs even allow you to create templates for different types of tracks. For example, if you regularly use a particular combination of effects on each track (such as a compressor and EQ), you could preload these devices—and even customize their parameters—into your default tracks. Then each time you create a new track in any project, you’ll have these effects in place without needing to search through your library of devices.

If you regularly work in a variety of genres, you should consider making multiple templates, each one customized for the different sounds and working methods you prefer. Even if your DAW doesn’t natively support multiple templates, you can still create your own collection; you’ll just need to remember to Save As as soon as you load one, so you don’t accidentally overwrite it.

Some producers, recognizing the value of a highly customized template project, have even started selling templates containing nearly (or even completely) finished songs, with the stated goal that newer producers can use these to learn the production techniques of the pros. If that’s really how you intend to use them, then these are a potentially valuable learning resource. But be careful to avoid just using these as “construction kits” for your own music. This is potentially worse than working from an empty default and is a grey area between original music and paint-by-numbers copying (or worse, outright plagiarism).

Programming Beats 4: Top, Bottom, Left, Right

Problem:
From listening to a lot of music, you have a general understanding of how to program beats that sound similar to those in the music that inspires you. But you don’t really have a sense of how the various drums in a drum kit relate to each other or the way human drummers think when they sit down at the drums and play. As a result, you’re concerned that your programmed beats are either too mechanical sounding or are simply the result of your own interpretation and guesswork about what you hear in other music.

Even if you have no intention of writing “human”-sounding drum parts, it can be helpful to understand some of the physical implications of playing a real drum kit. Here are some ways that drummers approach their instrument.

Solution:
At a philosophical level, a drum kit can be thought of as divided into top and bottom halves. The top half includes all of the cymbals: the hi-hat, ride, crashes, and possibly more esoteric cymbals like splashes, Chinese cymbals, gongs, etc. These are the “top” half for two reasons: They’re both physically higher than the drums, and they also occupy a higher range in the frequency spectrum. In contrast, the bottom half is the drums themselves: the kick, snare, and toms. (The snare is a special case and can be thought of as somewhere in between the top and the bottom in frequency. But for our purposes, let’s consider it part of the bottom group).

Drummers tend to unconsciously approach beat making from either the “top down” or the “bottom up,” depending primarily on genre. Jazz drumming beats, for example, are generally built from the top down, with the ride cymbal pattern being the most important element, followed by the hi-hat (played by the foot). In this context, the kick and snare drum serve to accent or interrupt the pattern which is established by the cymbals. A typical jazz drumming pattern might look like this:

[Notation: a typical jazz drumming pattern]

In contrast, rock, pop, or R&B drumming beats are built from the bottom up, with the interplay between the kick and the snare comprising the most important layer and the hi-hat or ride cymbal patterns serving as a secondary element. A typical rock drumming pattern might look like this:

[Notation: a typical rock drumming pattern]

Note that in both jazz and rock beats, the cymbals generally play simple, repeating patterns, while the kick and snare play gestures that are more asymmetrical. But in jazz, those simple cymbal patterns are fundamental signifiers of the genre. In rock, the cymbal patterns are secondary in importance, while the asymmetrical kick and snare gestures are what define the music.

An awareness of these drumming concepts might give you some things to think about when writing your own electronic drum parts. Are you thinking from the top (cymbals) down, or from the bottom (kick and snare) up? Is the genre you’re working in defined by repeating patterns (such as the steady four-on-the-floor kick drum of house and techno) or by asymmetrical gestures (such as the snare rolls used for buildups in trance)?

In addition to the top/bottom dichotomy, drummers also must make decisions along the left/right axis when determining how a particular pattern is divided between the left and right hands. On a drum kit, some of this is determined by the physical location of the instruments. But for an instrument like a hi-hat that can be reached by either hand, there is often a subtle difference in sound depending on how the pattern is played. For example, consider the following beat:

[Notation: a beat with a continuous sixteenth-note hi-hat pattern]

At slow-to-moderate tempos, most drummers would probably play the hi-hat part with one hand, leaving the other free for the snare drum. But once the tempo becomes too fast, it’s no longer possible to play a continuous stream of sixteenth notes with one hand. At this point, many drummers would switch to playing the hi-hat with alternating sticking, each stroke with the opposite hand. But this requires some compromises: Beats two and four require both hands to be playing together, so the player must either move one hand very quickly between the snare and hi-hat or play at least two consecutive hi-hat notes with the same hand. In both cases, there will likely be a slightly different resulting sound. Even the subtle physical differences between two drumsticks can result in a different sound versus when a pattern is played with a single hand.

Of course, none of these physical restrictions apply to the electronic domain by default. There’s no inherent physical speed limit and no need for any notion of “alternating stickings.” At any tempo, consecutive notes can sound completely identical if that’s your intent. But if you’d like to apply some of the sonic characteristics that come about as a result of these human restrictions, you can do so manually. For example, you could try creating a very small change in velocity for every other note in a repeating pattern. Or with a bit more work, you could actually use a slightly different sound for every other note. Some software samplers have a feature called “round robin” that automatically plays a different sample with each key press.
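As a concrete example of that last idea, here’s a small sketch using the mido library to write one bar of a generic rock groove in which every other sixteenth-note hi-hat gets a slightly lower velocity, the way a drummer’s weaker hand would play it. The velocities, pattern, and General MIDI drum notes are my own choices, not anything from the book.

```python
# Assumes the mido package; drum notes follow General MIDI (36 kick, 38 snare,
# 42 closed hi-hat) on channel 10 (index 9).
import mido

TPB = 480                                  # ticks per quarter note
SIXTEENTH = TPB // 4

mid = mido.MidiFile(ticks_per_beat=TPB)
track = mido.MidiTrack()
mid.tracks.append(track)

for step in range(16):                     # one 4/4 bar of sixteenth notes
    hat_velocity = 96 if step % 2 == 0 else 72   # "strong hand" vs. "weak hand"
    notes = [(42, hat_velocity)]
    if step in (0, 8):
        notes.append((36, 110))            # kick on beats 1 and 3
    if step in (4, 12):
        notes.append((38, 105))            # snare on beats 2 and 4
    for note, velocity in notes:
        track.append(mido.Message('note_on', channel=9, note=note, velocity=velocity, time=0))
    track.append(mido.Message('note_off', channel=9, note=42, velocity=0, time=SIXTEENTH))
    for note, _ in notes[1:]:
        track.append(mido.Message('note_off', channel=9, note=note, velocity=0, time=0))

mid.save("humanized_groove.mid")
```

Push the idea further by alternating between two slightly different hi-hat samples instead of two velocities, and you have a hand-rolled version of round robin.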

Thinking like a drummer can be a useful exercise when writing beats for any genre—even ones that have no overt relationship to acoustic music at all.


A New Book from Ableton Wants to Help You Make Music


Imagine if the Eno/Schmidt Oblique Strategies, a music theory book, and an Ableton quick-start manual all got caught in a transporter accident with a bunch of different music producers.*

That seems to be what you get with Making Music: A Book of Creative Strategies. In one sense, the aim is to be none of these things. It’s not a manual. It’s not a template for music making. It doesn’t, apparently, rely much on musical theory in the traditional sense.

But, then, if you know the man behind it – Dennis DeSantis, a classical percussion virtuoso and composer turned documentation czar – this all makes sense.

The book is divided into the three places where you might become stuck creatively:

1. Beginning
2. Progressing
3. Finishing

And in each section, it includes both problems and solutions, plus hands-on reflections from artists, ranging from experimental to club. (I wish it had sections for “soups” and “desserts,” but this isn’t my book.) Sometimes, it’s talking about specific harmonies in house music. Sometimes, it’s reflecting on the very act of listening.


In fact, if anything, the whole thing seems a bit like Fux’s Gradus ad Parnassum rewritten, Julia Child style, for anyone frustrated with a blank or overcrowded Ableton Live session display.

But I’m delighted to see it. I can’t imagine myself trying to organize a book in this particular way – we’ll talk to Dennis shortly about how he went about it and offer an excerpt for you to read, if you’re curious. But it seems a marvelous challenge. And it represents the sort of discourse I hope we have more of – one that lies at the intersection of philosophy and creativity and the specific particularities both of musical craft and technological praxis.



A composer in the 18th century had to tackle, simultaneously, the deep meaning of poetry and whether that clarinet player could really easily finger that melody they just wrote. So it shouldn’t seem a conflict of interests when we have to wrangle with a particular detail of automating a plug-in and the grand sweep of the form of the track we’re finishing. The clash between the specific and the profound, and the desperate struggle to actually make something we like, is at the essence of creative process.

If you have specific things you’d like us to ask Dennis about this question, or documentation of music software in general, or cool things he knows about new music on the marimba, let us know.

More info, excerpts:
https://makingmusic.ableton.com/

Note to wise people: has any music software company really done anything like this? I don’t think so. For that matter, I can only think of a handful of books that attempted this sort of scope (though a smattering of this way of thinking has been added in over the years). One advantage of Ableton as patron: you don’t have to convince a publisher this would work.

Obligatory nerd-out: *Okay, think of this as the reverse of the transporter accident in Season 1, Episode 5 “The Enemy Within.” In this version, all those parts form some new composite that comes out neatly as a … book. Which is cool. Also, Space Dog. I may be a hopeless nerd, but the advantage of hopeless nerds is we always know where to find weird furry unicorn dogs for you.
