Watch an Ableton Live sequence made physical on the monome grid

The monome made history by transforming the virtual world of the computer into a low-fidelity grid of lights and buttons. But it’s no less magical today – especially in the hands of stretta.

Watch:

Matthew Davidson has been an innovative developer of patches for the monome since its early days. And that’s a principal innovation of the hardware: by reducing the “screen” to a minimal on/off grid, and lighting buttons independently from your input, the monome becomes a distillation of the ideas in a particular computer patch. Just like a fretboard or the black and white keys of a grand piano, a music box roll or the notes on a staff, it’s an abstraction of the music itself. And its simplicity is part of its power – a simplicity that a mouse and a high-definition color display lack.

Matthew is using some features the first-generation monome didn’t have – the varibright lights, and a recommended 128-format grid. But otherwise, this riffs on the original idea.

And remember last week when we covered Berklee College of Music introducing the study of electronic instruments? Well, Davidson has developed a whole series of these kinds of clever inventions as a set of studies in grid performance.

Which makes the choice of Bach fitting: this is classical grid playing from a virtuoso – a Well-Tempered Monome, if you like.

Check out the full gridlab collection:

https://github.com/stretta/gridlab

Previously:

What do you play? Berklee adds electronic digital instrument program

Updated: so what about other grids?

Via social media, Matthew Davidson elaborates on why this setup requires the monome – which still says a lot about the uniqueness of the monome design:

First up is 64 buttons versus 512. It’ll work on a 128 kinda, barely, but it is awkward. An implementation of a fold mode might make that useable.

Second is the protocol. The monome protocol provides the ability to update a quadrant with a simple, compact message. This is what is used to achieve the fluidity. If you want to update the entire grid of a Launchpad, you have to send 64 individual messages, one for each LED.

Lastly is the issue of MIDI devices and M4L. The monome uses serialosc to communicate. Because of this, a monome M4L device can send and receive MIDI data at the same time as sending and receiving button/LED data.

[Reproduced with permission.]
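
To make that quad-versus-per-LED difference concrete, here’s a rough sketch in Python – not Matthew’s patch, just an illustration using the python-osc and mido libraries. The serialosc port, app prefix, and the Launchpad note layout are all assumptions that vary by setup.

```python
# Rough illustration of the difference Davidson describes - not his patch.
# Assumptions: python-osc and mido are installed, serialosc is running, the app
# prefix is "/monome", and the device's OSC port is 8000 (in reality serialosc
# assigns the port, and apps discover it via port 12002).

from pythonosc.udp_client import SimpleUDPClient
import mido

# --- monome: one serialosc message repaints an entire 8x8 quad ---
monome = SimpleUDPClient("127.0.0.1", 8000)
rows = [0b10101010, 0b01010101] * 4                  # 8 bitmasks, one per row
monome.send_message("/monome/grid/led/map", [0, 0] + rows)

# --- Launchpad-style controller: one MIDI message per LED ---
out = mido.open_output("Launchpad")                  # port name is system-dependent
for row in range(8):
    for col in range(8):
        note = row * 16 + col                        # classic Launchpad layout (assumed)
        color = 127 if (row + col) % 2 else 0        # velocity doubles as LED color
        out.send(mido.Message("note_on", note=note, velocity=color))
# ...that's 64 separate messages to repaint the same 8x8 area.
```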

Of course, if you have other DIY ideas here, we’d love to hear them!

Yep, you can go virtuoso with ROLI – DiViNCi, Alluxe show you how

You may have met ROLI’s Seaboard and Lightpad Blocks – the squishy performance controllers for computers and mobile. But all these promises about futuristic instruments aside, can you really wail on them? Computer says yes.

Finger drummer virtuoso DiViNCi is an absolute monster on these things. It reminds me of a couple of hyperactive drummer friends I grew up with, rapping on tables, only this actually works as a live performance. And whatever genre you’re into, this proves that if your ideas happen to be, you know – fast ideas – you can make them happen. Watch:

There’s actually a lot going on there, so even more useful than drooling over this performance demo is watching step-by-step as he pulls apart his live setup. He came to the jam without a plan – but that in turn means some planning in the setup itself, to make this function well as an all-in-one, one-man-band rig. This involves setting up some keys in advance, and configuring sounds, so that the setup is out of the way and he can lose himself and jam – even literally with his eyes closed.

ROLI’s hardware – for the moment, at least – doesn’t make any sound on its own, so it’s necessary to dig into the ROLI Dashboard to connect the hardware with software. That software in turn got some updates, recently, if you haven’t checked in on it lately.

It’s important to DiViNCi’s set that he combines the talkbox and the Blocks-controlled software instrument. Let’s check in, too, with Laura Escude aka Alluxe, and her “future classical” setup. Laura is someone special, in that she’s not only built a career as a solo musician and electronic instrumentalist, but also as a high-powered teacher and consultant, setting up live shows on the biggest imaginable scale for the likes of Kanye West and others. (She was also just added to the lineup at the next Ableton Loop in her home city LA in the fall, so see you all in California, hopefully!)

That said, it’s really Laura’s own performances that are the most personal. Instead of the ultra-compact Blocks, here she uses the Seaboard RISE keyboard controller – still my personal favorite. (Just squishy enough, more room to play on, but not so big that you can’t tote it around… and unlike the very first Seaboard, not too squishy. Squishy – technical term, hope you’re keeping up.)

She works with Ableton Live to set up sounds so the instrument can work through her setlist and stay expressive as she focuses on other stuff – like singing, for example.

That’s an interesting way of doing it, by the way – program changes in Live, placed inside clips and triggered by follow actions. (I’ve been procrastinating on a story just about how to manage different sounds in Live sets … it’s time.)

Some more resources:

Use Seaboard RISE with Kontakt

Use RISE with Apple Logic Pro and Equator

My Seaboard artist stories

Creative software can now configure itself for control, with OSC

Wouldn’t it be nice if, instead of manually assigning every knob and parameter, software was smart enough to configure itself? Now, visual software and OSC are making that possible.

Creative tech has been moving forward lately thanks to a new attitude among developers: want something cool? Do it. Open source and/or publish it. Get other people to do it, too. We’ve seen that as Ableton Link transformed wireless sync across iOS and desktop. And we saw it again as software and hardware makers embraced more expression data with MIDI Polyphonic Expression. It’s a way around “chicken and egg” worries – make your own chickens.

Open Sound Control (OSC) has for years been a way of getting descriptive, high-resolution data around. It’s mostly been used in visual apps and DIY audiovisual creations, with some exceptions – Native Instruments’ Reaktor has a nice implementation on the audio side. But what it was missing was a way to query those descriptive messages.

What would that mean? Well, basically, the idea would be for you to connect a new visual app or audio tool or hardware instrument and interactively navigate and assign parameters and controls.

That can make tools smarter and auto-configuring. Or to put it another way – no more typing in the names of parameters you want to control. (MIDI is moving in a similar direction, if via a very different structure and implementation, with something called MIDI-CI or “Capability Inquiry.” It doesn’t really work the same way, but the basic goal – and, with some work, the end user experience – is more or less the same.)

OSC Queries are something I’ve heard people talk about for almost a decade now. But now we have something real you can use right away. Not only is there a detailed proposal for how to make the idea work, but visual tools VDMX, Mad Mapper, and Mitti all have support now, and there’s an open source implementation for others to follow.

Vidvox (makers of VDMX) have led the way, as they have with a number of open source ideas lately. (See also: a video codec called Hap, and an interoperable shader standard for hardware-accelerated graphics.)

Their implementation is already in a new build of VDMX, their live visuals / audiovisual media software:

https://docs.vidvox.net/vdmx_b8700.html

You can check out the proposal on their site:

https://github.com/vidvox/oscqueryproposal

Plus there’s a whole dump of open source code. Developers on the Mac get a Cocoa framework that’s ready to use, but you’ll find some code examples that could be very easily ported to a platform / language of your choice:

https://github.com/Vidvox/VVOSCQueryProtocol

There’s even an implementation that provides compatibility in apps that support MIDI but don’t support OSC (which is to say, a whole mess of apps). That could also be a choice for hardware and not just software.

They’ve even done this in-progress implementation in a browser (though they say they will make it prettier):

Here’s how it works in practice:

Let’s say you’ve got one application you want to control (like some software running generative visuals for a live show), and then another tool – or a computer with a browser open – connected on the same network. You want the controller tool to map to the visual tool.

Now, the moment you open the right address and port, all the parameters you want in the visual tool just show up automatically, complete with widgets to control them.
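
If you’re curious what that looks like under the hood, here’s a minimal client-side sketch in Python. The address and port are made up (real apps show you theirs), and the attribute names follow the Vidvox proposal.

```python
# Minimal sketch of the client side, assuming an OSCQuery server at the address
# below (hypothetical - real apps display theirs) and the attribute names from
# the Vidvox proposal: CONTENTS, FULL_PATH, TYPE.

import json
from urllib.request import urlopen

HOST = "http://127.0.0.1:2345"   # hypothetical host/port of the OSCQuery server

def walk(node, found):
    """Recursively collect addressable parameters from the JSON namespace."""
    if "FULL_PATH" in node and "TYPE" in node:
        found.append((node["FULL_PATH"], node["TYPE"]))
    for child in node.get("CONTENTS", {}).values():
        walk(child, found)

tree = json.load(urlopen(HOST + "/"))   # the whole namespace arrives as one JSON document
params = []
walk(tree, params)
for path, typetag in params:
    print(path, typetag)                # e.g. /layer1/opacity f

# A controller app would now build a widget per path and send plain OSC to each one.
```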

And it’s (optionally) bi-directional. If you change your visual patch, the controls update.

In VDMX, for instance, you can browse parameters you want to control in a tool elsewhere (whether that’s someone else’s VDMX rig or MadMapper or something altogether different):

And then you can take the parameters you’ve selected and control them via a client module:

All of this is stored as structured data – JSON files, if you’re curious. But this means you could also save and assign mappings from OSC to MIDI, for instance.

Another example: you could have an Ableton Live file with a bunch of MIDI mappings. Then you could, via experimental code in the archive above, read that ALS file, and have a utility assign all those arbitrary MIDI CC numbers to automatically-queried OSC controls.
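
As a hedged sketch of that bridging idea – not the experimental utility in the archive – suppose you’ve already pulled a set of OSC-path-to-CC-number pairs out of a Live set. Forwarding incoming OSC to those CCs might then look something like this; every name and number here is a placeholder.

```python
# Hedged sketch of the bridging idea, not the experimental utility in the repo.
# Assume you've already extracted {OSC path: MIDI CC number} pairs from the Live
# set (an .als file is gzip-compressed XML; the Vidvox archive has example code
# for reading it). This just forwards incoming OSC values to those CCs.
# The paths, CC numbers, and MIDI port name below are hypothetical.

import mido
from pythonosc.dispatcher import Dispatcher
from pythonosc.osc_server import BlockingOSCUDPServer

mapping = {"/layer1/opacity": 20, "/layer1/rate": 21}    # placeholder mapping
midi_out = mido.open_output("IAC Driver Bus 1")          # port name varies by system

def forward(address, value):
    """Turn a normalized (0.0-1.0) OSC float into the mapped 7-bit CC message."""
    cc = mapping[address]
    scaled = int(max(0.0, min(1.0, float(value))) * 127)
    midi_out.send(mido.Message("control_change", control=cc, value=scaled))

dispatcher = Dispatcher()
for path in mapping:
    dispatcher.map(path, forward)

BlockingOSCUDPServer(("127.0.0.1", 9000), dispatcher).serve_forever()
```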

Think about that for a second: then your animation software could automatically be assigned to trigger controls in your Live set, or your live music controls could automatically be assigned to generative visuals, or an iPad control surface could automatically map to the music set when you don’t have your hardware controller handy, or… well, a lot of things become possible.

We’ll be watching OSCQuery. And it may be of enough interest to developers that we can host some discussion here on CDM to help move things forward.

Follow Vidvox:

https://vdmx.vidvox.net/blog

And previously, watching MIDI get smarter (smarter is better, we think):

MIDI evolves, adding more expressiveness and easier configuration

MIDI Polyphonic Expression is now a thing, with new gear and software

Plus an example of cool things done with VDMX, by artist Lucy Benson:

Free Ableton Live tool lets you control even more arcane hardware

They’re called NRPNs. It sounds like some covert military code, or like your cat walked on your keyboard. But they’re a key way to control certain instruments via MIDI – and now you have a powerful way to do just that in Ableton Live, for free.

NRPN stands for “Non-Registered Parameter Number” in MIDI, which is a fancy way of saying “we have a bunch of extra MIDI messages and no earthly clue how to identify them.” But what that means in practical terms is, many of your favorite synthesizers have powerful features you’d like to control and automate and … you can’t. Ableton Live doesn’t support these messages out of the box.

It’s likely a lot of people own synths that require NRPN messages, even if they’ve never heard of them. The Dave Smith Instruments Prophet series, DSI Tetra, Novation Peak, Roger Linn Linnstrument, and Korg EMX are just a few examples. (Check your manual and you’ll see.)

Now, you could dig into Max for Live and do this by hand. But better than that is to download a powerful free tool that does the hard work for you, via a friendly interface.
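
For the curious, “doing it by hand” boils down to this: an NRPN isn’t a single message at all, but a standard sequence of four ordinary CC messages – CC 99/98 select the parameter number, CC 6/38 carry the value, each split into most/least significant 7-bit halves. A minimal sketch with the mido library (not Bravetti’s device); the port name and the parameter/value numbers are placeholders.

```python
# Minimal NRPN sketch with mido - check your synth's manual for its actual
# NRPN parameter numbers; the ones below are made up.

import mido

def send_nrpn(port, parameter, value, channel=0):
    """Send one NRPN as the standard four-CC sequence (99/98 = number, 6/38 = value)."""
    port.send(mido.Message("control_change", channel=channel, control=99, value=parameter >> 7))    # NRPN MSB
    port.send(mido.Message("control_change", channel=channel, control=98, value=parameter & 0x7F))  # NRPN LSB
    port.send(mido.Message("control_change", channel=channel, control=6,  value=value >> 7))        # Data Entry MSB
    port.send(mido.Message("control_change", channel=channel, control=38, value=value & 0x7F))      # Data Entry LSB

out = mido.open_output("Your Synth")         # port name depends on your hardware
send_nrpn(out, parameter=1043, value=8192)   # hypothetical 14-bit parameter and value
```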

Uruguay-born, Brazil-based superstar artist and ultra-hacker Gustavo Bravetti has come to our rescue. This is now the second-generation version of his free Max for Live device – and it’s got some serious power inside. The original version was already the first programmable NRPN generator for Live; the new edition adds MIDI learn and bidirectional communication.

It’s built in Max 8 with Live 10, so for consistency you’ll likely want to use Live 10 or later. (Max for Live is required, which is also included in Suite.)

Features:

Up to 8 NRPN messages per device
Multiple devices can be stacked
Setup parameters in NRPN or MSB/LSB [that’s “most significant” and “least significant” byte – basically, a method of packing extra data resolution into MIDI by combining two values]
Bidirectional control and visual feedback
Record automation directly from your synthesizer
MIDI Learn function for easy parameter and data size setup
Adjustable data rate and redundancy filters
Configurable MIDI Thru Filter
Easy draw and edit automation with multiple Data Sizes

User guide

Download from Maxforlive.com

https://www.facebook.com/gustavobravettilive/

Less Is More With Minimal Synth Setup

Less is more with this synth jam, with a minimal setup.

Exploring a journey from Bengali heritage to electronic invention

Can electronic music tell a story about who we are? Debashis Sinha talks about his LP for Establishment, The White Dog, and how everything from Toronto noodle bowls to Bengali field recordings got involved.

The Canadian artist has a unique knack for melding live percussion techniques and electro-acoustic sound with digital manipulation, and in The White Dog, he dives deep into his own Bengali heritage. Just don’t think of “world music.” What emerges is deeply his own, composed in a way that’s entirely electro-acoustic in approach, not a pastiche of someone else’s musical tradition glued onto some beats. And that’s what drew me to it – this is really the sound of the culture of Debashis, the individual.

And that seems connected to what electronic music production can be – where its relative ease and accessibility can allow us to focus on our own performance technique and a deeper sense of expression. So it’s a great chance not just to explore this album, but what that trip in this work might say to the rest of us.

CDM’s label side project Establishment put out the new release. I spoke to Debashis just after he finished a trip to Germany and a live performance of the album at our event in Berlin. He writes us from his home in Toronto.

First, the album:

I want to start with this journey you took across India. What was that experience like? How did you manage to gather research while in that process?

I’ve been to India many times to travel on my own since I turned 18 – usually I spend time with family in and near Kolkata, West Bengal and then travel around, backpacking style. Since the days of Walkman cassette recorders, I’ve always carried something with me to record sound. I didn’t have a real agenda in mind when I started doing it – it was the time of cassettes, really, so in my mind there wasn’t much I could do with these recordings – but it seemed like an important process to undertake. I never really knew what I was going to do with them. I had no knowledge of what sound art was, or radio art, or electroacoustic music. I switched on the recorder when I felt I had to – I just knew I had to collect these sounds, somehow, for me.

As the years went on and I understood the possibilities for using sound captured in the wild on both a conceptual and technical level, and with the advent of tools to use them easily, I found, to my surprise, that the act of recording (when in India, at least) didn’t really change. I still felt I was documenting something that was personal and vital to my identity or heart, and the urge to turn on the recorder still came from a very deep place. It could easily have been that I gathered field sound in response to or in order to complete some kind of musical idea, but every time I tried to turn on the recorder in order to gather “assets” for my music, I found myself resisting. So in the end I just let it be, safe in the knowledge that whatever I gathered had a function for me, and may (or may not) in future have a function for my music or sound work. It didn’t feel authentic to gather sound otherwise.

Even though this is your own heritage, I suppose it’s simultaneously something foreign. How did you relate to that, both before and after the trip?

My father moved to Winnipeg, in the center of Canada, almost 60 years ago, and at the time there were next to no Indians (i.e., people from India) there. I grew up knowing all the brown people in the city. It was a different time, and the community was so small, and from all over India and the subcontinent. Passing on art, stories, myth and music was important, but not so much language, and it was easy to feel overwhelmed – I think that passing on of culture operated very differently from family to family, with no overall cultural support at large to bolster that identity for us.

My mom – who used to dance with Uday Shankar’s troupe – would corral all the community children to choreograph “dance-dramas” based on Hindu myths. The first wave of Indian people in Winnipeg finally built the first Hindu temple in my childhood – until then we would congregate in people’s basement altars, or in apartment building common rooms.

There was definitely a relationship with India, but it was one that left me what I call “in/between” cultures. I had to find my own way to incorporate my cultural heritage with my life in Canada. For a long time, I had two parallel lives — which seemed to work fine, but when I started getting serious about music it became something I really had to wrestle with. On the one hand, there was this deep and rich musical heritage that I had tenuous connections to. On the other hand, I was also interested in the 2-Tone music of the UK, American hardcore, and experimental music. I took tabla lessons in my youth, as I was interested in and playing drums, but I knew enough to know I would never be a classical player, and had no interest in pursuing that path, understanding even then that my practice would be eclectic.

I did have a desire to contribute to my Indian heritage from where I sat – to express somehow that “in/between”-ness. And the various trips I undertook on my own to India since I was a young person were in part an effort to explore what form that expression might take, whether I knew it or not. The collections of field recordings (audio and later video) became a parcel of sound that somehow was a thread to my practice in Canada on the “world music” stage and later in the realms of sound art and composition.

One of the projects I do is a durational improvised concert called “The (X) Music Conference”, which is modeled after the all-night classical music concerts that take place across India. They start in the evening and the headliner usually goes on around 4am and plays for 3 or more hours. Listening to music for that long, and all night, does something to your brain. I wanted to give that experience to audience members, but I’m only one person, so my concert starts at midnight and goes to 7am. There is tea and other snacks, and people can sit or lie down. I wanted to actualize this idea of form (the classical music concert) suffused with my own content (sound improvisations) – it was a way to connect the music culture of India to my own practice. Using field recordings in my solo work is another, or re-presenting/-imagining Hindu myths another.

I think with the development of the various facets of my sound practice, I’ve found a way to incorporate this “form and content” approach, allowing the way that my cultural heritage functions in my psyche to express itself through the tools I use in various ways. It wasn’t an easy process to come to this balance, but along the way I played music with a lot of amazing people that encouraged me in my explorations.

In terms of integrating what you learned, what was the process of applying that material to your work? How did your work change from its usual idioms?

I went through a long process of compartmentalizing when I discovered (and consumer technology supported) producing electroacoustic work easily. When I was concentrating on playing live music with others on the stage, I spent a lot of time studying various drumming traditions under masters all over – Cairo, Athens, NYC, LA, Toronto – and that was really what kept me curious and driven, knowing I was only glimpsing something that was almost unknowable completely.

As the “world music” industry developed, though, I found the “story” of playing music based on these traditions less and less engaging, and the straight folk festival concert format more and more trivial – fun, but trivial – in some ways. I was driven to tell stories with sound in ways that were more satisfying to me, that ran deeper. These field recordings were a way in, and I made my first record with this in mind – Quell. I simply sat down and gathered my ideas and field recordings, and started to work. It was the first time I really sustained an artistic intention all the way through a major project on my own. As I gained facility with my tools, and as I became more educated on what was out there in the world of this kind of sound practice, I found myself seeking these kinds of sound contexts more and more.

However, what I also started to do was eschew my percussion experience. I’m not sure why, but it was a long time before I gave myself permission to introduce more musical and percussion elements into the sound art type of work I was producing. I think in retrospect I was making up rules that I thought applied, in an effort to navigate this new world of sound production – maybe that was what was happening. I think now I’m finding a balance between music, sound, and story that feels good to me. It took a while though.

I’m curious about how you constructed this. You’ve talked a bit about assembling materials over a longer span of time (which is interesting, too, as I know Robert is working the same way). As we come along on this journey of the album, what are we hearing; how did it come together? I know some of it is live… how did you then organize it?

This balance between the various facets of my sound practice is a delicate one, but it’s also driven by instinct, because really, instinct is all I have to depend on. Whereas before I would give myself very strict parameters about how or what I would produce for a given project, now I’m more comfortable drawing from many kinds of sound production practice.

Many of the pieces on “The White Dog” started as small ideas – procedural or mixing explorations. The “Harmonium” pieces were from a remix of the soundtrack to a video art piece I made at the Banff Centre in Canada, where I wanted to make that video piece a kind of club project. “entr’acte” is from a live concert I did with prepared guitar and laptop accompanying the works of Canadian visual artist Clive Holden. Tracks on other records were part of scores for contemporary dance choreographer Peggy Baker (who has been a huge influence on how I make music, speaking of being open). What brought all these pieces together was in a large part instinct, but also a kind of story that I felt was being told. This cross pollination of an implied dramatic thread is important to me.

And there’s some really beautiful range of percussion and the like. What are the sources for the record? How did you layer them?

I’ve quite a collection, and luckily I’ve built that collection through real relationships with the instruments, both technical and emotional/spiritual. They aren’t just cool sounds (although they’re that, too) — but each has a kind of voice that I’ve explored and understood in how I play it. In that regard, it’s pretty clear to me what instrument needs to be played or added as I build a track.

Something new happens when you add a live person playing a real thing inside an electronic environment. It’s something I feel is a deep part of my voice. It’s not the only way to hear a person inside a piece of music, but it’s the way I put myself in my works. I love metallic sounds, and sounds with a lot of sustain, or power. I’m intrigued by how percussion can be a texture as well as a rhythm, so that is something I explore. I’m a huge fan of French percussionist Le Quan Ninh, so the bass-drum-as-tabletop is a big part of my live setup and also my studio setup.

This programmatic element is part of what makes this so compelling to me as a full LP. How has your experience in the theater imprinted on your musical narratives?

My theater work encompasses a wide range of theater practice – from very experimental and small to quite large stages. Usually I do both the sound design and the music, meaning pretty much anything coming out of a speaker from sound effects to music.

My inspiration starts from many non-musical places. That’s mostly the text/story, but not always – anything could spark a cue, from the set design to the director’s ideas to even how an actor moves. Being open to these elements has made me a better composer, as I often end up reacting to something that someone says or does, and follow a path that ends up in music that I never would have made on my own. It has also made me understand better how to tell stories, or rather maybe how not to – the importance of inviting the audience into the construction of the story and the emotion of it in real time. Making the listener lean forward instead of lean back, if you get me.

This practice of collaborative storytelling of course has impact on my solo work (and vice versa) – it’s made me find a voice that is more rooted in story, in comparison to when I was spending all my time in bands. I think it’s made my work deeper and simpler in many ways — distilled it, maybe — so that the story becomes the main focus. Of course when I say “story” I mean not necessarily an explicit narrative, but something that draws the listener from end to end. This is really what drives the collecting and composition of a group of tracks for me (as well as the tracks themselves) and even my improvisations.

Oh, and on the narrative side – what’s going on with Buddha here, actually, as narrated by the ever Buddha-like Robert Lippok [composer/artist on Raster Media]?

I asked Robert Lippok to record some text for me many years ago, a kind of reimagining of the mind of Gautama Buddha under the bodhi tree in the days leading to his enlightenment. I had this idea that maybe what was going through his mind might not have been what we may imagine when we think of the myth itself. I’m not sure where this idea came from – although I’m sure that hearing many different versions of the same myths from various sources while growing up had its effect – but it was something I thought was interesting. I do this often with my works (see above link to Kailash) and again, it’s a way I feel I can contribute to the understanding of my own cultural heritage in a way that is rooted in both my ancestors’ history as well as my own.

And of course, when one thinks of what the Buddha might have sounded like, I defy you to find someone who sounds more perfect than Robert Lippok.

Techno is some kind of undercurrent for this label, maybe not in the strict definition of the genre… I wonder actually if you could talk a bit about pattern and structure. There are these rhythms throughout that are really hypnotic, that regularity seems really important. How do you go about thinking about those musical structures?

The rhythms I seem drawn to run the gamut of time signatures and tempos. Of course, this comes from my studies of various music traditions and repertoire (Arabic, Greek, Turkish, West Asian, south Indian…). As a hand percussionist for many years playing and studying music from various cultures, I found a lot of parallels and cross talk particularly in the rhythms of the material I encountered. I delighted in finding the groove in various tempos and time signatures. There is a certain lilt to any rhythm; if you put your mind and hands to it, the muscles will reveal this lilt. At the same time, the sound material of electronic music I find very satisfying and clear. I’m at best a middling recording engineer, so capturing audio is not my forte – working in the box I find way easier. As I developed skills in programming and sound design, I seemed to be drawn to trying to express the rhythms I’ve encountered in my life with new tools and sounds.

Regularity and grid is important in rhythm – even breaking the grid, or stretching it to its breaking point has a place. (You can hear this very well in south Indian music, among others.) This grid undercurrent is the basis of electronic music and the tools used to make it. The juxtaposition of the human element with various degrees of quantization of electronic sound is something I think I’ll never stop exploring. Even working strongly with a grid has a kind of energy and urgency to it if you’re playing acoustic instruments. There’s a lot to dive into, and I’m planning to work with that idea a lot more for the next release(s).

And where does Alvin Lucier fit in, amidst this Bengali context?

The real interest for me in creating art lies in actualizing ideas, and Lucier is perhaps one of the masters of this – taking an idea of sound and making it real and spellbinding. “Ng Ta (Lucier Mix)” was a piece I started to make with a number of noodle bowls I found in Toronto’s Chinatown – the white ones with blue fishes on them. The (over)tones and rhythms of the piece as it came together reminded me of a piece I’m really interested in performing, “Silver Streetcar for The Orchestra”, a piece for amplified triangle by Lucier. Essentially the musician plays an amplified triangle, muting and playing it in various places for the duration of the piece. It’s an incredible meditation, and to me Ng Ta on The White Dog is a meditation as well – it certainly came together in that way. And so the title.

I wrestle with the degree to which I invoke my cultural heritage in my work. Sometimes it’s very close to the surface, and the work is derived very directly from Hindu myth, say, or field recordings from Kolkata. Sometimes it simmers in other ways, and with varying strength. I struggle with allowing it to be expressed instinctually or more directly and with more intent. Ultimately, the music I make is from me, and all those ideas apply whether or not I think of them consciously.

One of the problems I have with the term “world music” is it’s a marketing term to allow the lumping together of basically “music not made by white people”, which is ludicrous (as well as other harsher words that could apply). To that end, the urge to classify my music as “Indian” in some way, while true, can also be a misnomer or an “out” for lazy listening. There are a billion people in India, I believe, and more on the subcontinent and abroad. Why wouldn’t a track like “entr’acte” be “Indian”? On the other hand, why would it? I’m also a product of the west. How can I manage those worlds and expectations and still be authentic? It’s something I work on and think about all the time – but not when I’m actually making music, thank goodness.

I’m curious about your live set, how you were working with the Novation controllers, and how you were looping, etc.

My live sets are always, always constructed differently – I’m horrible that way. I design new effects chains and different ways of using my outboard MIDI gear depending on the context. I might use contact mics on a kalimba and a prepared guitar for one show, and then a bunch of external percussion that I loop and chop live for another, and for another just my voice, and for yet another only field recordings from India. I’ve used Ableton Live to drive a lot of sound installations as well, using follow actions on clips (“any” comes in handy a lot), and I’ve even made some installations that do the same thing with live input (making sure I have a 5 second delay on that input has….been occasionally useful, shall we say).

The concert I put together for The White Dog project is one that I try and keep live as much as possible. It’s important to me to make sure there is room in the set for me to react to the room or the moment of performance – this is generally true for my live shows, but since I’m re-presenting songs that have a life on a record, finding a meaningful space for improv was trickier.

Essentially, I try and have as many physical knobs and faders as possible – either a Novation Launch Control XL or a Behringer BCR2000 [rotary controller], which is a fantastic piece of gear (I know – Behringer?!). I use a Launchpad Mini to launch clips and deal with grid-based effects, and I also have a little Launch Control mapped to the effects parameters and track views or effects I need to see and interact with quickly. Since I’m usually using both hands to play/mix, I always have a Logidy UMI3 to control live looping from a microphone. It’s a 3 button pedal which is luckily built like a tank, considering how many times I’ve dropped it. I program it in various ways depending on the project – for The White Dog concerts with MIDI learn in the Ableton looper to record/overdub, undo and clear button, but the Logidy software allows you to go a lot deeper. I have the option to feed up to 3 effects chains, which I sometimes switch on the fly with dummy clips.

The Max For Live community has been amazing and I often keep some kind of chopper on one of the effect chains, and use the User mode on the Launchpad Mini to punch in and out or alter the length of the loop or whatnot. Sometimes I keep controls for another looper on that grid.

Basically, if you want an overview – I’m triggering clips, and have a live mic that I use for percussion and voice for the looper. I try and keep the mixer in a 1:1 relationship with what’s being played/played back/routed to effects because I’m old school – I find it tricky to do much jumping around when I’m playing live instruments. It’s not the most complicated setup but it gets the job done, and I feel like I’ve struck a balance between electronics and live percussion, at least for this project.

What else are you listening to? Do you find that your musical diet is part of keeping you creative, or is it somehow partly separate?

I jump back and forth – sometimes I listen to tons of music with an ear to try and expand my mind, sometimes just to enjoy myself. Sometimes I stop listening to music just because I’m making a lot on my own. One thing I try to always take care of is my mind. I try to keep it open and curious, and try to always find new ideas to ponder. I am inspired by a lot of different things – paintings, visual art, music, sound art, books – and in general I’m really curious about how people make an idea manifest – science, art, economics, architecture, fashion, it doesn’t matter. Looking into or trying to derive that jump from the mind idea to the actual real life expression of it I find endlessly fascinating and inspiring, even when I’m not totally sure how it might have happened. It’s the guessing that fuels me.

That being said, at the moment I’m listening to lots of things that I feel are percolating some ideas in me for future projects, and most of it coming from digging around the amazing Bandcamp site. Frank Bretschneider turned me on to goat(jp), which is an incredible quartet from Japan with incredible rhythmic and textural muscle. I’ve rediscovered the fun of listening to lots of Stereolab, who always seem to release the same record but still make it sound fresh. Our pal Robert Lippok just released a new record and I am so down with it – he always makes music that straddles the emotional and the electronic, which is something I’m so interested in doing.

I continue to make my way through the catalog of French percussionist Le Quan Ninh, who is an absolute warrior in his solo percussion improvisations. Tanya Tagaq is an incredible singer from Canada – I’m sure many of the people reading this know of her – and her live band – drummer Jean Martin, violinist Jesse Zubot, and choirmaster Christine Duncan, an incredible improv vocalist in her own right – are unstoppable. We have a great free music scene in Toronto, and I love so many of the musicians who are active in it, many of them internationally known – Nick Fraser (drummer/composer), Lina Allemano (trumpet), Andrew Downing (cello/composer), Brodie West (sax) – not to mention folks like Sandro Perri and Ryan Driver. They’ve really lit a fire under me to be fierce and in the moment – listening to them is a recurring lesson in what it means to be really punk rock.

Buy and download the album now on Bandcamp.

https://debsinha.bandcamp.com/album/the-white-dog

Ableton Live 10 Video Tutorials

The videos cover setting up your audio interface, setting up MIDI, configuring inputs and outputs, the user interface, session and arrangement views, Wavetable, Drum Buss, recording, MIDI sequencing, Ableton Link, and more.

Escape look-alike Ableton Live colors with these free themes

You stare at its interface for hours on end. Why not give your eyes something different to look at? Now Ableton Live 10, too, gets access to custom colors.

Judging by looking over people’s shoulders, a lot of Live users simply don’t know that you can hack into Ableton’s custom theme files and modify things. And so we’re all caught in drab uniformity, with the same color theme – both unoriginal and uninspiring.

Fortunately, Berlin native and leading Ableton Live guru and educator Madeleine Bloom has come to our rescue. Madeleine has long made some pleasing variations for Live’s colors. Now she’s got two new sets (with more on the way) for Ableton Live 10. Live 10 can still read your old color modifications, but because of some minor changes to the interface, files made in its new XML-based format will work better. (Ableton also changed the name from “skins” to “themes,” for some reason.)

Free Ableton Live Themes Set #1

Free Ableton Live Themes Set #2 [I spot a naming pattern here]

To install a theme, follow this tutorial (it covers both Live 10 and Live 9 and earlier):

Ableton Live Tutorial: How to install new Skins

And if you think these colors aren’t quite right, Madeleine has also written a tutorial for creating your own themes or making modifications to these:

How to Create Your Own Ableton Live Themes & Free PDF Theming Guide

There’s even a link there to a graphical theme editor for Mac and Windows with previews, in case you don’t like editing XML files.

“But, Peter!” says you, “you’re just now a paid shill for Ableton, trying to force me to upgrade to Live 10 when I don’t need it!”

Why, you’ve just made me spit out some of this lifetime supply of Club-Mate soda that Ableton has delivered to my flat every day, you ungrateful readers! Of course, I can’t imagine why you wouldn’t upgrade to Live 10 — why, it’s The Future of Sound. Oh… wait, actually, that’s Native Instruments’ slogan. Sometimes I forget who I’m shilling for.

Anyway, if you are stuck on the clearly inferior and not-having-an-Echo-effect Live 9 or earlier, Madeleine is nice enough to have you covered, too, with a whole bunch of skins for those versions. There are dozens of those, including various contributions from readers:

https://sonicbloom.net/en/?s=ableton+live+skins&submit=Search

And there’s an accompanying guide to making your own skins, as well.

Now, enjoy. I have to go lie down, as I think all this Club-Mate sponsorship has made me feel a bit lightheaded.

You’ll find a ton of resources for Live at Sonic Bloom, the site Madeleine runs. It’s a complete hub for information, which is way better than trying to navigate random YouTube uploads:

https://sonicbloom.net/en/

Pretend you can play and produce drums with this free plug-in

Spitfire’s latest LABS plug-in release is out, with the theme “DRUMS.” Here’s how to get started with it – and why it may make you feel like you magically know how to actually play and properly record an acoustic drum kit.

Okay, apologies – I’m projecting a little. Some of you, I know, can do both those things. For me, those count as “not at all” and “yes, but only in theory, please hire an actual producer.”

But DRUMS packs an enormous amount of nuance into a deceptively simple, two-octave mapping. Ever had a chocolate sundae and thought, you know, what I really love most is the cherry and this bit of chocolate-covered peanuts? You get the feeling that that’s what’s in this pack.

Here’s a sample. This is literally just me mucking around on the keys. (I ran the sound through the Arturia TridA-Pre, from Arturia’s 3 Preamps You’ll Actually Use set, just to add some dynamics.)

Ready to get started? Here’s where to begin.

Get going with LABS – don’t fear the app!

If you missed our first story on LABS, we covered its launch, which came with a lovely soft piano and a chamber string ensemble recorded through a vintage mic:

LABS is a free series of sound tools for everyone, and you’ll want it now

Your first step is to head to the LABS site, and choose the free sound you want. If you created a login at Spitfire before, that will work for “DRUMS” – just click ‘get’ and log in. If you haven’t got a login yet, you can register with an email address and password.

https://www.spitfireaudio.com/labs/

I find two things scare people about free software, and I understand your frustration, so to allay those fears:

They’re not signing you up for a newsletter, unless you want one!

Some useful assistance, not annoying intrusion. The app is only there to aid in downloading. It doesn’t launch at startup or anything like that. Basically, it’s there because it’s better than your Web browser – it will actually put the files in the right place and let you choose where those hundreds of megs go, and it will finish a download if interrupted. (That’s especially useful on a slow connection.)

Specifically on Windows, you can make sure it finds your correct VST folder so you don’t load up your DAW and wonder where the heck it went.

Grabbing the app helps make sure you complete the download, and that it goes into the right place. The app downloads and installs the content in one step. It doesn’t load on startup or do anything else weird.

Another key feature of the Spitfire app – you can select where the sample content goes, so you can use an external drive if you’re short on space on your internal drive.

Give it a play!

Once LABS is installed, you have your drum kit, which Spitfire says is the creation of drummer Oliver Waton and engineer Stanley Gabriel.

That minimal interface shouldn’t worry you – have a fiddle with the controls and dial in whatever variation you like. Most of the nuance to the LABS kits is really in actually playing them, so the best idea here is to connect your favorite velocity-sensitive instrument and play, whether that’s a drum pad controller or keyboard or whatever else you have handy.

In my case, I wirelessly paired a ROLI Seaboard Block. It’s conveniently also two octaves, so you just need to set the octave range to match the DRUMS.

As opposed to sprawling sample libraries, LABS are simple and compact, so don’t worry – just go ahead and play.

Beginning some ideas with a familiar sound can also be the basis of doing something a bit radical, because a well-recorded acoustic source will give you a rich sonic range – and dares you to make it sound like something else. So, using another free add-on we’ve covered lately, I loaded up the Creative Extensions Pack from Ableton, which works in Live 10 Suite or any copy of Live 10 with a Max for Live license.

To bend this into experimental/IDM territory, I stacked on various effects, including reversing and gating the sound and adding spectral ambience … generally mucking about. The idea was to keep the character of the drum source, but make it sound like spacetime had gone a bit amiss.

Pairing conventional sounds with out-there effects is one way to go. Ableton Live 10 users can grab another freebie (for Suite or Max for Live). Choose Creative Extensions from the browser and download.

And here is a not terribly-well-thought-out effects chain using those Creative Extensions. Could your cat do better? Possibly. I like cats, though. Give those felines some production opportunities, too.

This time I finished off the sound using Native Instruments’ VC 76 compressor and Enhanced EQ.

But I was just having a bit of fun. So I’d love to hear what you come up with using these sorts of sounds. One of the common complaints about production today is that everyone has easy access to sounds and very often the same tools. But let’s use that – let’s see what you all come up with.

If you’re interested in learning more about how to better record drums, I’m happy to ask Spitfire about how they recorded this set, too. Playing with it actually does make me want to grab some mics and a kit, too.

Feel free to post thoughts, questions, and sound links in comments.

Arturia’s KeyLab MKII: a more metal, more connected keyboard controller

Oh, look, a new MIDI controller keyboard – that ranks right up there with “wow, a new moderately priced mid-sized sedan.” But… Arturia may have a hit on their hands with the MKII KeyLab. Here’s why.

While everyone else guns for the elusive entry level “everyone,” Arturia has won over specific bands of enthusiasts. The BeatStep Pro is a prime example: by connecting to both MIDI and control voltage, these compact pad-sequencer units have become utterly ubiquitous in modular rigs. They’re the devices that prevent modular performances from turning into aimless noodling. (Well, or at least they give your aimless noodling a set of predictable patterns and rhythm.)

Now, is the modular market big enough to sell the majority of BeatSteps Pro? Probably not. But the agnostic design approach here makes this a multitasker tool in every kitchen, and so word of mouth spreads.

So, keyboards. Native Instruments, love them or hate them, have had a pretty big hit with the Komplete Kontrol line, partly because they do less. They’re elegant looking, they’re not overcrowded, and their encoders let you access not only NI’s software, but lots of other plug-ins via the NKS format.

But the KeyLab MKII looks like it could fit a different niche, by connecting easily to hardware and DAWs.

Backlit pads. 4×4 pads (with velocity and continuous pressure – good), which can also be assigned to chords in case finger drumming isn’t what you had in mind.

DAW control. A lot of people record/edit while playing in parts on the keyboard. So here’s your DAW control layout with some handy shortcut buttons.

Faders/mixing. You get 9 faders with 9 rotaries – so that can be 8 channels plus a master fader. There are assignable buttons underneath those.

Pitch and mod wheels. Dear Arturia: thank you for not being innovative here, as wheels are what many people prefer.

And a big navigator. This bit lets you pull up existing presets.

Okay, none of that is all that exciting – we’ve literally seen exactly this set of features before. But Arturia have pulled it together in some nice ways, like adding a dedicated switch to move into chord mode, letting you change MIDI channel with a button on the front panel (hello, hardware owners), and even thoughtfully including not only those shortcut keys for DAWs, but a magnetic overlay to access them.

Still, keyboards from Nektar and M-Audio, to name just two, cover similar ground. So where Arturia set themselves apart is connectivity.

Class-compliant USB MIDI operation. Needing no drivers means you can pair this with anything, including iOS and Android and Linux (Raspberry Pi included).

Control Voltage. 4 CV/Gate outputs, controlling pitch, gate, and modulation. Yes, four. Also one CV input.

MIDI in and out.

Pedals. Expression, sustain, and 3 assignable auxiliary pedal inputs.

Software integration. This is obviously a winner if you’re into Arturia’s Analog Collection library, which has gone from varied and pretty okay to really, really great as it’s matured. And since there are so many instruments, having this hardware to navigate them is a godsend. There’s also the obligatory software bundle to sweeten the pot, but I suspect the real draw here is out-of-box compatibility with the DAW of your choice – including Pro Tools, Logic Pro X, FL Studio, Bitwig, Cubase, Ableton Live, Digital Performer, and Studio One.

Made of metal. Okay, not the keys. (That’d be awesome, if… wrong.) But the chassis is aluminum, and the wheels are even metal.

There’s a pretty nice piano and a bunch of analog presets built in here, making this a good deal.
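
About that class-compliant point in the rundown above: in practice, it means you can plug the keyboard into, say, a Raspberry Pi and start reading it with no driver install at all. Something like this sketch, using the mido library, should be all it takes – port names differ per system, and it’s an assumption that the KeyLab shows up first in the list.

```python
# Quick sketch with mido on Linux / Raspberry Pi - no driver install needed for
# a class-compliant controller. Port names will differ on your system.

import mido

print(mido.get_input_names())                 # the controller should simply appear here

name = mido.get_input_names()[0]              # assumed: the KeyLab is the first port listed
with mido.open_input(name) as port:
    for msg in port:                          # prints every note, CC, and pad hit you play
        print(msg)
```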

I think if your workflow isn’t tied to Native Instruments software and plug-ins, the connectivity and standalone operation here could make the Arturia the one to beat. The thing to check, obviously, is hardware and build quality, though note that Arturia say the keybed at least is what’s found on the Brute line.

There are 49- and 61-key variations, and they come in either black or white, so you can, you know, coordinate with your studio and tastes.

Video, of course:

Arturia KeyLab MKII
