You can make music with test equipment – Hainbach explains

Before modulars became a product, some of the first electronic synthesis experiments made use of test equipment – gear intended to make sound, but not necessarily musically. And now that approach is making a comeback.

Hainbach, the Berlin-based experimental artist, has been helping this time-tested approach to sound reach new audiences.

I actually have never seen a complete, satisfying explanation of the relationship of abstract synthesis, as developed by engineers and composers, to test gear. Maybe it’s not even possible to separate the two. But suffice to say, early in the development of synthesis, you could pick up a piece of gear intended for calibration and testing of telecommunications and audio systems, and use it to make noise.

Why the heck would you do that now, given the availability of so many options for synthesis? Well, for one – until folks like Hainbach and me send a bunch of people to the used market – a lot of this gear is simply being scrapped. Since it’s heavy and bulky, it ranges from cheap to “if you get this out of my garage, you can have it” pricing. The sound quality of much of it is also exceptional: sold to big industry back when cutting prices on this sort of equipment wasn’t essential, a lot of it feels and sounds great. And just like any other sound design or composition exercise that begins with finding something unexpected, the strange wonderfulness of these devices can inspire.

I got a chance to spend a few days playing with the Waveform Research Centre at Rotterdam’s WORM, a strange and wild collection of these orphaned devices lovingly curated by Dennis Verschoor. And I got sounds unlike anything I was used to. It wasn’t just the devices and their lovely dials that made that possible – it was also the unique approach required when the usual envelope generators and such aren’t available. Human creativity does tend to respond well to obstacles.

Whether or not you go that route, it is worth delving into the history and possibilities – and Hainbach’s video is a great start. It might at the very least change how you approach your next Reaktor patch, SuperCollider code, synth preset, or Eurorack rig.

Previously:

Immerse yourself in Rotterdam’s sonic voltages, in the WORM laboratory


An injury left Olafur Arnalds unable to play, so he turned to machines

Following nerve damage, the Icelandic composer/producer/musician was unable to play the piano. With his ‘Ghost Pianos’, he gets that ability back, through intelligent custom software and mechanical pianos.

It’s moving to hear him tell the story (to the CNN viral video series) – with, naturally, the obligatory shots of Icelandic mountains and close-up images of mechanical pianos working. No complaints:

This frames accessibility in terms any of us can understand. Our bodies are fragile, and indeed piano history is replete with musicians who lost the full use of their hands and had to adapt. Here, an injury caused him to lose left-hand dexterity, so he needed a way to connect one hand to more parts.

And in the end, as so often is the case with accessibility stories and music technology, he created something that was more than what he had before.

With all the focus on machine learning, a lot of generative algorithmic music continues to work more traditionally. That appears to be the case here – the software analyzes incoming streams and follows rules and music theory to accompany the work. (As I learn more about machine learning, though, I suspect the combination of these newer techniques with the older ones may slowly yield even sharper algorithms – and challenge us to hone our own compositional focus and thinking.)

I’ll try to reach out to the developers, but meanwhile it’s fun squinting at the screenshots, since you can tell a lot. There’s a polyphonic step sequencer / pattern sequencer of sorts in there, with some variable chance. You can also tell that the pattern lengths are set to be irregular, so that you get these lovely polymetric echoes of what Olafur is playing.
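
To make that idea concrete, here’s a minimal Python sketch of the general technique – several loops with irregular, co-prime lengths plus per-step chance, echoing whatever the player feeds in. It’s purely illustrative: the class, the numbers, and the fake “played” input are my own assumptions, not anything from the actual Ghost Pianos software.

```python
# A minimal sketch (not the actual Ghost Pianos software) of the idea the
# screenshots suggest: several step patterns with mismatched lengths and
# per-step chance, echoing notes the player feeds in.
import random

class GhostPattern:
    """One looping pattern; an odd length makes it drift against the others."""
    def __init__(self, length, chance, transpose=0):
        self.length = length          # steps before the pattern wraps
        self.chance = chance          # probability a step actually sounds
        self.transpose = transpose    # semitone offset applied to echoed notes
        self.steps = [None] * length  # captured notes, one slot per step
        self.pos = 0

    def capture(self, note):
        """Drop an incoming (played) note into the current step slot."""
        self.steps[self.pos] = note

    def tick(self):
        """Advance one step; maybe return a note to echo back."""
        note = self.steps[self.pos]
        self.pos = (self.pos + 1) % self.length
        if note is not None and random.random() < self.chance:
            return note + self.transpose
        return None

# Three patterns with co-prime lengths (5, 7, 9 steps), so their echoes phase
# against each other instead of lining up every bar.
ghosts = [GhostPattern(5, 0.9), GhostPattern(7, 0.6, 12), GhostPattern(9, 0.4, -12)]

played = [60, None, 64, None, 67, None, None, 72]  # stand-in for live input
for step in range(32):
    incoming = played[step % len(played)]
    for g in ghosts:
        if incoming is not None:
            g.capture(incoming)
        echo = g.tick()
        if echo is not None:
            print(f"step {step:2d}: ghost echoes MIDI note {echo}")
```

Run it and the same few notes come back at shifting offsets and octaves – a crude, text-only version of that “ghost” effect.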

Of course, what makes this most interesting is that Olafur responds to that machine – human echoes of the ‘ghost.’ I’m struck by how even a simple input can do this for you – like even a basic delay and feedback. We humans are extraordinarily sensitive to context and feedback.

The music itself is quite simple – familiar minimalist elements. If that isn’t your thing, you should definitely keep watching so you get to his trash punk stage. But it won’t surprise you at all that this is a guy who plays Clapping Music backstage – there’s some serious Reich influence.

You can hear the ‘ghost’ elements in the recent release ‘ekki hugsa’, which comes with some lovely, joyful dancing in the music video:

re:member debuted the software:

There is a history here of adapting composition to injury. (That’s not even including Robert Schumann, who evidently destroyed his right hand in an attempt to increase its dexterity.)

Paul Wittgenstein had his entire right arm amputated following a World War I injury, and commissioned a number of works for just the left hand. (There’s a surprisingly extensive article on Wikipedia, which definitely retrieves more than I had lying around inside my brain.) Ravel’s Piano Concerto for the Left Hand is probably the best-known result, and there’s even a 1937 recording by Wittgenstein himself. It’s an ominous, brooding performance, made as Europe was plunging itself into violence a second time. But it’s notable in that the single-hand writing is made even more virtuosic – it’s a new kind of piano idiom, created for this unique scenario.

I love Arnalds’ work, but listening to the Ravel – a composer known as whimsical, even crowd-pleasing – I do lament a bit of what’s been lost in the push for cheery, comfortable concert music. It seems to me that some of that darkness and edge could come back to the music, and the circumstances of that piece’s composition ought to remind us how necessary those emotions are to our society.

I don’t say that to diss Mr. Arnalds. On the contrary, I would love to hear some of his punk side return. And his quite beautiful music aside, I also hope that these ideas about harnessing machines in concert music may find new, punk, even discomforting conceptions among some readers here.

Here’s a more intimate performance, including a day without Internet:

And lastly, more detail on the software:

Meanwhile, whatever kind of music you make, you should endeavor to have a promo site that is complete, like this – also, sheet music!

olafurarnalds.com

Previously:

The KellyCaster reveals what accessibility means for instruments


In Adversarial Feelings, Lorem explores AI’s emotional undercurrents

In glitching collisions of faces, percussive bolts of lightning, Lorem has ripped open machine learning’s generative powers in a new audiovisual work. Here’s the artist on what he’s doing, as he’s about to join a new inquisitive club series in Berlin.

Machine learning that derives gestures from System Exclusive MIDI data … surprising spectacles of unnatural adversarial neural nets … Lorem’s latest AV work has it all.

And by pairing producer Francesco D’Abbraccio with a team of creators across media, it brings together a serious think tank of artist-engineers pushing machine learning and neural nets to new places. The project, as he describes it:

Lorem is a music-driven multidisciplinary project working with neural networks and AI systems to produce sounds, visuals and texts. In the last three years I had the opportunity to collaborate with AI artists (Mario Klingemann, Yuma Kishi), AI researchers (Damien Henry, Nicola Cattabiani), video artists (Karol Sudolski, Mirek Hardiker) and music instrument designers (Luca Pagan, Paolo Ferrari) to produce original materials.

Adversarial Feelings is the first release by Lorem, and it’s a 22 min AV piece + 9 music tracks and a book. The record will be released on APR 19th on Krisis via Cargo Music.

And what about achieving intimacy with nets? He explains:

Neural Networks are nowadays widely used to detect, classify and reconstruct emotions, mainly in order to map users’ behaviours and to affect them in effective ways. But what happens when we use Machine Learning to perform human feelings? And what if we use it to produce autonomous behaviours, rather than to affect consumers? Adversarial Feelings is an attempt to inform non-human intelligence with “emotional data sets”, in order to build an “algorithmic intimacy” through those intelligent devices. The goal is to observe the subjective/affective dimension of intimacy from the outside, to speak about human emotions as perceived by non-human eyes. Transposing them into a new shape helps Lorem to embrace a new perspective, and to recognise fractured experiences.

I spoke with Francesco as he made the plane trip toward Berlin. Friday night, he joins a new series called KEYS, which injects new inquiry into the club space – AV performance, talks, all mixed up with nightlife. It’s the sort of thing you get in festivals, but in festivals all those ideas have been packaged and finished. KEYS, at a new post-industrial space called Trauma Bar near Hauptbahnhof, is a laboratory. And, of course, I like laboratories. So I was pleased to hear what mad science was generating all of this – the team of humans and machines alike.

So I understand the ‘AI’ theme – am I correct in understanding that the focus to derive this emotional meaning was on text? Did it figure into the work in any other ways, too?

Neural networks and AI were involved in almost every step of the project. On the musical side, they were used mainly to generate MIDI patterns, to deal with SysEx from a digital sampler, and to manage recursive re-sampling and intelligent timestretch. Rather than generating the final audio, the goal here was to simulate the musician’s behaviors and creative processes.

On the video side, [neural networks] (especially GANs [generative adversarial networks]) were employed both to generate images and to explore the latent spaces through custom-tailored algorithms, in order to let the system edit the video autonomously, according to the audio source.
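
For readers curious how that sort of audio-driven latent-space exploration works in general, here’s a hedged Python/NumPy sketch – not Lorem’s ELERP code, just the common technique of spherically interpolating between random latent vectors, with the step size scaled by each audio frame’s loudness. The latent size, the fake loudness curve, and the function names are all my own assumptions.

```python
# A hedged sketch of the general technique (not Lorem's ELERP code): walk a
# GAN's latent space by interpolating between random latent vectors, with the
# step size driven by the loudness of each audio frame.
import numpy as np

def slerp(a, b, t):
    """Spherical interpolation between two latent vectors."""
    a_n, b_n = a / np.linalg.norm(a), b / np.linalg.norm(b)
    omega = np.arccos(np.clip(np.dot(a_n, b_n), -1.0, 1.0))
    if omega < 1e-6:
        return a
    return (np.sin((1 - t) * omega) * a + np.sin(t * omega) * b) / np.sin(omega)

rng = np.random.default_rng(0)
dim = 512                               # latent size of a typical GAN
current, target = rng.normal(size=dim), rng.normal(size=dim)
t = 0.0

# Stand-in for per-frame RMS loudness of the soundtrack (0..1).
audio_rms = np.abs(np.sin(np.linspace(0, 8 * np.pi, 200)))

frames = []
for rms in audio_rms:
    t += 0.005 + 0.05 * rms             # louder audio -> faster latent motion
    if t >= 1.0:                        # reached the target: pick a new one
        current, target, t = target, rng.normal(size=dim), 0.0
    z = slerp(current, target, t)
    frames.append(z)                    # in practice: feed z to the generator

print(len(frames), "latent vectors ready for the generator")
```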

What data were you training on for the musical patterns?

MIDI – basically I trained the NN on patterns I create.

And wait, SysEx, what? What were you doing with that?

Basically I record every change of state of a sampler (i.e. the automations on a knob), and I ask the machine to “play” the same patch of the sampler according to what it learned from my behavior.
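
[Ed.: to illustrate the general shape of that idea – and emphatically not Lorem’s actual system – here’s a tiny Python sketch that “records” a knob gesture as a list of controller values, learns which moves tend to follow which, and then lets the model play a new gesture in a similar style. The recorded values and the first-order Markov model are stand-ins for whatever network he actually uses.]

```python
# A minimal sketch of the general idea (not Lorem's actual system): record a
# knob gesture as a sequence of controller values, learn the step-to-step
# transitions, then let the model "play" the knob in a similar style.
import random
from collections import defaultdict

# Stand-in for a recorded automation lane: successive MIDI CC values (0-127).
recorded = [64, 66, 69, 73, 78, 84, 90, 95, 99, 102, 104, 105,
            104, 101, 96, 90, 83, 76, 70, 65, 62, 60, 60, 61]

# Learn which value-changes tend to follow which (a first-order Markov model).
transitions = defaultdict(list)
for prev, cur in zip(recorded, recorded[1:]):
    transitions[prev].append(cur - prev)

def machine_plays(start=64, steps=32):
    """Generate a new gesture that moves the way the recorded one did."""
    value, out = start, []
    for _ in range(steps):
        # Fall back to holding still if we land on an unseen value.
        deltas = transitions.get(value) or [0]
        value = max(0, min(127, value + random.choice(deltas)))
        out.append(value)
    return out

print(machine_plays())  # send these as CC messages back to the sampler
```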

What led you to getting involved in this area? And was there some education involved just given the technical complexity of machine learning, for instance?

I always tried to express my work through multidisciplinary projects. I am very fascinated by the way AI approaches data, allowing us to work across different media with the same perspective. Intelligent devices are really a great tool to melt languages together. On the other hand, the emergence of AI discloses political questions we have been trying to face for some years at Krisis Publishing.
I started working through the Lorem project three years ago, and I was really a newbie on the technical side. I am not a hyper-skilled programmer, and building a collaborative platform has been really important to Lorem’s development. I had the chance to collaborate with AI artists (Klingemann, Kishi), researchers (Henry, Cattabiani, Ferrari), digital artists (Sudolski, Hardiker)…

How did the collaborations work – Mario I’ve known for a while; how did you work with such a diverse team; who did what? What kind of feedback did you get from them?

To be honest, I was very surprised by how open and responsive the AI community is! Some of the people involved are really huge points of reference for me (like Mario, for instance), and I didn’t expect to really get them on Adversarial Feelings. Some of the people involved prepared original content for the release (Mario, for instance, realised a video on “The Sky would Clear What the …”, Yuma Kishi realized the girl/flower on “Sonnet#002” and Damien Henry did the train hallucination on the “Shonx – Canton” remix). With other people involved, the collaboration was more based on producing something together, such as a video, a piece of code or a way to explore latent spaces.

What was the role of instrument builders – what are we hearing in the sound, then?

Some of the artists and researchers involved realized videos from the audio tracks (Mario Klingemann, Yuma Kishi). Damien Henry gave me the right to use a video he made with his Next Frame Prediction model. Karol Sudolski and Nicola Cattabiani worked with me in developing, respectively, “Are Eyes invisible Socket Contenders” + “Natural Readers” and “3402 Selves”. Karol Sudolski also realized the video part on “Trying to Speak”. Nicola Cattabiani developed the ELERP algorithm with me (to let the network edit videos according to the music) and GRUMIDI (the network working with my MIDI files). Mirek Hardiker built the data set for the third chapter of the book.

I wonder what it means for you to make this an immersive performance. What’s the experience you want for that audience; how does that fit into your theme?

I would say Adversarial Feelings is an AV show totally based on emotions. I always try to prepare the most intense, emotional and direct experience I can.

You talk about the emotional content here and its role in the machine learning. How are you relating emotionally to that content; what’s your feeling as you’re performing this? And did the algorithmic material produce a different emotional investment or connection for you?

It’s a bit like when I was a kid, listening to my recorded voice… it was always strange: I wasn’t fully able to recognize my voice as it sounded from the outside. I think neural networks can be an interesting tool to observe our own subjectivity from external, non-human eyes.

The AI hook is of course really visible at the moment. How do you relate to other artists who have done high-profile material in this area recently (Herndon/Dryhurst, Actress, etc.)? And do you feel there’s a growing scene here – is this a medium that has a chance to flourish, or will the electronic arts world just move on to the next buzzword in a year before people get the chance to flesh out more ideas?

I’ve messaged Holly Herndon a couple of times online… I’ve been really into her work since her early releases, and when I heard she was working with AI systems, I was trying to finish the Adversarial Feelings videos… so I was so curious to discover her way of dealing with intelligent systems! She’s a really talented artist, and I love the way she’s able to embed conceptual/political frameworks inside her music. Proto is a really complex, inspiring device.

More generally, I think the advent of a new technology always discloses new possibilities in artistic practices. I directly experienced the impact of the internet (and of digital culture) on art, design and music when I was a kid. I’m thrilled by the fact that, at this point, new configurations are not yet codified into established languages, and I feel working on AI today gives me the possibility to be part of a public debate about how to set new standards for the discipline.

What can we expect to see / hear today in Berlin? Is it meaningful to get to do this in this context in KEYS / Trauma Bar?

I am curious too, to be honest. I am very excited to take part in such a situation, alongside artists and researchers I really respect and enjoy. I think the guys at KEYS are trying to do something beautiful and challenging.

Live in Berlin, 7 June

Lorem will join Lexachast (an ongoing collaborative work by Amnesia Scanner, Bill Kouligas and Harm van den Dorpel), N1L (an A/V artist, producer/dj based between Riga, Berlin, and Cairo), and a series of other tantalizing performances and lectures at Trauma Bar.

KEYS: Artificial Intelligence | Lexachast • Lorem • N1L & more [Facebook event]

Lorem project lives here:

http://www.studio-frames.com


Someone built a strangely accurate Berghain in Minecraft

From Garderobe to dark rooms to toilets to dance floors, jbkrauss has lovingly built a Minecraft recreation of Be– uh, I really don’t want this to be taken down. Of some Berlin club. Looks like Tresor, probably.

Anyway, this strangely Tresor-ish Berlin club sure does, let’s say, lend itself to the cubic block architecture of Minecraft. (Always said that place was really the Borg cube, on so many levels.) Watch:

ceiling is quite high

No doubt it is.

No Halle, but you do get an Eisbar. Erm, sorry – this is definitely not that club. Some club that has something up some stairs. Maybe it’s fourbar at Tresor. Yes.

I have no doubt that when we’re all stuck in an old age home, we will be visiting techno festivals and clubs inside some sort of virtual reality, whether it’s this in Berlin, or a VR Movement Festival, or MUTEK from our retirement home. Here’s our future. So we better start mining materials.

Source: posted by the creator to the techno subreddit today.


Playdate is an indie game handheld with a crank from Teenage Engineering, Panic

Playdate is a Game Boy-ish gaming handheld with a hand crank on it, wired for delivering indie and experimental games weekly. And it comes from an unlikely collaboration: Mac/iOS developer Panic with synth maker Teenage Engineering.

Yes, that svelte retro industrial look and unmistakable hand crank are the influence of the prolific Swedish design house Teenage Engineering. And TE have already demonstrated their love of cranks on their synths, the OP-1 and OP-Z.

This isn’t a Teenage Engineering product, though – and here’s the even more surprising part. The handheld hardware comes from Panic, the long-time Mac and iOS developer. I’ve been a Panic customer over the years, having used their FTP and web dev products early on in CDM’s life, as did a couple of my designers – and I even messed around with Mac icons obsessively back in the day.

But now Panic are doing games – the spooky Wyoming mystery Firewatch, which has earned them some real street cred, and an upcoming thing with a goose.

The really interesting twist here is that the “Playdate” title is a reference to games that appear weekly. And this is where I might imagine this whole thing dovetailing with music. I mean, first, music and indie games naturally go hand in hand, and from the very start of CDM, the game community have been into strange music stuff.

The obvious crossover at some point would be some unusual music games and without question some kind of music creation tool – like nanoloop or LittleGPTracker. nanoloop got its own handheld iteration recently – see below – but this would be a natural hardware platform too.

Even barring that, though, I imagine some dovetailing audiences for this. And it does look cute.

Specs:
400×240 (that’s way more resolution than the original Game Boy), black and white screen
No backlight (okay, so kind of a pain for handheld chip music performance)
Built-in speaker (a little one)
D-pad, A and B switches
USB-C connector
… and it looks like there is a headphone jack

Not sure what the buttons on top and next to the display do – power and lock, maybe?

The game designers involved are tantalizing, too – and have some interesting music connections:

Keita Takahashi (Katamari Damacy)

Zach Gage (SpellTower, Ridiculous Fishing)

Bennett Foddy (QWOP, Getting Over It with Bennett Foddy – and, another music connection, he was the bassist in Cut Copy, remember them?)

Shaun Inman (also a game composer, as well as a designer of Retro Game Crunch, The Last Rocket, Flip’s Escape, etc.)

This takes me back to that one time I hosted a one-button game exhibition at GDC (the Game Developers Conference) with Kokoromi, the Montreal game collective. That has accessibility implications, too, including for music. (Flashback to their game showcase at the same time.) So there is crossover here, I mean – and intersecting interests between composers and game designers, too.

US$149 will buy you the console and a 12-game subscription. Coming early 2020.

Music connections or no, it looks like a toy we’ll want to have.

https://play.date/

EDGE, the print mag, has an exclusive – with an excerpt of that feature online:

https://play.date/edge/

Thanks to Oliver Chesler for the tip.

Obvious marketing campaign, though only for Panic wanting to market to Americans of my age or so…


Gorgeous electro-acoustic instruments mix sculpture and noise

Forget analog pedals or digital boxes – 10cars have made a series of electro-acoustic inventions covered in wires and springs. And they sound wild and strange.

“10cars” is a Berlin-based multimedia artist. He presented these works at the mighty trade show Superbooth, but these pieces are something else – part sculpture, part experimental noise instrument. And they’re among the more compelling inventions to appear this month.

The lovingly handcrafted pieces meld collage with wires and springs and metal grates, as if someone were making a mouse trap and got distracted and crossed it with a kalimba and a spring reverb. These pieces are dubbed “autumn soundboxes” and range in price from 120 to 360 euros – yes, you can have your own.

10cars is part of the Liquid Sky collective (which now spans Berlin and other bits of Europe, ringleader Ingmar Koch having fled to Portugal). Liquid Sky have made some sound demos to give you a sense of what these are about.

Really lovely stuff.

You get plinks and plonks, otherworldly hums like lost Communist-era student sci-fi film soundtracks, or possibly what college radio sounds like on the planet Venus. There’s humming and creepy metallic bits and spacey madness. Well, listen:

Unrelated to anything, but I love that SoundCloud suggested this track when I was playing the sound demos.

More information (for real), plus an email address through which you can order:


lovely experimental noisemachines: 10cars “autumn soundboxes” – available 3rd week may 2019
[liquid sky]


KORG’s nutekt NTS-1 is a fun, little kit – and open to ‘logue developers

KORG has already shown that opening up oscillators and effects to developers can expand their minilogue and prologue keyboards. But now they’re doing the same for the nutekt NTS-1 – a cute little volca-ish kit for synths and effects. Build it, make wild sounds, and … run future stuff on it, too.

Okay, first – even before you get to any of that, the NTS-1 is stupidly cool. It’s a little DIY kit you can snap together without any soldering. And it’s got a fun analog/digital architecture with oscillators, filter, envelope, arpeggiator, and effects.

Basically, if you imagine having a palm-sized, battery-powered synthesis studio, this is that.

Japan has already had access to the Nutekt brand from KORG, a DIY kit line. (Yeah, the rest of the world gets to be jealous of Japan again.) This is the first – and hopefully not the last – time KORG has opened up that brand name to the international scene.

And the NTS-1 is one we’re all going to want to get our hands on, I’ll bet. It’s full of features:

– 4 fixed oscillators (saw, triangle and square, loosely modeled on their analog counterparts in the minilogue/prologue, plus VPM, a simplified version of the multi-engine VPM oscillator)
– Multimode analog-modeled filter with 2/4-pole modes (LP, BP, HP)
– Analog-modeled amp. EG with ADSR (fixed DS), AHR, AR and looping AR
– Modulation, delay and reverb effects on par with the minilogue xd/prologue (a subset of them)
– Arpeggiator with various modes: up, down, up-down, down-up, converge, diverge, conv-div, div-conv, random, stochastic (volca modular style). Chord selection: octaves, major triad, suspended triad, augmented triad, minor triad, diminished triad (since the sensor only allows one note at a time). Pattern length: 1-24. (See the sketch after this list for how some of these orderings can be derived.)
– Also: pitch/shape LFO, cutoff sweeps, tremolo
– MIDI IN via 2.5mm adapter, USB-MIDI, SYNC in/out
– Audio input with multiple routing options and trim
– Internal speaker and headphone out
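
As promised above, here’s a rough Python sketch of how a few of those arpeggiator orderings can be derived from a single set of chord notes. It’s my own illustration, not KORG’s firmware; the mode names follow the list above, and the triad is just an example.

```python
# A rough sketch (not KORG's firmware) of how a few of those arpeggiator
# orderings can be derived from one set of chord notes.
def arp_order(notes, mode):
    n = sorted(notes)
    if mode == "up":
        return n
    if mode == "down":
        return n[::-1]
    if mode == "up-down":
        return n + n[-2:0:-1]            # rise, then fall without repeating ends
    if mode == "converge":               # outside-in: low, high, next low, ...
        out, lo, hi = [], 0, len(n) - 1
        while lo <= hi:
            out.append(n[lo]); lo += 1
            if lo <= hi:
                out.append(n[hi]); hi -= 1
        return out
    if mode == "diverge":                # inside-out: reverse of converge
        return arp_order(n, "converge")[::-1]
    raise ValueError(f"unknown mode: {mode}")

major_triad = [60, 64, 67, 72]           # C, E, G plus the octave
for mode in ("up", "down", "up-down", "converge", "diverge"):
    print(f"{mode:10s} {arp_order(major_triad, mode)}")
```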

That would be fun enough, and we could stop here. But the NTS-1 is also built on the same developer board as the KORG minilogue and prologue keyboards. That SDK gives developers the power to make their own oscillators, effects, and other ideas for KORG hardware. And it’s a big deal that the cute little NTS-1 is now part of that picture, not just the (very nice) larger keyboards. I’d see it this way:

NTS-1 buyers can get access to the same custom effects and synths as if they bought the minilogue or prologue.

minilogue and prologue owners get another toy they can use – all three of them supporting new stuff.

Developers can use this inexpensive kit to start developing, and don’t have to buy a prologue or minilogue. (Hey, we’ve got to earn some cash first so we can go buy the other keyboard! Oh yeah, I guess I also have rent and food and things to think about, too.)

And maybe most of all –

Developers have an even bigger market for the stuff they create.

This is still a prototype, so we’ll have to wait – and there are no definite details on pricing and availability yet.

Waiting.

Yep, still waiting.

Wow, I really want this thing, actually. Hope this wait isn’t long.

I’m in touch with KORG and the analog team’s extraordinary Etienne about the project, so stay tuned. For an understanding of the dev board itself (back when it was much less fun – just a board and no case or fun features):

KORG are about to unveil their DIY Prologue boards for synth hacking

Videos:

Sounds and stuff –

Interviews and demos –

And if you wondered what the Japanese kits are like – here you go:

Oh, and I’ll also say – the dev platform is working. Sinevibes‘ Artemiy Pavlov was on hand to show off the amazing stuff he’s doing with oscillators for the KORG ‘logues. They sound the business, covering a rich range of wavetable and modeling goodness – and they quickly made me want a ‘logue, which of course is the whole point. But he seems happy with this as a business, which demonstrates that we really are entering new eras of collaboration and creativity in hardware instruments. And that’s great. Artemiy, since I had almost zero time this month, I’d better just come hang out in Ukraine for some extended nerd time, minus distractions.

Artemiy is happily making sounds as colorful as that jacket. Check sinevibes.com.


One big, open standalone grid for playing everything: dadamachines composer pro

Various devices have tried to do what the computer does – letting you play, sequence, and clock other instruments, and arrange and recall ideas. Now, a new grid is in town, and it’s bigger, more capable, truly standalone, and open in every way.

composer pro makes its debut today at Superbooth. It comes from what may seem an unexpected source – dadamachines, the small Berlin-based maker known for making a plug-and-play toolkit for robotic percussion and, more recently, a clever developer board. But there’s serious engineering and musical experience informing the project.

What you get is an enormous, colored grid with triggers and display, and connectivity – wired and wireless – to other hardware. From this one device, you can then compose, connect, and perform. It’s a sequencer for outboard gear, but it’s also capable of playing internal sounds and effects.

It’s a MIDI router, a USB host, a sampler and standalone instrument, and a hub to clock everything else. It doesn’t need a computer – and yeah, it can definitely replace that laptop if you want, or keep it connected and synced via cable or Ableton Link.

And one more thing – while big manufacturers are starting to wake up to this sort of thing being a product category, composer pro is also open source and oriented toward communities of small makers and patchers who have been working on this problem. So out of the box, it’s set up to play Pure Data, SuperCollider, and other DIY instruments and effects, extending ideas for standalone instrument/effects developed by the likes of monome and Critter & Guitari’s Organelle. That should be significant both if you’re that sort of builder/hacker/patcher yourself, or even if you just want to benefit from their creations in your own music. And it’s in contrast to the proprietary direction most hardware has gone in recent years. It’s open to ideas and to working together on how to play – which is how grid performance got started in the first place.

Disclosure: I’m working with dadamachines as an advisor/coach. That also means I’ll be responsible for getting feedback to them – curious what you think. (And yeah, I also have some ideas and desires for where these sorts of capabilities could lead in the future. As a lot of you have, I’ve dreamt of electronic musical performance tools moving in this direction – I love computers but also hate some of the struggles they’ve brought with them.)

The hardware

I hope you like buttons. composer pro has a 192-pad grid – that’s 16 horizontally by 12 vertically. Add in the rest of the triggers for a grand total of 261 buttons – transport and modes on the top, and the usual scene and arm triggers on the side, plus edit controls and other functions on the left.

For continuous control, there’s a touch strip. And you get a small display and encoder so you can navigate and see what you’re doing.

There’s computational power inside, too – a Raspberry Pi compute module, and additional processing power that runs the device.

Connections

You get just about every form of connectivity (apart from CV/gate, even though this is Superbooth):

Sequencing and clock:

MIDI (via standard DIN connectors, 2 in, 2 out)
DIN sync (for vintage analog gear like the Roland TR-808)
Analog sync I/O (for other analog gear and modular)
USB MIDI (via USB C, for a computer)
USB host, with a 4-port USB hub
Ableton Link (for wireless connections, including to various Mac, Windows, Linux, and iOS software)
Footswitch jack

(There’s a dongle for wifi for Link support.)

Audio:

Headphone jack
Stereo audio in
Stereo audio out

The USB host and 4-port hub is a really big deal. It means you can do the things that normally require a computer – connect other interfaces, add more audio I/O, add USB MIDI keyboards and controllers, whatever.

Sequences and songs

At its heart, composer pro focuses on sequencing – whether you want to work with custom internal instruments, external gear, or both.

You have sixteen slots, which dadamachines dubs “Machines.” Then, you can work with simple step-sequenced rhythms or mono-/polyphonic melodies, and add automation of parameters (via MIDI CC).

Pattern sequences can be up to 16 bars.

There are 12 patterns per Machine slot. (16×12 – get it?)

Patterns + Machines = larger songs. And you can have as many songs as you can fit on an SD card (which, given this is MIDI data, is … a lot).
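
If you like to think in code, here’s one way that data model could be organized – a hedged Python sketch in which every class name and field is my own illustrative guess, not dadamachines’ actual format.

```python
# A hedged sketch of how the described data model might be organized
# (names and fields are illustrative, not dadamachines' actual format).
from dataclasses import dataclass, field
from typing import List, Optional

STEPS_PER_BAR = 16

@dataclass
class Pattern:
    bars: int = 1                                   # up to 16 bars
    steps: List[Optional[int]] = field(default_factory=list)   # note per step
    cc_automation: dict = field(default_factory=dict)          # CC number -> values

    def __post_init__(self):
        if not self.steps:
            self.steps = [None] * (self.bars * STEPS_PER_BAR)

@dataclass
class Machine:
    name: str                                       # e.g. "TR-808 over DIN sync"
    midi_channel: int
    patterns: List[Pattern] = field(default_factory=lambda: [Pattern() for _ in range(12)])

@dataclass
class Song:
    machines: List[Machine] = field(default_factory=list)      # up to 16 slots
    scenes: List[List[int]] = field(default_factory=list)      # pattern index per machine

song = Song(machines=[Machine("drums", 10), Machine("bass synth", 2)])
song.scenes.append([0, 3])   # scene 1: drums pattern 0, bass pattern 3
print(len(song.machines), "machines,", len(song.machines[0].patterns), "patterns each")
```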

The beauty of dadamachines’ approach is, by building this around the grid, you can work in a lot of different ways:

Step-sequence melodies and rhythms in a standard grid view.

Play live – there’s even a MIDI looper – and use standard quantization tools, or not, to decide how much you want your performance to be on the grid.

Trigger patterns one at a time, or in scenes.

Use the touchstrip for additional live control, with beat repeat functions, polyrhythmic loop length, nudge, and velocity (the pads aren’t velocity sensitive, though you can also use an external controller with velocity).

Now you see the logic behind having this enormous 16×12 grid – everything is visible at once. Most hardware, and even devices like Ableton Push, require you to navigate around to individual parts; there’s no way to see the overall sequence. You can bring up dedicated grid pages if you want to focus on playing a particular part or editing a sequence. But there’s an overview page so you also get the big picture – and trigger everything, without menu diving.

dadamachines have set up four views:

Song View – think interactive set list

Scene View – all your available Patterns and Machines

Machine View – focus on one particular instrument and input

Performance View – transform an existing pattern without changing it

And remember, this can be both external gear and internal instruments – with some nice ready-to-play instruments included in the package, or the ability to make your own (in Pd and SuperCollider) if you’re more advanced.

It’s already set up to work with ORAC, the powerful instrument by technobear, featured on the Organelle from Critter & Guitari:

– showing what can happen as devices are open, collaborative, and compatible.

When can you get this?

composer pro is being shown in a fully working – very nice looking – prototype. That also means a chance to get more feedback from musicians.

dadamachines say they plan to put this on sale in late summer.

It’s an amazing accomplishment from an engineering standpoint, from the hands-on time I’ve had with it. I know the lack of velocity-sensitive pads will be a disappointment to some, but I think that also means you’ll be able to afford this and get hardware that’s reliable – and you can always use the touchstrip or connect other hardware for expression.

It also goes beyond what sequencers like the just-announced Pioneer Squid can do, and offers a more intuitive interface than a lot of other boutique offerings – and its openness could support a community exploring ideas. That’s what originally launched grid performance in the first place with monome, but got lost as monome quantities were limited and commercial manufacturers chose to take a proprietary approach.

Stay tuned to CDM as this evolves.

https://dadamachines.com/products/composer-pro/

Press release:

dadamachines announces grid-based MIDI performance sequencer composer pro

composer pro is the new hub for electronic musicians, a missing link for sketching ideas and playing live. It’s a standalone sampler and live instrument, and connects to everything in a studio or onstage, for clock and patterns. And it’s open source and community-powered, ensuring it’s only getting started.

Edit patterns by step, play live on the pads and touch strip, use external controllers – it’s your choice. Sequence and clock external gear, or work with onboard instruments. Clock your whole studio or stage full of gear – and sync via wires or wirelessly.

Finally, there’s a portable device that gives you the control you need, and the big picture on your ideas, while connecting the instruments you want to play. And yes, you’re free to leave the computer at home.

composer pro will be shown to the public for the first time at Superbooth in Berlin, 9-11 May. Sales are planned to start in late summer 2019.

Play:

Use a massive, RGB, 16×12 grid of pads
192 triggers – 261 buttons in total – but organized, clear, and easy
Step sequence or play live
Melodic and rhythmic/drum modes
MIDI looper
Work with quantization or unquantized
Play on the pads or use external controllers
Touch strip for expression, live sequence transformations, note repeat, and more

Stay connected:

MIDI input/output and sync (via USB-C with computer, USB host, and MIDI DIN)
Analog sync (modular, analog gear)
DIN sync support (for vintage instruments like the TR-808)
USB host – with a built-in 4-port hub
Ableton Link support (USB wifi dongle required for wireless use)
Stereo audio in
Stereo audio out
Headphone, footswitch

Onboard sounds and room to grow:

Internal instruments and effects
Powered by open source sound engines, with internal Raspberry Pi computer core
Includes ORAC by technobear, a powerful sequenced sampler

Arrange productions and set lists:

Full automation sequencing (via MIDI CC)
Trigger patterns, scenes, songs
16-measure sequences, 12 scenes per song
Unlimited song storage (restricted only by SD card capacity)


Do some crazy glitchy vidart window shopping on LA’s new FRGTWN

From Frogtown Los Angeles and the eccentric Detroit Underground label comes a new store full of “Art and Technology” and “aesthetics and identities” – which, in part, translates to these amazing, glitch-tastic video art toys! Look:

Detroit Underground have made a name for themselves not just as a label, but as a platform for video, visuals, and custom hardware – including Eurorack modular. The latest addition is a trendy turntable. But with the launch of the new FRGTWN online shop, DU are bringing us other goodies – pretty stuff to hang on your wall, yes, but also my personal favorite, a whole trove of unique custom mod-able video gear.

While everyone is up to speed on analog audio, DU label head Kero works magic with analog video. So looking into this online shop is a bit like gaining access to his unique toybox.

Check it:

FRITZ DECONTROLLER

BLOOD SUGAR SEX VIDEO PRO MAGIK

BPMC AVE CV

DEAD LANGUAGE V2.2

You’ll also find excellent synths by Ewa Justka and Mute Records – more on Ewa’s creations very soon.

And in case you have any money left after investing it in must-have analog video glitch gear, yes, there’s now a custom Detroit Underground turntable, launching alongside the store. It’s a sexy limited edition, custom designed by Neubau Berlin and optimized for high quality playback (so, not scratching/DJing, but for putting on a nice Detroit Underground record or two):

DUTT-181 SERIES LTD EDITION TURNTABLE

The shop is filled with other very specific nerd hipster items that basically sum up things we crave. An app is coming soon, so you’ll be in even more danger of buying this stuff in drunk and/or lonely moments.

https://frgtwn.com/


Automated techno: Eternal Flow generates dance music for you

Techno, without all those pesky human producers? Petr Serkin’s Eternal Flow is a generative radio station – and even a portable device – able to make endless techno and deep house variations automatically.

You can run a simple version of Eternal Flow right in your browser:

https://eternal-flow.ru/

Recorded sessions are available on a SoundCloud account, as well:

But maybe the most interesting way to run this is in a self-contained portable device. It’s like a never-ending iPod of … well, kind of generic-sounding techno and deep house, depending on mode. Here’s a look at how it works; there’s no voiceover, but you can turn on subtitles for additional explanation:

There are real-world applications here: apart from interesting live performance scenarios, think workout dance music that follows you as you run, for example.

I talked to Moscow-based artist Petr about how this works. (And yeah, he has his own deep house-tinged record label, too.)

“I used to make deep and techno for a long period of time,” he tells us, “so I have some production patterns.” Basically, take those existing patterns, add some randomization, and instead of linear playback, you get material generated over a longer duration with additional variation.

There was more work involved, too. While the first version used one-shot snippets, “later I coded my own synth engine,” Petr tells us. That means the synthesized sounds save on sample space in the mobile version.

It’s important to note this isn’t machine learning – it’s good, old-fashioned generative music. And in fact this is something you could apply to your own work: instead of just keeping loads and loads of fixed patterns for a live set, you can use randomization and other rules to create more variation on the fly, freeing you up to play other parts live or make your recorded music less repetitive.
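
Here’s a minimal Python sketch of that approach – a fixed production pattern plus rule-based randomization, so playback never repeats exactly. It’s purely illustrative; the patterns and probabilities are my own stand-ins, not Petr’s actual engine.

```python
# A minimal sketch of the approach Petr describes (illustrative, not his code):
# start from fixed production patterns and apply rule-based randomization so
# playback never repeats exactly.
import random

base_kick  = [1, 0, 0, 0] * 4              # four-on-the-floor, 16 steps
base_hat   = [0, 0, 1, 0] * 4              # offbeat hats
base_chord = [1, 0, 0, 0, 0, 0, 1, 0] * 2  # sparse stabs

def vary(pattern, add_chance=0.1, drop_chance=0.1):
    """Return a new bar: mostly the original, with a few steps added/dropped."""
    out = []
    for hit in pattern:
        if hit and random.random() < drop_chance:
            out.append(0)                   # occasionally drop a hit
        elif not hit and random.random() < add_chance:
            out.append(1)                   # occasionally sneak one in
        else:
            out.append(hit)
    return out

for bar in range(4):                        # an endless stream, trimmed to 4 bars
    print(f"bar {bar + 1}")
    print("  kick ", vary(base_kick,  add_chance=0.02))
    print("  hat  ", vary(base_hat,   add_chance=0.15))
    print("  chord", vary(base_chord, drop_chance=0.25))
```

The same trick works in a live set: keep a handful of base patterns and let rules generate the variation, instead of storing every permutation.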

And this also points to a simple fact: machine learning doesn’t always generate the best results. We’ve had generative music algorithms for many years, which simply produce results based on simple rules. Laurie Spiegel’s ground-breaking Music Mouse, considered by many to be the first-ever software synth, worked on this concept. So, too, did the Brian Eno – Peter Chilvers “Bloom,” which applied the same notion to ambient generative software and became the first major generative/never-ending iPhone music app.

By contrast, the death metal radio station I talked about last week works well partly because its results sound so raunchy and chaotic. But it doesn’t necessarily suit dance music as well. Just because neural network-based machine learning algorithms are in vogue right now doesn’t mean they will generate convincing musical results.

I suspect that generative music will freely mix these approaches, particularly as developers become more familiar with them.

But from the perspective of a human composer, this is an interesting exercise – not necessarily because it puts you out of a job, but because it helps you experiment with thinking about the structures and rules of your own musical ideas.

And, hey, if you’re tired of having to stay in the studio or DJ booth and not get to dance, this could solve that, too.

More:

http://eternal-flow.ru/

Now ‘AI’ takes on writing death metal, country music hits, more

Thanks to new media artist and researcher Helena Nikonole for the tip!
