Mike Barclay and his electromechanical boxes as drum machine and sound generator

Electromechanical box

Almost like a system of different electromechanical boxes in the Volca style, Mike Barclay builds various special boxes for his performances. They consist of springs and little hammers whose sound can be altered with adjusters. Added to that is a central sequencer that drives each of the boxes.

Not really for sale, but highly inspiring: today Mike Barclay shows in a short video on his Facebook page how the three boxes are driven by the sequencer and trigger various things in an entirely mechanical way. The left box is mechanically the most elaborate, with several metal tongues and a plunger that triggers a small tube set into vibration. It sounds like a cross between a rubber band and a bass drum.

Electromechanical – box 2

The second box contains a spring that is excited from two points and produces the "electric" manipulations typical of spring reverb. Some may already know this from the Microphonic Soundbox, except that here the spring is also struck electromechanically.

The third variant – a snare?

The snare, or perhaps hi-hat, is represented by the third electro-box. Here the sound can also be damped and made "duller." The sound principle is similar to the first box, but built differently and somewhat more simply, and mounted on its side. All of it is beautifully crafted, and made – and meant – for a stage performance you can watch.

It probably won't turn into a purchasable product. Still, with their style and visual unity, the boxes look practically ready for it.

More?

If you want to take a look, browse Mike's Facebook timeline and poke around a bit. He is an artist rather than a "manufacturer." There are, of course, no prices, no availability, and no website – it was simply built "just because." Wonderful!


Reason 10.3 will improve VST performance – here’s how

VST brings more choice to Reason, but more support demands, too. Here’s an update on how Propellerhead are optimizing Reason to bring plug-in performance in line with what users expect.

For years, Reason was a walled garden. Propellerhead resisted supporting third-party plug-ins, and when they finally did, they introduced their own native Rack Extensions technology to do it. That enables more integrated workflows, better user experience, greater stability, and easier installation and updates than a format like VST or AU allows.

But hey, we have a lot of VSTs we want to run inside Reason, engineering arguments be damned. And so Propellerhead finally listened to users, delivering support for VST effects and instruments on Mac and Windows in Reason 9.5. (Currently only VST2 plug-ins are supported, not VST3.)

Propellerhead have been working on improving stability and performance continuously since then. Reason 10.3 is a much-anticipated update, because it addresses a significant performance issue with VST plug-ins – without disrupting one of the things that makes Reason’s native devices work well.

The bad news is, 10.3 is delayed.

The good news is, it works really well. It puts Reason on par with other DAWs as far as VST performance goes. That's a big deal to Reason users, precisely because in so many other ways Reason is unlike other DAWs.

I met with Propellerhead engineers yesterday in Stockholm, including Mattias Häggström Gerdt (product manager for Reason). We got to discuss the issue, their whole development effort, and get hands-on with their alpha version.

Why this took a while

Okay, first, some technical discussion. “Real time” is actually not a thing in digital hardware and software. The illusion of a system working in real time is created by buffering – using very small windows of time to pass audio information, so small that the results seem instantaneous to the user.

There’s a buffer size you set for your audio interface – this one you may already know about. But software also has internal buffers for processing, hidden from the user. In a modular environment, you really want this buffer to be as small as possible, so that patching and processing feel responsive – just as they would if you were using analog hardware. Reason accordingly has an internal buffer of 64 frames to do just that. That means that without any interruptions to your audio stream, you can patch and repatch and tweak and play to your heart’s content.
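To illustrate the idea (a hypothetical Python sketch, not Reason's actual engine), a host with a small internal buffer renders everything in fixed 64-frame chunks, so edits to the rack become audible within about 1.5 ms at 44.1 kHz:

```python
INTERNAL_FRAMES = 64  # Reason's internal buffer size, per the article

def render(devices, total_frames):
    """Run a chain of devices over consecutive 64-frame blocks.

    Each 'device' here is just a callable taking and returning a block;
    the small block size is what keeps patching feeling immediate."""
    out = []
    for start in range(0, total_frames, INTERNAL_FRAMES):
        block = [0.0] * INTERNAL_FRAMES  # silence in, devices fill it
        for device in devices:
            block = device(block)
        out.extend(block)
    return out
```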

Here’s the catch: some plug-in developers prefer larger buffers (higher latency) by design, in order to reduce CPU consumption – even though their plug-ins technically work in Reason’s small-buffer environment. This is common in plug-ins where ultra-low-latency internal processing isn’t as important. But running inside Reason, that approach adds strain to your CPU. Some users won’t notice anything, because they don’t use these plug-ins, or use fewer instances of them. But some will see their machine run out of CPU resources faster in Reason than in other DAWs. The result: the same plug-in setup you used in another DAW will make Reason sputter, which is of course not what you want.

Another catch: adjusting the audio buffer size on your interface, which normally reduces CPU usage, won’t help in this case. So users encountering this issue are left frustrated.

This is a fixable problem. You give those plug-ins larger buffers when they demand them, while Reason and its devices continue to work as they always have. It’s just that there’s a lot of work in going back through the rest of Reason’s code to adjust for the change. And like a lot of coding work, that takes time. Adding more people doesn’t necessarily speed this up, either. (Ever tried adding more people to a kitchen to “speed up” cooking dinner? Like that.)
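The approach described – collecting the host's small blocks into one bigger batch for a plug-in that asks for it – can be sketched like this. This is a simplified Python illustration of the general technique, not Propellerhead's code; the trade-off is output latency equal to one batch:

```python
class BatchingWrapper:
    """Collect small host blocks into one large batch for a plug-in that
    prefers big buffers, then hand the processed audio back out in small
    blocks. Output is delayed by exactly one batch of silence."""

    def __init__(self, plugin, host_block=64, batch_blocks=8):
        self.plugin = plugin              # callable: frames -> frames
        self.host_block = host_block
        self.batch = batch_blocks * host_block
        self.pending = []                 # input frames awaiting processing
        self.ready = [0.0] * self.batch   # processed frames (starts as latency)

    def process(self, block):
        assert len(block) == self.host_block
        self.pending.extend(block)
        if len(self.pending) >= self.batch:
            # One big, cheaper call instead of many tiny ones
            self.ready.extend(self.plugin(self.pending[:self.batch]))
            del self.pending[:self.batch]
        out = self.ready[:self.host_block]
        del self.ready[:self.host_block]
        return out
```

The host keeps running at 64 frames; only the wrapped plug-in sees the big buffer, which is why Reason's own devices are unaffected.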

When it’s done, existing Reason users won’t notice anything. But users of the affected plug-ins will see big performance gains.

What to expect when it ships

I sat with the engineers looking at an alpha and we measured CPU usage. The results by plug-in are what you might expect.

We worked with three plug-ins by way of example – charts are here. With iZotope Ozone 7, there’s a massive gain in the new build. That makes sense – a mastering plug-in isn’t so concerned about low-latency performance. With Xfer Records Serum, there’s almost none. Native Instruments’ Massive is somewhere in between. These are just typical examples – many other plug-ins will fall along this range.

Native Instruments’ Massive gets a modest but measurable performance boost. Left: before. Right: after.

iZotope’s Ozone is a more dramatic example. Stack some instances of this mastering-focused plug-in, and you can max out the CPU quickly in Reason (left). But in the Reason 10.3 alpha, you can see the “big batch” approach resolves that performance issue (right).

Those graphs were made on the Mac, but the OS won’t really matter in this case.

The fix is coming to the public. The alpha is not something you want to run; it’s in the hands of testers who don’t mind working with prerelease software. A public beta won’t happen in the couple of weeks we have left in 2018, but it is coming soon – as soon as it’s done. And of course 10.3 will be a free upgrade for Reason 10 users.

When it ships, Reason 10.3 will give you performance on par with other DAWs. That is, your performance will depend on your CPU and which plug-ins you’re using, but Reason will be more or less the same as other hosts beyond that.

So this isn’t really exciting stuff, but it will make your life easier. We’ll let you know how it goes and try to test the final version.

Official announcement:

Update on Reason and VST performance

For more on Reason and VST support, see their support section:

Propellerhead Software Rack Extensions, ReFills and VSTs

The post Reason 10.3 will improve VST performance – here’s how appeared first on CDM Create Digital Music.

What culture, ritual will be like in the age of AI, as imagined by a Hacklab

Machine learning is presented variously as nightmare and panacea, gold rush and dystopia. But a group of artists hacking away at CTM Festival earlier this year did something else with it: they humanized it.

The MusicMakers Hacklab continues our collaboration with CTM Festival, and this winter I co-facilitated the week-long program in Berlin with media artist and researcher Ioann Maria (born in Poland, now in the UK). Ioann has long brought critical speculative imagination to her work (meaning, she gets weird and scary when she has to), as well as being able to wrangle large groups of artists and the chaos the creative process produces. Artists are a mess – as they need to be, sometimes – and Ioann can keep them comfortable with that and moving forward. No one could have been more ideal, in other words.

And our group delved boldly into the possibilities of machine learning. Most compellingly, I thought, these ritualistic performances captured a moment of transformation for our own sense of being human, as if folding this technological moment in against itself to reach some new witchcraft, to synthesize a new tribe. If we were suddenly transported to a cave with flickering electronic light, my feeling was that this didn’t necessarily represent a retreat from tech. It was a way of connecting some long human spirituality to the shock of the new.

This wasn’t just about speculating about what AI would do to people, though. Machine learning applications were turned into interfaces, making gestures and machines interact more clearly. The free, artist-friendly Wekinator was a popular choice. That stands in contrast to corporate-funded AI and how that’s marketed – which is largely as a weird, consumer convenience. (Get me food reservations tonight without me actually talking to anyone, and then tell me what music to listen to and who to date.)
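The mapping idea is simple enough to sketch: record example pairs of sensor input and synth parameters, then look up the nearest training example at performance time. This toy Python version only mimics the spirit of tools like Wekinator – it is not Wekinator's API, and real models interpolate rather than snap to the nearest example:

```python
import math

class GestureMapper:
    """Nearest-neighbor gesture-to-parameter mapping (toy example)."""

    def __init__(self):
        self.examples = []  # list of (input_vector, output_params)

    def record(self, inputs, outputs):
        """Store one training pair, e.g. hand position -> filter cutoff."""
        self.examples.append((list(inputs), list(outputs)))

    def run(self, inputs):
        """Return the parameters paired with the closest recorded input."""
        return min(self.examples,
                   key=lambda ex: math.dist(ex[0], inputs))[1]
```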

Here, instead, artists took machine learning algorithms and made them another raw material for creating instruments. This was AI getting the machines to better enable performance traditions. And this is partly our hope in who we bring to these performance hacklabs: we want people with experience in code and electronics, but also performance media, musicology, and culture, in various combinations.

(Also spot some kinetic percussion in the first piece, courtesy dadamachines.)

Check out the short video excerpt or scan through our whole performance documentation. All documentation courtesy CTM Festival – thanks. (Photos: Stefanie Kulisch.)

Big thanks to the folks who give us support. The CTM 2018 MusicMakers Hacklab was presented with Native Instruments and SHAPE, which is co-funded by the Creative Europe program of the European Union.

Full audio (which, somehow, makes for a nice sort of radio play, thanks to all these beautiful sounds):

Full video:

2018 participants – all amazing artists, and ones to watch:

Adrien Bitton
Alex Alexopoulos (Wild Anima)
Andreas Dzialocha
Anna Kamecka
Aziz Ege Gonul
Camille Lacadee
Carlo Cattano
Carlotta Aoun
Claire Aoi
Damian T. Dziwis
Daniel Kokko
Elias Najarro
Gašper Torkar
Islam Shabana
Jason Geistweidt
Joshua Peschke
Julia del Río
Karolina Karnacewicz
Marylou Petot
Moisés Horta Valenzuela AKA ℌEXOℜℭℑSMOS
Nontokozo F. Sihwa / Venus Ex Machina
Sarah Martinus
Thomas Haferlach



For some of the conceptual and research background on these topics, check out the Input sessions we hosted. (These also clearly inspired, frightened, and fired up our participants.)

A look at AI’s strange and dystopian future for art, music, and society

Minds, machines, and centralization: AI and music


MusicMakers Hacklab Berlin to take on artificial minds as theme

AI is the buzzword on everyone’s lips these days. But how might musicians respond to themes of machine intelligence? That’s our topic in Berlin, 2018.

We’re calling this year’s theme “The Hacked Mind.” Inspired by AI and machine learning, we’re inviting artists to respond in the latest edition of our MusicMakers Hacklab hosted with CTM Festival in Berlin. In that collaborative environment, participants will have a chance to answer these questions however they like. They might harness machine learning to transform sound or create new instruments – or even answer ideas around machines and algorithms in other ways, through performance and composition ideas.

As always, the essential challenge isn’t just hacking code or circuits or art: it’s collaboration. By bringing together teams from diverse backgrounds and skill sets, we hope to exchange ideas and knowledge and build something new, together, on the spot.

The end result: a live performance at HAU2, capping off a dense week-plus festival of adventurous electronic music, art, and new ideas.

Hacklab application deadline: 05.12.2017
Hacklab runs: 29.1 – 4.2.2018 in Berlin (Friday opening, Monday – Saturday lab participation, Sunday presentation)

Apply online:
MusicMakers Hacklab – The Hacked Mind – Call for works

We’re not just looking for coders or hackers. We want artists from a range of backgrounds. We want people to wrestle with machine learning tools – absolutely, and some are specifically designed to train to recognize sounds and gestures and work with musical instruments. But we also hope for unorthodox artistic reactions to the topic and larger social implications.

To spur you on, we’ll have a packed lineup of guests, including Gene Kogan, who runs the amazing resource ml4a – machine learning for artists – and has done AV works like these:

And there’s Wesley Goatley, whose work delves into the hidden methods and biases behind machine learning techniques and what their implications might be.

Of course, machine learning and training on big data sets open up new possibilities for musicians, too. Accusonus recently explained that to us in terms of new audio processing techniques. And tools like Wekinator now use machine training to recognize gestures more intelligently, so you can transform electronic instruments and how humans play them.

Dog training. No, not like that – training your computer on dogs. From ml4a.

Meet Ioann Maria

We have as always a special guest facilitator joining me. This time, it’s Ioann Maria, whose AV / visual background will be familiar to CDM readers, but who has since entered a realm of specialization that fits perfectly with this year’s theme.

Ioann wrote a personal statement about her involvement, so you can get to know where she’s come from:

My trip into the digital started with real-time audiovisual performance. From there, I went on to study Computer Science and AI, and quickly got into fundamentals of Robotics. The main interest and focus of my studies was all that concerns human-machine interaction.

While I was learning about CS and AI, I was co-directing LPM [Live Performers Meeting], the world’s largest annual meeting dedicated to live video performance and new creative technologies. In that time I started attending Dorkbot Alba meet-ups – “people doing strange things with electricity.” From our regular gatherings arose an idea of opening the first Scottish hackerspace, Edinburgh Hacklab (in 2010 – still prospering today).

I grew up in the spirit of open source.

For the past couple of years, I’ve been working at the Sussex Humanities Lab at the University of Sussex, England, as a Research Technician, Programmer, and Technologist in Digital Humanities. SHL is dedicated to developing and expanding research into how digital technologies are shaping our culture and society.

I provide technical expertise to researchers at the Lab and University.

At the SHL, I do software and hardware development for content-specific events and projects. I’ve been working on long-term jobs involving big data analysis and visualization, where my main focus for example was to develop data visualization tools looking for speech patterns and analyzing anomalies in criminal proceedings in the UK over the centuries.

I also touched on the technical possibilities and limitations of today’s conversational interfaces, learning more about natural language processing, speech recognition and machine learning.

There’s a lot going on in our Digital Humanities Lab at Sussex and I’m feeling lucky to have a chance to work with super brains I got to meet there.

In the past years, I dedicated my time speaking about the issues of digital privacy, computer security and promoting hacktivism. That too found its way to exist within the academic environment – in 2016 we started the Sussex Surveillance Group, a cross-university network that explores critical approaches to understanding the role and impact of surveillance techniques, their legislative oversight and systems of accountability in the countries that make up what are known as the ‘Five Eyes’ intelligence alliance.

With my background in new media arts and performance, and some knowledge in computing, I’m awfully curious about what will happen during the MusicMakers Hacklab 2018.

What fascinating and sorrowful times we happen to live in. How will AI manifest and substantiate our potential, and how will we translate this whole weight and meaning into music, into performing art? Is it going to be us for, or against, the machine? I can’t wait to meet our to-be-chosen Hacklab participants, and to link our brains and forces into something creative-tech-new – entirely IRL!

MusicMakers Hacklab – The Hacked Mind – Call for works

In collaboration with CTM Festival, CDM, and the SHAPE Platform.
With support from Native Instruments.


Real-time audio sampling beat tool Timetosser

Alter Audio – Timetosser

The Timetosser from Alter Audio is a small device with 16 buttons and colored backlighting. It is simply placed between the audio source and the mixer and allows a kind of real-time performance or "remixing" of the audio signal passing through. Formally, then, it's an effects unit and/or a small sampler.

Using the bottom row, you can hold and repeat segments of a beat or play them backwards, producing variations quickly and easily. The Timetosser's most important function, though, is a kind of recorder or sampler: you use the upper buttons exactly in time with the beat to capture hits out of the beat and then play them back. In the example these are sonically identical pieces, but they don't have to be – otherwise a single pad would suffice. With the source beat running on, the captured samples can then be played back in a row or individually, in any order.
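The capture-and-replay core can be boiled down to a handful of lines – a loose Python sketch of the concept, not Alter.Audio's implementation:

```python
class SliceRepeater:
    """Capture beat-length slices of the incoming audio on demand and
    replay them (forward or reversed) in place of the live signal."""

    def __init__(self):
        self.slices = {}  # pad number -> captured slice (list of samples)

    def capture(self, pad, live_slice):
        """Record the current beat slice onto a pad, in time with the beat."""
        self.slices[pad] = list(live_slice)

    def play(self, pad, live_slice, reverse=False):
        """Replay a captured slice; pass the live audio through if empty."""
        stored = self.slices.get(pad)
        if stored is None:
            return live_slice
        return stored[::-1] if reverse else stored
```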

Real-time remixing with the Timetosser

Choosing a division is simple, because it's driven by a basic musical factor: tapping one of the 1/4 to 1/16 buttons starts sampling and, at the same time, tells the machine which division is the right one. Conversely, sections can also be muted. So that the machine learns the tempo, there's a tap-tempo function. To reach the triplet variants, you double-tap the 1/4–1/16 buttons. The way it's operated is very intuitive.
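Tap tempo itself is just arithmetic over the intervals between presses – a generic sketch, not the Timetosser's firmware:

```python
def bpm_from_taps(tap_times):
    """Estimate BPM from a list of monotonically increasing tap
    timestamps (in seconds) by averaging the intervals between them."""
    if len(tap_times) < 2:
        raise ValueError("need at least two taps")
    intervals = [b - a for a, b in zip(tap_times, tap_times[1:])]
    return 60.0 / (sum(intervals) / len(intervals))
```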

On the technical side, alongside the RCA connections for audio there are USB and sync on a mini-jack. There's no price or exact delivery date yet, but this is no longer a prototype – it apparently already works fine. So an ordering option should soon appear on the Alter.Audio site.


An older demo video

There’s a synth symphony for 100 cars coming, based on tuning

100 cars, 100 sound systems, 100 different versions of the pitch A: Ryoji Ikeda has one heck of a polyphonic automobile synthesizer coming.

The project is also the first new hardware from Tatsuya Takahashi after the engineer/designer stepped down from his role heading up the analog gear division at KORG. And so from the man who saw the release of products like the KORG volca series and Minilogue during his tenure, we get something really rather different: a bunch of oscillators connected to cars to produce sound art.

Tats teams up for this project with Maximilian Rest, the man behind boutique maker E-RM, who has proven his obsessive-compulsive engineering chops on their Multiclock.

And wow, that industrial design. From big factories to small run (100 units), Tats has come a long way – and this is the most beautiful design I’ve seen yet from Max and E-RM. It’s a drool-worthy design fetish object recalling Dieter Rams and Braun.

I spoke briefly to Tatsuya to get some background on the project, though the details will be revealed in the performance in Los Angeles and by Red Bull Music Academy.

The original hardware is simple. In almost a throwback to the earliest days of electronic music, the boxes themselves are just tone generators. Those controls you see on the panel determine octave and volume. Before the performance, details on the execution are a bit guarded, but this sounds like just the sort of simple box that would perfectly match Max’s insanely perfectionist approach.

What makes this tone generator special is, there are a hundred of them, each hooked up to one of one hundred cars.

Yeah, you heard right: we’re talking massively polyphonic, art-y ghetto blasting. The organizers say the cars were selected for their unique audio systems. (Now, that’s my way of being a car fan.) Car owners even contributed special cars to the symphony, making this an auto show cum sound happening, evidently both in an installation and performance.

One hundred cars tuned to the same frequency would sound like … well, phase cancellation. So each oscillator is tuned to a different frequency, in a kind of museum of what the note “A” has been over the years. The reality is, we’re probably hearing a whole lot of classical music in the “wrong” key, because the tuning of A was only standardized in the past century. (Even today, A=440Hz and A=442Hz compete in symphonies, with A=440Hz the most common in general use, and near-universal in electronic music.)
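To put numbers on that variety: intervals between tunings are usually compared in cents (1200 cents per octave, 100 per semitone). A small Python sketch – the 376.3 Hz organ pitch quoted later in this piece turns out to sit nearly three semitones below modern concert pitch:

```python
import math

def cents(reference_hz, other_hz):
    """Signed interval from reference to other, in cents."""
    return 1200.0 * math.log2(other_hz / reference_hz)

# A few values for A4: the modern standards named in the article, plus
# the historical Lille organ pitch it quotes.
for name, a4 in [("modern concert", 440.0),
                 ("bright orchestral", 442.0),
                 ("Lille organ, c. 1700", 376.3)]:
    print(f"A4 = {a4:6.1f} Hz ({name}): {cents(440.0, a4):+7.1f} cents vs 440")
```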

That huge range is part of why any discussion of the “mathematically pure” or “healing” 432 Hz is, well, nonsense. (I can deal with that some time if you really want, but let’s for now file it under “weird things you can read on the Internet,” alongside the flat Earth.)

Once you get away from the modern blandness of everything being 440 Hz, or the pseudo-science weirdness of the 432 Hz cult, you can discover all sorts of interesting variety. For instance, one of the oscillators in the performance is tuned to this:

A = 376.3Hz
*1700 : Pitch taken by Delezenne from an old dilapidated organ of l’Hospice Comtesse, Lille, France

Hey, who’s to say that particular organ isn’t the one “tuned to the natural frequency of the universe”?

You’ll get all those frequencies in some huge, wondrous cacophony if you’re lucky enough to be in LA for the performance.

It’s presented as part of the Red Bull Music Academy Music Festival, October 15. (I have no idea how you’d evaluate the claim that this is the largest-ever symphony orchestra, though with one hundred cars, it’s probably the heaviest! If anyone has historical ideas on that, I’m all ears.)

And of course, it’s in the perfect place for a piece about cars: Los Angeles. Wish I were there; let us know how it is!


Photo credit: Carys Huws for RBMA.


Can you make music – or even more – with Apple's iPhone X?

Apple iPhone X

Yesterday Apple announced three new iPhones, among them the iPhone X. What's special isn't just the new, faster A11 Bionic chip, but something that might later become a creative resource.

What's meant is the phalanx of cameras and sensors in the new iPhone X. The iPhone 8 and 8 Plus have largely the same potential as previous iPhones: they're faster, and the results for the cameras can be computed more quickly, preventing noise. The same goes for the iPhone X – but what does all this have to do with music?

The Apple iPhone X and music

In principle, the new FaceID sensor array could do what Kinect has done until now: recognize gestures, hand positions, and much more. I've seen entire art exhibitions filled with this instrumentarium, in which 3D objects are projected onto cardboard and look like living organisms that can turn your own dance moves into patterns. Of course, you could generate music the same way.

These capabilities could translate the movements of a drummer or musician and thus help trigger instruments and sounds, or create a kind of theremin that takes into account not only the distance of the hands from two antennas, but also their positions.
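That kind of theremin mapping is straightforward once any tracker (FaceID's depth camera, Kinect, whatever) hands you coordinates. A speculative Python sketch, assuming a hypothetical tracker that outputs normalized 0–1 hand positions:

```python
def hands_to_voice(pitch_pos, volume_pos, f_lo=110.0, f_hi=880.0):
    """Map one hand to pitch, the other to volume, theremin-style.

    Pitch is mapped exponentially so equal hand movements produce
    equal musical intervals; volume is a simple clamped linear map."""
    freq = f_lo * (f_hi / f_lo) ** pitch_pos
    amp = max(0.0, min(1.0, volume_pos))
    return freq, amp
```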

The magic word: motion tracking

Why shouldn't something like this be able to trigger a series of sample loops? Or "record" audio in mid-air? Today's iPhone can handle all of this with ease, since even the previous six-core processor has roughly the power of a current 13-inch MacBook. That these applications could also help VJs and visual artists, or let you build novel controllers, is probably obvious. After all, the iPhone X has an infrared sensor that reacts to heat – that's in the nature of the thing.

iPhone X – all in one

And anyone who knows Kinect applications may also know instruments like Tim Thompson's Space Palette. It consists of a hand-built frame, with a Kinect and a computer standing in a corner of the room – something the iPhone X could accomplish on its own today, and it could look like this. The important point is that the iPhone would be sound generator, controller, and gesture-input machine in one – and damn portable. It takes far less effort than setting up a Kinect and a computer; at most you'd need a stand to aim the cameras at the performer. And then there's the necessary cash …

The iPhone X at first glance

Finally, an arpeggiator for everyone – the N(oo)DL(e)R

Conductive Labs NDLR Arpeggiator

Nobody knows what NDLR stands for – it isn't stated on any website in the world. What the NDLR does, however, is explained precisely. The company behind it is called Conductive Labs; Steven Barile in particular is the man of the hour.

The two pronounce the device "Noodler," so this is arguably the first official "noodling machine." It sends MIDI notes, broken out into pads, bass, and of course arpeggios. For pads you can choose several synthesizers at once and a number of voices, plus a "carpet" of notes covering the upper and lower range – an instant pad generator, in effect.

The same exists for basses, except that part is naturally monophonic and normally on just one MIDI channel. The patterns can apparently all be generated, and the tempo freely adjusted. For the bass there are parameters for rhythm and variation, to get new basslines instantly. Transposing is possible too, of course. So it really is a full-grown performance tool. The arpeggiator and friends work polyphonically and behave as you'd expect. In the video, however, you can see things that aren't otherwise possible.
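At its core, the arpeggiator part is the classic walk-the-chord algorithm the NDLR builds on – a generic Python sketch, not Conductive Labs' code:

```python
def arpeggiate(chord, pattern="up", steps=8):
    """Emit one MIDI note number per clock step, cycling through the
    held chord in the chosen pattern ('up', 'down', or 'updown')."""
    notes = sorted(chord)
    if pattern == "down":
        notes = notes[::-1]
    elif pattern == "updown":
        notes = notes + notes[-2:0:-1]  # up, then back down without repeats
    return [notes[i % len(notes)] for i in range(steps)]
```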

The NDLR project is a Kickstarter crowdfunding campaign. The cheapest way to get a "Noodler" costs $175, with delivery planned for March 2018. That seems reasonable: the hardware appears solid, and there's an eight-parameter display, eight endless encoders, and several buttons for dialing in the appropriate chords.

The two developers, Daryl and Steve, apparently come from the chip industry and now want to devote themselves to musicians. On the site you'll also find a technical video for nerds who want to know what the device does and how it works.

Hardware granular sampler GR-1 planned by Tasty Chips

Tasty Chips GR-1 Synthesizer

Many people wish for a kind of performance sampler with knobs or faders – real hardware that also works on its own. That's exactly what Tasty Chips Electronics now wants to build with the GR-1.

In the current rendered preview, the sampler features an envelope and further faders that presumably act on the sample material or set the density – at least the parameter names suggest so. The pots further down carry "spray" and similar coinages from the classic sampling/granular domain and apparently aim to make exactly that precisely controllable, while the fader beneath the display could be the position within the "sample." You also get a display showing the waveform of the current sample. Whether it's a touchscreen isn't clear yet, since the crowdfunding campaign has yet to be announced.
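Those labels line up with how granular engines generally work: a playhead position in the sample, random "spray" around it, and a grain density. A Python sketch of that general idea, not Tasty Chips' engine:

```python
import random

def schedule_grains(sample_len, position, spray, density, grain_len, seconds):
    """Pick grain start points around a playhead.

    position: 0..1 playhead within the sample
    spray:    max random offset from the playhead, in frames
    density:  grains per second
    Returns (start_frame, grain_len) pairs, clamped inside the sample."""
    grains = []
    for _ in range(int(density * seconds)):
        onset = position * sample_len + random.uniform(-spray, spray)
        start = int(max(0, min(sample_len - grain_len, onset)))
        grains.append((start, grain_len))
    return grains
```

Freezing a sample, as mentioned below, then amounts to holding `position` still while grains keep firing.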

Tasty Chips – plenty of the unusual

Tasty Chips are known for unusual projects, among them a synthesizer meant to make the SID sound far more accessible via pot control, and experiments with samples. So it apparently made sense to go a step further and build a true granular sampler that lives up to what today's technology allows.

Granular processing also lends itself to effectively freezing samples. The last – and also first – synthesizer to use this technique was Roland's V-Synth GT and its predecessors. Questions certainly remain, such as whether samples need preparation, and whether the device records samples itself, gets them fed in via card or USB, or both.

The site still shows references to mobile devices, but these look more like leftovers from a website-builder template. So we'll have to wait until the information is complete. In general, the device has a lot of potential.

The GR-1, as it's officially called, will be polyphonic.

Further information will appear on Facebook and, of course, the manufacturer's website as soon as it's ready.

There is already a set of audio demos

Inside the transformational AV duo of Paula Temple and Jem the Misfit

Paula Temple and Jem the Misfit are working on the latest iteration of a project about transformation. It melts and fragments, crystallizes and forms, from its rich palette of hybridized techno and ambient textures, sonic and visual alike.

And now, it’s set to be involved in some way in transformation beyond just the confines of a single performance – as a statement about what society might do differently and how artists can contribute. With NODE Forum in Frankfurt am Main, Germany coming this weekend, the duo will premiere Nonagon II, a sequel to their stunning 2014 AV show in Amsterdam’s retina-popping EYE cinema (one of the real highlights of that year’s Amsterdam Dance Event). They’re looking to extend a profound but, sadly, rarely seen collaboration into updated structures while engaging NODE’s activist theme, “Designing Hope.”

That makes this a perfect time for CDM to bring the two together – Paula Temple, the techno legend (R&S Records) known for her brutal productions, and Jem the Misfit, one of the top practitioners of live visual performance.

For reference, here’s a look at the previous iteration, though we’re keen to see the new evolution:

Jem the Misfit (aka Jemma Woolmore), left, with Paula Temple, right.


CDM: First, I think from an AV standpoint, it’s really significant that you’re together on stage. Obviously that sends a message to the audience, but what does it mean for playing together? Are you communicating there – even if just by your presence?

Jem: Paula and I work closely together before and during the show. Being on stage physically is really important for timing and connection in the performance; we give each other verbal cues, but also react to each other’s body language. We also work closely together before the show, practicing and discussing the ideas and flow of the performance. It is also important that we are both onstage to highlight that this is a collaboration between two artists working together to build the show.

Jemma, it feels like what you’re doing is really cinematic, but it also breaks up that rectangle (with geometries, etc.). What’s your approach to the screen here? Of course, in the first version, you were in an actual cinema – where might this go in future?

Jem: Breaking the regular rectangle of the screen is something I try to achieve in all my performances. With the Nonagon show, I have a clear geometric language built around the nine-sided nonagon form and I construct abstract forms using MadMapper to translate the visuals through these geometries. As you say, the Nonagon show is highly cinematic and was originally designed for a cinema context for our show at The Eye in Amsterdam. For Nonagon II at NODE, I am using a little less of the Nonagon geometries and instead moving from these fixed, tight geometries, eventually breaking their borders and allowing the visuals to flow across the screen as the show develops. I am also interested in putting emphasis on light intensity and color to influence mood in this version of the show. In future iterations I could envisage this leading to more development in using lighting as well as video and bringing the geometries off the main screen and out into 3D space.


Paula, this is a different sound world than a lot of people know from you. Is there a connection to the techno productions they may know better? Does that impact the approach to timbre, to rhythm?

Paula: I think it is the same sound world, just not as strictly dance floor-aimed. But I know what you mean; it even surprises me how easily people who follow my music recognize my style in my more experimental live sets. It is one reason why I prefer to perform the experimental sets at festivals such as UNSOUND or INTONAL or the NONAGON II AV at NODE; the crowd knows my music more as an emotional expression and can therefore connect to the music beyond a released piece of music. There are still recognizable elements, like from my track called Deathvox. When I’m producing I never consciously think about timbre or rhythm — that way of thinking is too detached. I’m feeling emotionally, I’m opening my sensory gating channels, connecting feelings into electronic sound without thinking too technically, and therefore being deeply immersed in that state to give a translation of those emotions through sound. People who really like my music seem to be tuned into that state too.

https://soundcloud.com/paulatemple/deathvox-deathvox-ep [embedding not allowed here]

Can you tell us a bit about the sound world here? What are its sources; how was it produced?

Paula: The sources to me are the thoughts and feelings that develop into these pieces. Lately, they have come from reflecting on social injustices happening and dystopian dreams, or even falling asleep to movies and waking up at a scary moment!

For example, one track has a working title called “Earth,” where I would have a recurring dream where everything green — plants, trees, vegetables — turns black and dies within seconds, and Earth is so hurt, so angry at what we humans have done, that Earth asks the Sun for help and asks the Sun to eat Earth. I remember at the time of making “Earth,” I was trying to watch the movie Melancholia and as always, I fall asleep and then I’m waking up as the movie ends, still half asleep, wondering what’s happening!

When producing, I am working in Ableton Live, with customized drum devices I’ve developed in the last 3 years and jamming on my [Dave Smith Instruments] Oberheim OB-6 or a virtual instrument like Tension [in Ableton Suite].

You’ve changed the music here for this edition, I know. What’s new in this version?

Paula: We’ve decided to keep the remix I made for Fink in the show, as the lyrics literally relate to hope, to not giving up. Plus there are new pieces relating to what Jem has also been inspired by lately, such as corporate-made environmental or socioeconomic regressions and aggression, entanglement, or the Angela Davis book Freedom Is a Constant Struggle.

Jemma, how did you work on the visual material; how was it influenced by that music? I know there was some shooting of stuff melting, but … how did that come about; where was the design intention on your side and how did you collaborate together on that?

Jem: For the original Nonagon show, Paula and I developed the music and visuals in tandem, based around a common structure that included working in 9 parts and using 9 specific actions (such as distort, reverse, stretch, etc.) to apply visually or musically. This led me to find ways of manipulating form both in virtual space and using real forms — as you say, building and melting geometric objects and capturing this in time-lapse. So visually, Nonagon was about applying these specific actions to geometries and moving through an exploration of form, in connection with Paula manipulating her sound in similar ways.

In Nonagon II, the focus has shifted from purely formal aims to more specific thematic ideas. When NODE approached me about performing at the festival, their theme ‘Designing Hope’ really caught me as a challenge, and I knew Paula would also be interested in tackling this theme. When I contacted Paula about NODE, we both agreed that we should shift the focus in Nonagon to try and address this idea of designing or generating hope through our performance – hence creating Nonagon II.

Our approach to the theme is that there can be no hope without action. So as well as Paula’s action to donate her fee to the charity Women in Exile, the new trajectory for Nonagon II is to move from a place of fear through to an empowering place of action. Through the show we transition from simplification to complexity, individuality to multiplicity, fear to action.



Visually, I am signifying this (again) through geometries that develop from simple shapes into complex systems, falling, melting and merging along the way, using color and light intensity to transform the emotional impact throughout the show.

Interestingly, in the time since we last worked together – over a year ago – Paula and I have found that our ideas and the development of our work have followed similar processes and align in many areas. We have both independently decided to use the term ‘entanglement’: this idea that everything is linked, and that over-simplification of systems, ignoring their relationship to one another, is incredibly dangerous – for instance, the supposed self-maintaining economic system championed by neo-liberalism, ignoring its entangled relationship with climate and natural resource systems. We have also both read Angela Davis’s book Freedom Is a Constant Struggle, which talks about building connections across political movements and the importance of moving outside narrowly defined communities and working together.

Also, the idea of acknowledging fragility in the balance of all our systems and having some humility in regard to our place in this universe has been important for both our practices.

Can you each describe a bit your live rig onstage? Now, presumably we’re meant to be watching the screen, not you two, but is it important for you to be able to make this a live improvisation?

Jem: For the visual setup, I am running Resolume [VJ/visual performance tool/media server] and MadMapper, and using the Xone:K2 MIDI controller from Allen & Heath. There is no pre-programmed timeline in any of this setup, so it is all improvised. Paula and I like to practice the performance several times so that we have worked through the flow and impact of specific points in the show, but we are able to improvise fully, making each performance unique.

Paula: My setup is simple — Ableton Live, a Push 2 controller and an Allen & Heath K2 controller. I care more about the music working succinctly with Jem’s visuals, encouraging the audience to feel, to reflect within, or to get a sense of taking some kind of positive action, than about making it a live improvisation.


“Designing Hope” is the theme of this year’s NODE. Paula, I understand you donated your fee – what’s your intention as far as doing something socially active, with this project, or with other projects?

Paula: Considering the theme ‘Designing Hope,’ a simple question came to mind: who needs hope the most right now? Then I looked at who locally is giving hope, and I learned about Women in Exile, a non-profit organization founded in 2002 by refugee women, which works closely with refugee women in and around Brandenburg and Berlin.

In their activities, Women in Exile visit the refugee camps in Brandenburg to offer proactive support to refugee women from the perspective of those affected, to exchange information on what is going on, and to gather information on the needs of women living in the camps. They organize seminars and workshops for refugee women on different topics: how to improve their difficult living situation, develop perspectives to fight for their rights in the asylum procedure, and defend themselves against sexualized/physical violence, discrimination and exclusion. They present current issues, such as the hopelessness of deportation, to different organizations nationwide in order to bring awareness of refugee women’s issues to society. They give an incredible amount of energy and support to women whose worlds have been turned upside down. Donating a fee is the least we could do. Our hope, with our best intentions, is to invite others at the event to think about who we are designing hope for.

[Ed.: I’m familiar with this organization, too – you can find more or contact them directly:]


What does it mean to be involved with NODE here, and with this community? (Realizing neither of us is a VVVV user, Jemma, but of course there’s more than that! Curious if that’s meaningful to you to be able to soak up some of that side of this, too.)

Jem: I think we are both excited about being involved at NODE this year and interacting with a community that is working at the intersection of technology and art, as well as pushing ideas around how the art/tech crossover can be used to inspire communities outside of art+tech. This is where I see our performance fitting, even if we are not specifically using VVVV. Personally, I am looking forward to a few extra days at the festival and exploring the possibilities of VVVV, as well as meeting the VVVV community and exploring possible crossovers in our work.




The post Inside the transformational AV duo of Paula Temple and Jem the Misfit appeared first on CDM Create Digital Music.