Reason 10.3 delivers on VST performance promises

We’ve been waiting, but now the waiting is done. Propellerhead has added the VST performance boost it had promised to Reason users – meaning plug-ins now benefit from Reason 10’s patchable racks.

I actually flew to Stockholm, Sweden back in the dark days of December to talk to the engineers about this update (among some other topics). Short version of that story: yeah, it took them longer than they’d hoped to get VST plug-ins operating as efficiently as native devices in the rack.

If you want all those nitty-gritty details, here’s the full story:

Reason 10.3 will improve VST performance – here’s how

But now, suffice to say that the main reason for the hold-up – Reason’s patchable, modular virtual rack of gear – just became an asset rather than a liability. Now that VSTs in Reason perform roughly as they do in other DAWs, what Reason adds is the ability to route those plug-ins however you like, in Reason’s unique interface.

Combine that with Reason’s existing native effects and instruments and third-party Rack Extensions, and I think Reason becomes more interesting as both a live performance rig and a DAW for recording and arranging than before. It could also be interesting to stick a modular inside the modular – as with VCV Rack or this week’s Blocks Base and Blocks Prime from Native Instruments.

Anyway, that’s really all there is to say about 10.3 – it’s what Propellerhead call a “vitamin injection” (which, having seen those dark Swedish winters, I’m guessing all of them need about now).

This also means the engineers have gotten over a very serious and time-consuming hurdle and can presumably move on to other things. It’s also notable that the company has been upfront in talking about a flaw before, during, and after development – and that’s welcome from any music software maker. So props to the Props – now go get some sunshine; you’ve earned it. (And the rest of us can tote these rigs out into the park, too.)

Reason: what’s new


Modular: Silent Strike’s crazy rotating 3D rack QB – you spin me around…


Romania hasn’t really been on my radar as a country of electronic performance, but it does seem to be a country of creative ideas. In any case, it’s Ioan Bârlădeanu who built this rack: an illuminated case with modules mounted on four sides and access to further modules from the top. It makes the best use of the space, and it looks very good. The name: QB (“Cube”).

When you perform on stage – or just for yourself, or for a video – you normally stand with your back to the audience, or maybe sideways, and if you’re “unlucky” the system is so huge that all anyone really sees is a wall. Here some thought has gone into the problem, and the result is also nicer to look at, since each side of the cube has two illuminated bays. Since five sides are reachable, it makes sense to put the most important elements, like the sequencer and company, on top and the remaining modules in the sides.

The system can swallow up to 560HP of module width, which would waste considerably more space in a conventional large rack – and besides, it simply looks better in this form. The case appears to float and is rotated in a way invisible to the viewer; Ioan Bârlădeanu is the first to have actually done it. It just works as an idea – why didn’t the rest of us think of it ourselves?

The only things you really need are longer cables and a bit more spatial thinking. The uncool monster setup isn’t a must, then – even as a Marie Kondo fan you can make music and keep things tidy.

More information

Video

Mike Barclay and his electromechanical boxes as drum machine and sound generator


Almost like a system of different Volca-style electromechanical boxes, Mike Barclay builds various special boxes for his performances. They consist of springs and little hammers whose sound can be altered via adjusters, plus a central sequencer that drives each of the boxes.

Not really something you can buy, but highly inspiring: in a short video on his Facebook page, Mike Barclay shows how the three boxes are driven by the sequencer and trigger various things in an entirely mechanical way. The box on the left is mechanically the most elaborate, with several metal tongues and a plunger that strikes a small tube set into vibration. It sounds like a cross between a rubber band and a bass drum.

Electromechanical – box 2

The second box contains a spring that is excited from two points and produces the typical “electric” spring-reverb manipulations. Some may already know this from the Microphonic Soundbox, except that here it’s also triggered electromechanically.

The third variant: snare?

The snare – or hi-hat – is represented by the third box. Here you can also make the sound “duller” and damp it. The sound principle is similar to the first box, but built differently and somewhat more simply, and arranged on its side. It’s all very nicely made, built and intended for a stage performance the audience can actually see.

It probably won’t turn into a product you can buy. Still, with their style and visual consistency, the boxes look practically ready for it.

More?

If you’d like to see it, check Mike’s Facebook timeline and have a browse around. He’s an artist rather than a “manufacturer”: there are naturally no prices, no availability, and no website – it was simply built for its own sake. Wonderful!

Video

Reason 10.3 will improve VST performance – here’s how

VST brings more choice to Reason, but more support demands, too. Here’s an update on how Propellerhead are optimizing Reason to bring plug-in performance in line with what users expect.

For years, Reason was a walled-off garden. Propellerhead resisted supporting third-party plug-ins, and when they did, introduced their own native Rack Extensions technology for supporting them. That enables more integrated workflows, better user experience, greater stability, and easier installation and updates than a format like VST or AU allows.

But hey, we have a lot of VSTs we want to run inside Reason, engineering arguments be damned. And so Propellerhead finally listened to users, delivering support for VST effects and instruments on Mac and Windows in Reason 9.5. (Currently only VST2 plug-ins are supported, not VST3.)

Propellerhead have been working on improving stability and performance continuously since then. Reason 10.3 is a much-anticipated update, because it addresses a significant performance issue with VST plug-ins – without disrupting one of the things that makes Reason’s native devices work well.

The bad news is, 10.3 is delayed.

The good news is, it works really well. It puts Reason on par with other DAWs as far as VST performance goes. That’s a big deal to Reason users, precisely because in many other ways Reason is unlike other DAWs.

I met with Propellerhead engineers yesterday in Stockholm, including Mattias Häggström Gerdt (product manager for Reason). We got to discuss the issue, their whole development effort, and get hands-on with their alpha version.

Why this took a while

Okay, first, some technical discussion. “Real time” is actually not a thing in digital hardware and software. The illusion of a system working in real time is created by buffering – using very small windows of time to pass audio information, so small that the results seem instantaneous to the user.

There’s a buffer size you set for your audio interface – this one you may already know about. But software also has internal buffers for processing, hidden from the user. In a modular environment, you really want this buffer to be as small as possible, so that patching and processing feel responsive – just as they would if you were using analog hardware. Reason accordingly uses an internal buffer of 64 frames to do just that. That means you can patch and repatch and tweak and play to your heart’s content without any interruptions to your audio stream.
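To put that 64-frame figure in perspective: the latency a buffer adds is just buffer size divided by sample rate, so the internal buffer costs barely a millisecond and a half. A quick back-of-the-envelope sketch (the sample rates are my examples, not Propellerhead’s numbers):

```python
# Rough arithmetic: the latency a buffer adds is buffer_frames / sample_rate.
def buffer_latency_ms(buffer_frames: int, sample_rate: float) -> float:
    return 1000.0 * buffer_frames / sample_rate

for rate in (44100.0, 48000.0, 96000.0):
    print(f"64 frames at {rate:.0f} Hz ≈ {buffer_latency_ms(64, rate):.2f} ms")
# 64 frames at 44100 Hz ≈ 1.45 ms
# 64 frames at 48000 Hz ≈ 1.33 ms
# 64 frames at 96000 Hz ≈ 0.67 ms
```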

Here’s the catch: some plug-in developers prefer larger buffers (higher latency) by design, in order to reduce CPU consumption, even though their plug-ins technically work in Reason’s small-buffer environment. This is common in plug-ins where ultra-low-latency internal processing isn’t as important. But running inside Reason, that approach adds strain to your CPU. Some users won’t notice anything, because they don’t use these plug-ins or use fewer instances of them. But some will see their machine run out of CPU resources faster in Reason than in other DAWs. The result: the same plug-in setup you used in another DAW will make Reason sputter, which is of course not what you want.

Another catch: adjusting the audio buffer size on your interface, which normally helps reduce CPU usage, won’t help in this case. So users encountering this issue are left frustrated.

This is a fixable problem. You give those plug-ins larger buffers when they demand them, while Reason and its devices continue to work as they always have. It’s just there’s a lot of work going back through all the rest of Reason’s code to adjust for the change. And like a lot of coding work, that takes time. Adding more people doesn’t necessarily even speed this up, either. (Ever tried adding more people to a kitchen to “speed up” cooking dinner? Like that.)
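Propellerhead haven’t published how their fix works internally, but the general technique is a familiar one in audio hosts: collect the small blocks the host renders into the larger block a plug-in prefers, and pay the extra latency only on that plug-in’s path. Here’s a minimal sketch of that idea – the class name, block sizes, and `plugin_process` callback are inventions for illustration, not Reason internals:

```python
import numpy as np

class PluginBufferAdapter:
    """Feed a plug-in that wants large blocks from a host that renders small
    ones (e.g. 64 frames), trading latency for fewer plug-in calls.
    A generic sketch of the technique, not Propellerhead's implementation."""

    def __init__(self, plugin_process, host_block=64, plugin_block=512):
        assert plugin_block % host_block == 0
        self.plugin_process = plugin_process  # callable: (plugin_block,) -> (plugin_block,)
        self.host_block = host_block
        self.plugin_block = plugin_block
        self.in_fifo = np.zeros(0)
        # Pre-filling the output with silence is the latency we accept.
        self.out_fifo = np.zeros(plugin_block)

    def process(self, host_buffer):
        # Collect the host's small blocks...
        self.in_fifo = np.concatenate([self.in_fifo, host_buffer])
        # ...and call the plug-in only once a full large block is ready.
        if len(self.in_fifo) >= self.plugin_block:
            block = self.in_fifo[:self.plugin_block]
            self.in_fifo = self.in_fifo[self.plugin_block:]
            self.out_fifo = np.concatenate([self.out_fifo, self.plugin_process(block)])
        # Always hand back exactly one small block, so the host's own timing
        # (and every native device) carries on exactly as before.
        out = self.out_fifo[:self.host_block]
        self.out_fifo = self.out_fifo[self.host_block:]
        return out
```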

When it’s done, existing Reason users won’t notice anything. But users of the affected plug-ins will see big performance gains.

What to expect when it ships

I sat with the engineers looking at an alpha and we measured CPU usage. The results by plug-in are what you might expect.

We worked with three plug-ins by way of example – charts are here. With iZotope Ozone 7, there’s a massive gain in the new build. That makes sense – a mastering plug-in isn’t so concerned about low-latency performance. With Xfer Records Serum, there’s almost none. Native Instruments’ Massive is somewhere in between. These are just typical examples – many other plug-ins will fall somewhere along this range.

Native Instruments’ Massive gets a smaller but still noticeable performance boost. Left: before. Right: after.

iZotope’s Ozone is a more dramatic example. Stack some instances of this mastering-focused plug-in, and you can max out the CPU quickly in Reason (left). But in the Reason 10.3 alpha, you can see the “big batch” approach resolves that performance issue (right).

Those graphs are from the Mac, but the OS doesn’t really matter in this case.

The fix is coming to the public. The alpha is not something you want to run; it’s in the hands of testers who don’t mind working with prerelease software. A public beta won’t happen in the couple of weeks we have left in 2018, but it is coming soon – as soon as it’s done. And of course 10.3 will be a free upgrade for Reason 10 users.

When it ships, Reason 10.3 will give you performance on par with other DAWs. That is, your performance will depend on your CPU and which plug-ins you’re using, but Reason will be more or less the same as other hosts beyond that.

So this isn’t really exciting stuff, but it will make your life easier. We’ll let you know when it arrives, and we’ll try to test the final version.

Official announcement:

Update on Reason and VST performance

For more on Reason and VST support, see their support section:

Propellerhead Software Rack Extensions, ReFills and VSTs


What culture, ritual will be like in the age of AI, as imagined by a Hacklab

Machine learning is presented variously as nightmare and panacea, gold rush and dystopia. But a group of artists hacking away at CTM Festival earlier this year did something else with it: they humanized it.

The MusicMakers Hacklab continues our collaboration with CTM Festival, and this winter I co-facilitated the week-long program in Berlin with media artist and researcher Ioann Maria (born in Poland, now in the UK). Ioann has long brought critical speculative imagination to her work (meaning, she gets weird and scary when she has to), as well as being able to wrangle large groups of artists and the chaos the creative process produces. Artists are a mess – as they need to be, sometimes – and Ioann can keep them comfortable with that and moving forward. No one could have been more ideal, in other words.

And our group delved boldly into the possibilities of machine learning. Most compellingly, I thought, these ritualistic performances captured a moment of transformation for our own sense of being human, as if folding this technological moment in against itself to reach some new witchcraft, to synthesize a new tribe. If we were suddenly transported to a cave with flickering electronic light, my feeling was that this didn’t necessarily represent a retreat from tech. It was a way of connecting some long human spirituality to the shock of the new.

This wasn’t just about speculating about what AI would do to people, though. Machine learning applications were turned into interfaces, making the interaction between gesture and machine clearer. The free, artist-friendly Wekinator was a popular choice. That stands in contrast to corporate-funded AI and how that’s marketed – which is largely as weird consumer convenience. (Get me food reservations tonight without me actually talking to anyone, and then tell me what music to listen to and who to date.)

Here, instead, artists took machine learning algorithms and made them another raw material for creating instruments. This was AI put to work helping machines better enable performance traditions. And this is partly our hope in who we bring to these performance hacklabs: we want people with experience in code and electronics, but also in performance media, musicology, and culture, in various combinations.
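To make that concrete: the Wekinator workflow is supervised learning in miniature – record example pairs of gesture features and the synth parameters you want, fit a model, then run it on live input. Wekinator itself does all of this over OSC with a GUI; the sketch below only illustrates the concept, with invented features, training values, and model choice:

```python
# Concept sketch of a Wekinator-style continuous mapping (not Wekinator itself).
import numpy as np
from sklearn.neural_network import MLPRegressor

# Invented training examples: [hand_x, hand_y, hand_speed] -> [cutoff_hz, reverb_mix]
gestures = np.array([[0.1, 0.2, 0.0],
                     [0.8, 0.3, 0.1],
                     [0.5, 0.9, 0.7],
                     [0.2, 0.7, 0.4]])
params = np.array([[200.0, 0.1],
                   [2000.0, 0.2],
                   [8000.0, 0.9],
                   [1000.0, 0.6]])

model = MLPRegressor(hidden_layer_sizes=(16,), max_iter=5000, random_state=0)
model.fit(gestures, params)

# Live use: each incoming gesture frame becomes parameter values you would
# forward to a synth, e.g. as MIDI CCs or OSC messages.
cutoff, reverb = model.predict(np.array([[0.6, 0.5, 0.3]]))[0]
print(f"cutoff ≈ {cutoff:.0f} Hz, reverb ≈ {reverb:.2f}")
```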

(Also spot some kinetic percussion in the first piece, courtesy dadamachines.)

Check out the short video excerpt or scan through our whole performance documentation. All documentation courtesy CTM Festival – thanks. (Photos: Stefanie Kulisch.)

Big thanks to the folks who give us support. The CTM 2018 MusicMakers Hacklab was presented with Native Instruments and SHAPE, which is co-funded by the Creative Europe program of the European Union.

Full audio (which makes for a nice sort of radio play, somehow, thanks to all these beautiful sounds):

Full video:

2018 participants – all amazing artists, and ones to watch:

Adrien Bitton
Alex Alexopoulos (Wild Anima)
Andreas Dzialocha
Anna Kamecka
Aziz Ege Gonul
Camille Lacadee
Carlo Cattano
Carlotta Aoun
Claire Aoi
Damian T. Dziwis
Daniel Kokko
Elias Najarro
Gašper Torkar
Islam Shabana
Jason Geistweidt
Joshua Peschke
Julia del Río
Karolina Karnacewicz
Marylou Petot
Moisés Horta Valenzuela AKA ℌEXOℜℭℑSMOS
Nontokozo F. Sihwa / Venus Ex Machina
Sarah Martinus
Thomas Haferlach

https://www.ctm-festival.de/archive/festival-editions/ctm-2018-turmoil/transfer/musicmakers-hacklab/

http://ioannmaria.com/

For some of the conceptual and research background on these topics, check out the Input sessions we hosted. (These also clearly inspired, frightened, and fired up our participants.)

A look at AI’s strange and dystopian future for art, music, and society

Minds, machines, and centralization: AI and music


MusicMakers Hacklab Berlin to take on artificial minds as theme

AI is the buzzword on everyone’s lips these days. But how might musicians respond to themes of machine intelligence? That’s our topic in Berlin, 2018.

We’re calling this year’s theme “The Hacked Mind.” Inspired by AI and machine learning, we’re inviting artists to respond in the latest edition of our MusicMakers Hacklab, hosted with CTM Festival in Berlin. In that collaborative environment, participants will have a chance to answer that question however they like. They might harness machine learning to transform sound or create new instruments – or respond to ideas around machines and algorithms in other ways, through performance and composition.

As always, the essential challenge isn’t just hacking code or circuits or art: it’s collaboration. By bringing together teams from diverse backgrounds and skill sets, we hope to exchange ideas and knowledge and build something new, together, on the spot.

The end result: a live performance at HAU2, capping off a dense week-plus festival of adventurous electronic music, art, and new ideas.

Hacklab application deadline: 05.12.2017
Hacklab runs: 29.1 – 4.2.2018 in Berlin (Friday opening, Monday – Saturday lab participation, Sunday presentation)

Apply online:
MusicMakers Hacklab – The Hacked Mind – Call for works

We’re not just looking for coders or hackers. We want artists from a range of backgrounds. We absolutely want people to wrestle with machine learning tools – some are specifically designed to be trained to recognize sounds and gestures and to work with musical instruments. But we also hope for unorthodox artistic reactions to the topic and its larger social implications.

To spur you on, we’ll have a packed lineup of guests, including Gene Kogan, who runs the amazing resource ml4a – machine learning for artists – and has done AV works like these:

And there’s Wesley Goatley, whose work delves into the hidden methods and biases behind machine learning techniques and what their implications might be.

Of course, machine learning and training on big data sets opens up new possibilities for musicians, too. Accusonus recently explained that to us in terms of new audio processing techniques. And tools like Wekinator now use machine training to recognize gestures more intelligently, so you can transform electronic instruments and how humans play them.
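The other common pattern is classification: train on a handful of labelled gestures, then let each recognized gesture trigger an event – start a loop, fire a drum hit, switch a scene. Wekinator handles this over OSC without any code; the toy snippet below, with made-up feature vectors and labels, is only meant to show the shape of the idea:

```python
# Toy gesture classifier: labelled examples in, a triggerable class out.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

features = np.array([[0.1, 0.0, 0.9],   # "raise hand"
                     [0.9, 0.1, 0.1],   # "swipe"
                     [0.5, 0.8, 0.2],   # "push"
                     [0.2, 0.1, 0.8]])  # another "raise hand" example
labels = ["raise", "swipe", "push", "raise"]

clf = KNeighborsClassifier(n_neighbors=1).fit(features, labels)
print(clf.predict([[0.15, 0.05, 0.85]]))  # -> ['raise'], mapped to whatever action you like
```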

Dog training. No, not like that – training your computer on dogs. From ml4a.

Meet Ioann Maria

We have as always a special guest facilitator joining me. This time, it’s Ioann Maria, whose AV / visual background will be familiar to CDM readers, but who has since entered a realm of specialization that fits perfectly with this year’s theme.

Ioann wrote a personal statement about her involvement, so you can get to know where she’s come from:

My trip into the digital started with real-time audiovisual performance. From there, I went on to study Computer Science and AI, and quickly got into fundamentals of Robotics. The main interest and focus of my studies was all that concerns human-machine interaction.

While I was learning about CS and AI, I was co-directing LPM [Live Performers Meeting], the world’s largest annual meeting dedicated to live video performance and new creative technologies. In that time I started attending Dorkbot Alba meet-ups – “people doing strange things with electricity.” From our regular gatherings arose an idea of opening the first Scottish hackerspace, Edinburgh Hacklab (in 2010 – still prospering today).

I grew up in the spirit of open source.

For the past couple of years, I’ve been working at the Sussex Humanities Lab at the University of Sussex, England, as a Research Technician, Programmer, and Technologist in Digital Humanities. SHL is dedicated to developing and expanding research into how digital technologies are shaping our culture and society.

I provide technical expertise to researchers at the Lab and University.

At the SHL, I do software and hardware development for content-specific events and projects. I’ve been working on long-term projects involving big data analysis and visualization; my main focus, for example, has been developing data visualization tools that look for speech patterns and analyze anomalies in criminal proceedings in the UK over the centuries.

I also touched on the technical possibilities and limitations of today’s conversational interfaces, learning more about natural language processing, speech recognition and machine learning.

There’s a lot going on in our Digital Humanities Lab at Sussex and I’m feeling lucky to have a chance to work with super brains I got to meet there.

In recent years, I’ve dedicated my time to speaking about the issues of digital privacy and computer security and to promoting hacktivism. That too found its way into the academic environment – in 2016 we started the Sussex Surveillance Group, a cross-university network that explores critical approaches to understanding the role and impact of surveillance techniques, their legislative oversight, and systems of accountability in the countries that make up what is known as the ‘Five Eyes’ intelligence alliance.

With my background in new media arts and performance, and some knowledge of computing, I’m awfully curious about what will happen during the MusicMakers Hacklab 2018.

What fascinating and sorrowful times we happen to live in. How will AI manifest and substantiate our potential, and how will we translate this whole weight and meaning into music, into performing art? Is it going to be us for, or against, the machine? I can’t wait to meet our to-be-chosen Hacklab participants, and to link our brains and forces into something creative, technological and new – entirely IRL!

MusicMakers Hacklab – The Hacked Mind – Call for works

In collaboration with CTM Festival, CDM, and the SHAPE Platform.
With support from Native Instruments.


Timetosser: a real-time audio sampling beat tool

Alter Audio – Timetosser

The Timetosser from Alter Audio is a small device with 16 buttons and colored backlighting. It simply sits between the audio source and the mixer and allows a kind of real-time performance or “remixing” of the audio signal passing through. Formally speaking, it’s an effects unit and/or a small sampler.

The bottom row lets you hold, repeat, or reverse segments of a beat to produce variations quickly and easily. Probably the most important function of the Timetosser is a kind of recorder or sampler: you hit the top buttons right in time with the beat to capture individual hits out of it, then play them back again. In the demo the captured slices sound identical, but they don’t have to – otherwise a single pad would already do the job. With the beat from the source still running, the samples can then be replayed in a row or individually, however you like.

Real-time remixes with the Timetosser

The slicing is kept simple because it’s driven by plain musical divisions: tapping a button from 1/4 to 1/16 starts the sampling, and the tap itself tells the machine which division is meant. The reverse works too – you can mute sections. So that the machine learns the tempo, there’s a tap-tempo function, and to get the triplet variants you double-tap the 1/4 – 1/16 buttons. Operation is very intuitive.
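Alter Audio haven’t said how the firmware is built, but the arithmetic behind those divisions is simple enough to sketch: tap tempo fixes the beat length, and each division button then corresponds to a slice length in samples to capture and repeat. A rough sketch under those assumptions (the function names and sample rate are mine, not the Timetosser’s code):

```python
# Division-based slice lengths, as in the Timetosser concept.
SAMPLE_RATE = 44100

def slice_length_samples(bpm: float, division: int, triplet: bool = False) -> int:
    """division: 4 = quarter note, 8 = eighth, 16 = sixteenth.
    On the hardware, a double-tap selects the triplet variant."""
    quarter_seconds = 60.0 / bpm                 # one beat, known from tap tempo
    length = quarter_seconds * 4.0 / division    # straight division of a bar
    if triplet:
        length *= 2.0 / 3.0                      # triplet = 2/3 of the straight value
    return round(length * SAMPLE_RATE)

bpm = 120.0
for div in (4, 8, 16):
    print(div, slice_length_samples(bpm, div), slice_length_samples(bpm, div, triplet=True))
```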

On the technical side, alongside the RCA connections for audio there are also USB and sync on minijack. There’s no price or exact delivery date yet, but this is no longer a prototype – it apparently already works just fine. An ordering option should therefore show up soon on the Alter.Audio site.

Videos

An older demo video

There’s a synth symphony for 100 cars coming, based on tuning

100 cars, 100 sound systems, 100 different versions of the pitch A: Ryoji Ikeda has one heck of a polyphonic automobile synthesizer coming.

The project is also the first new hardware from Tatsuya Takahashi after the engineer/designer stepped down from his role heading up the analog gear division at KORG. And so from the man who saw the release of products like the KORG volca series and Minilogue during his tenure, we get something really rather different: a bunch of oscillators connected to cars to produce sound art.

Tats teams up for this project with Maximilian Rest, the man behind boutique maker E-RM, who has proven his obsessive-compulsive engineering chops on their Multiclock.

And wow, that industrial design. From big factories to small run (100 units), Tats has come a long way – and this is the most beautiful design I’ve seen yet from Max and E-RM. It’s a drool-worthy design fetish object recalling Dieter Rams and Braun.

I spoke briefly to Tatsuya to get some background on the project, though the details will be revealed in the performance in Los Angeles and by Red Bull Music Academy.

The original hardware is simple. In almost a throwback to the earliest days of electronic music, the boxes themselves are just tone generators. Those controls you see on the panel determine octave and volume. Before the performance, details on the execution are a bit guarded, but this sounds like just the sort of simple box that would perfectly match Max’s insanely perfectionist approach.

What makes this tone generator special is, there are a hundred of them, each hooked up to one of one hundred cars.

Yeah, you heard right: we’re talking massively polyphonic, art-y ghetto blasting. The organizers say the cars were selected for their unique audio systems. (Now, that’s my way of being a car fan.) Car owners even contributed special cars to the symphony, making this an auto show cum sound happening, evidently both in an installation and performance.

One hundred cars tuned to the same frequency would sound like … well, phase cancellation. So each oscillator is tuned to a different frequency, in a kind of museum of what the note “A” has been over the years. The reality is, we’re probably hearing a whole lot of classical music in the “wrong” key, because the tuning of A was only standardized in the past century. (Even today, A=440Hz and A=442Hz compete in symphonies, with A=440Hz the most common in general use, and near-universal in electronic music.)

That huge range is part of why any discussion of the “mathematically pure” or “healing” 432 Hz is, well, nonsense. (I can deal with that some time if you really want, but for now let’s file it under “weird things you can read on the Internet,” alongside the flat Earth.)

Once you get away from the modern blandness of everything being 440 Hz, or the pseudo-science weirdness of the 432 Hz cult, you can discover all sorts of interesting variety. For instance, one of the oscillators in the performance is tuned to this:

A = 376.3Hz
*1700: Pitch taken by Delezenne from an old dilapidated organ of l’Hospice Comtesse, Lille, France

Hey, who’s to say that particular organ isn’t the one “tuned to the natural frequency of the universe”?
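For a sense of scale, the usual way to compare tunings is in cents – 1200·log2(f/f_ref), with 100 cents to a semitone. By that measure (my arithmetic, not a figure from the piece), Delezenne’s organ sits roughly 271 cents, nearly three semitones, below A=440, while the modern 440-versus-442 squabble amounts to about 8 cents:

```python
import math

def cents(f_ref: float, f: float) -> float:
    """Interval between two frequencies in cents (100 cents = 1 semitone)."""
    return 1200.0 * math.log2(f / f_ref)

print(cents(440.0, 442.0))   # ≈ +8 cents: the modern orchestral squabble
print(cents(440.0, 432.0))   # ≈ -32 cents: the "healing" 432 Hz
print(cents(440.0, 376.3))   # ≈ -271 cents: Delezenne's 1700 organ
```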

You’ll get all those frequencies in some huge, wondrous cacophony if you’re lucky enough to be in LA for the performance.

It’s presented as part of the Red Bull Music Academy Music Festival, October 15. (I have no idea how you’d evaluate the claim that this is the largest-ever symphony orchestra, though with one hundred cars, it’s probably the heaviest! If anyone has historical ideas on that, I’m all ears.)

And of course, it’s in the perfect place for a piece about cars: Los Angeles. Wish I were there; let us know how it is!

https://la.redbullmusicacademy.com/event/ryoji-ikeda-a-for-100-cars

Photo credit: Carys Huws for RBMA.


Can you make music – or even more – with Apple’s iPhone X?

Apple iPhone X

Apple announced three new iPhones yesterday, among them the iPhone X. What’s special isn’t just the new, faster A11 Bionic chip, but something that might later become a creative resource.

What I mean is the phalanx of cameras and sensors in the new iPhone X. The iPhone 8 and 8 Plus offer largely the same potential as previous iPhones: they’re faster, and the camera results can be computed more quickly, which helps prevent noise. The same goes for the iPhone X – but what does any of this have to do with music?

The Apple iPhone X and music

In principle, the new FaceID sensor array could do what Kinect has done until now: recognize gestures, hand positions, and much more. I’ve seen entire art exhibitions filled with this kind of equipment, with 3D objects projected onto cardboard that look like living organisms and translate the viewer’s own dance moves into patterns. Of course, the same approach could be used to generate music.

These capabilities could translate the movements of a drummer or other musician, helping to trigger instruments and sounds, or create a kind of theremin that takes into account not just the distance of the hands from two antennas, but also their positions.
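None of this depends on Apple’s actual APIs: once any tracking layer – Kinect, the TrueDepth camera, whatever – hands you a hand position per frame, the theremin-style mapping itself is a few lines. A toy sketch, with all ranges and names invented:

```python
# Toy "camera theremin": map a tracked hand position to pitch and volume.
# Assumes some gesture-tracking layer already delivers normalized coordinates.
def theremin_map(hand_z: float, hand_y: float,
                 f_low: float = 110.0, f_high: float = 1760.0):
    """hand_z: 0.0 (far) .. 1.0 (near) -> pitch; hand_y: 0.0 .. 1.0 -> volume."""
    z = max(0.0, min(1.0, hand_z))
    # An exponential pitch mapping feels more musical than a linear one.
    freq = f_low * (f_high / f_low) ** z
    volume = max(0.0, min(1.0, hand_y))
    return freq, volume

# One frame from a hypothetical tracker:
freq, vol = theremin_map(hand_z=0.75, hand_y=0.4)
print(f"{freq:.1f} Hz at volume {vol:.2f}")   # 880.0 Hz at volume 0.40
```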

The magic words: motion tracking

Why shouldn’t something like this be able to trigger a series of sample loops, or “record” audio in mid-air? Today’s iPhone can easily handle all of this, since even the previous-generation processor, with its six cores, has roughly the power of a current 13-inch MacBook. It’s probably obvious that these applications could also help VJs and visual artists, or be used to build new kinds of controllers. After all, the iPhone X has an infrared sensor that responds to heat – it’s in the nature of the thing.

iPhone X – all in one

And anyone who knows Kinect applications may also know instruments like Tim Thompson’s Space Palette. That consists of a self-built frame, with a Kinect and a computer standing in a corner of the room – something the iPhone X could now accomplish on its own, and it could look much the same. The important point is that the iPhone would be sound generator, controller, and gesture-input machine in one – and damned portable. Far less effort is required than setting up a Kinect and a computer; at most you’d need a stand to aim the cameras at the performer. And then the necessary cash …

The iPhone X at first glance

Finally, an arpeggiator for everyone – the N(oo)DL(e)R

Conductive Labs NDLR Arpeggiator

Nobody knows what NDLR stands for – it doesn’t appear on any website in the world. What is explained precisely, though, is what the NDLR does. The company behind it is called Conductive Labs, and Steven Barile in particular is the man of the hour.

The two developers pronounce the device “Noodler”, so this is arguably the first official noodling machine. It sends MIDI notes, broken out as you choose into pads, bass, and of course arpeggios. For pads, you can select several synthesizers at once and a number of voices, plus a “carpet” of notes in the upper and lower ranges – essentially an instant pad generator.

The same exists for bass, except that part is naturally monophonic and normally lives on a single MIDI channel. The patterns can apparently all be generated, and the tempo can be adjusted freely. For the bass there are parameters for rhythm and variation, so you get new basslines instantly – and of course you can transpose. So it really is a full-grown performance tool. The arpeggiator and friends work polyphonically and behave as you’d expect; the video, meanwhile, shows things you couldn’t otherwise do with it.
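Conductive Labs haven’t documented the NDLR’s internals, but the core job of any arpeggiator – breaking a held chord into a timed stream of MIDI notes – can be sketched in a few lines. Everything here (pattern names, the `send_note` callback) is an invented illustration, not the NDLR’s firmware:

```python
# Minimal arpeggiator sketch: a held chord becomes a timed stream of notes.
import itertools
import time

def arpeggiate(chord, pattern="up", octaves=2, bpm=120, division=16,
               steps=32, send_note=print):
    """chord: MIDI note numbers, e.g. [60, 64, 67] for C major."""
    notes = [n + 12 * o for o in range(octaves) for n in sorted(chord)]
    if pattern == "down":
        notes = notes[::-1]
    elif pattern == "updown":
        notes = notes + notes[-2:0:-1]
    step_seconds = 60.0 / bpm * 4.0 / division   # e.g. 16th notes at the set tempo
    for note in itertools.islice(itertools.cycle(notes), steps):
        send_note(note)                          # forward as a MIDI Note On in practice
        time.sleep(step_seconds)

# arpeggiate([60, 64, 67], pattern="updown", octaves=2, bpm=100)
```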

The NDLR project is a Kickstarter crowdfunding campaign. The cheapest way to get a “Noodler” costs US$175, and it’s due to be finished in March 2018. That seems fair: the hardware looks solid, and there’s a display for eight parameters, eight endless encoders, and a few buttons for setting the appropriate chords.

The two developers, Daryl and Steve, apparently come from the chip industry and now want to devote themselves to musicians. On the site you’ll also find a technical video for nerds who want to know what the device does and how it works under the hood.