Composers doing normal s*** is one of the best things on the Internet right now

Composers. You’ve seen them literally put on pedestals, in bronze and granite. Here they are in daily life – and it’s charming.

Yes, it’s “Composers doing normal s***”, and in the midst of truly grotesque things on Twitter, it’s the breath of fresh air we need right now. Behold!

https://twitter.com/NormalComposers

Classic photos are often so exceptional that we forget that sometimes … they weren’t. Here’s Varese caught in a dull moment:

Pauline Oliveros, with coffee, and also with an elephant:

Xenakis and Feldman:

Debussy, slumbering, as you would expect a French composer to do:

Laurie Anderson may be “doing normal s***” but somehow looks awesome at it, retro NYC style:

Different trains? Different bikes. It’s a good activity, as long as it’s not gonna rain.

Meanwhile, Igor Stravinsky is all over the feed, always winning … and yes, even really clicking with some animals:

Igor reigns supreme. He’s only briefly outdone, as by Lutoslawski eating soup:

I’ll just conclude with a few more… my alma mater Sarah Lawrence College, which fellow music school graduate Meredith Monk models for … well, really, kind of on the nose, to be honest:

Sorry, this is a music technology site. Fine. Pierre Schaeffer with a record player. (DJ Concrete…)

Too low tech? Here’s Penderecki with a Nokia.

Happy? But that’s kind of a down note to end on, so here’s Lenny Bernstein on a swing – pure joy:

Speaking of Stravinsky, I wish I could find some of the photos of him and other great composers lounging at his pool. But the “famous composer who would most easily fit into an episode of MTV’s Cribs” is undoubtedly Mister Rite of Spring himself, who left Russia behind, embraced capitalism in a major way, and found this sweet pad with an enormous pool in Hollywood. Seriously.

Igor Stravinsky home Los Angeles [Russian architectural landmarks]

…and it was on sale for a cool $4 mil [classical WCRB]


Wavehole Approach to Granular Synthesis Using Xenakis Screens

For those interested, see here.

See here for previous posts featuring Xenakis and granular synthesis.

What – or rather, who – is Xenakis? Via Wikipedia: Iannis Xenakis – “Xenakis pioneered the use of mathematical models in music such as applications of set theory, stochastic processes and game theory and was also an important influence on the development of electronic and computer music.”

Sloo is the maddest, most swarming soft synth you’ve ever heard

When was the last time you just got lost in a synthesizer? Like, when you forgot everything else you were doing and just turned knobs and forgot what hour it was?

Well, if it’s been a while, you might want to try Sloo.

If you want, you don’t actually need to read any more. Just know that Sloo is a thing for Reaktor from Tim Exile, and it involves a gazillion oscillators, and it will make totally mental noises. It feels like someone has just heated up a giant, hot, steaming Jacuzzi of oscillators and you’ve jumped in and dunked your head.

Here, this is literally the first sound I managed to make with it, plugging it into Bitwig Studio (just for kicks), and then … uh … let’s say I’m sure I did something highly technical and sophisticated. I definitely didn’t just turn up a bunch of knobs and move the morph fader around. That suggests I don’t really know what I’m doing, and just mess with stuff like a toddler. Ha!

Sloo makes loads of other sounds, too. Here’s Tim Exile’s own demo, which gives you a taste of just how much synth-y synthness is in this synth, and how synthtastic that is:

Okay, so what is this thing, anyway?

Think of 48 independent voices – and imagine each voice has not just pitch as a parameter, but oscillator type, frequency modulation, and modulation via an LFO and mod matrix.

Now you can take all of those parameters and randomize clouds of voices around each one. Where things get really interesting is moving the “morph” control – then these voices all “swarm” to a new destination, in denser or sparser clouds. (If you know the Swarmatron, that does something like this with just pitch. Now, imagine … a bunch of parameters.)
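If that’s hard to picture, here’s a minimal Python sketch of the idea as I understand it. The 48-voice count comes from Sloo itself; the parameter names, values, and spread amounts are placeholders of mine, not Tim Exile’s actual Reaktor internals.

import random

N_VOICES = 48
PARAMS = ["pitch", "fm_amount", "lfo_rate"]  # stand-ins for Sloo's per-voice parameters

def random_cloud(centre, spread):
    # One randomized setting per voice, scattered around a centre value.
    return [
        {p: centre[p] + random.uniform(-spread, spread) for p in PARAMS}
        for _ in range(N_VOICES)
    ]

centre = {"pitch": 60.0, "fm_amount": 0.2, "lfo_rate": 1.0}
cloud_a = random_cloud(centre, spread=2.0)   # the current swarm
cloud_b = random_cloud(centre, spread=8.0)   # a destination swarm (denser or sparser)

def morph(a, b, t):
    # t = 0.0 keeps the current swarm; t = 1.0 lands every voice on the destination.
    return [
        {p: (1 - t) * va[p] + t * vb[p] for p in PARAMS}
        for va, vb in zip(a, b)
    ]

swarm = morph(cloud_a, cloud_b, t=0.5)       # halfway through a morph

Dragging the morph fader is essentially sweeping that t value, voice by voice – hence the swarming.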

And that’s it, really. Sloo isn’t complicated, but that’s a good thing – the results are insanely varied. You can make fairly slick, conventional lead synth sounds or wild EDM-ish rhythms or psychotic Xenakis-style sound assaults or any number of dozens of whole genres lying in between. And you can actually decide to do all of those things and morph between them.

There are various other performance features: multiple scale settings (click on the different letters of the word “scale” – cute), pattern storage with live pattern switching, quantization, and an amplitude envelope (though that, too, is routed to the independent voices).

If you want to delve in still deeper, start clicking the ‘+’ icons scattered through the interface, and you’ll find multiple filter types, LFO modulation and routing, scale editing, fine-tuned pitch spread options, and other more granular controls. The whole instrument is indeed a lot more sophisticated than it may look at first blush, by design.


For getting hands-on, there’s Komplete Kontrol S-Series compatibility out of the box. I also found accessing parameters was really easy via Bitwig Studio, which mapped nicely to the parameters (and provided its own useful modulation and pattern control).

But — I can’t emphasize this enough — you don’t need to know what you’re doing. The randomization settings here are always at hand in the clock-looking icon at the top, and you can always dial through previous random settings.

At first, I thought everyone would latch onto one particular sound the synth makes – until I realized that sound was an almost entirely random occurrence, and the synth makes plenty more besides.

Here are some examples from Herr Exile:

So, good grief, this is good. £39 / US$48 / €45 gets you a copy, though you’ll need Reaktor. (But I have to say, of all the software I advise people to invest in, at the moment Reaktor is way, way up on the top of the list.)

And Tim Exile, if you haven’t checked him out yet, is a rare bird himself, both a virtuoso electronic soloist who always gets everyone having a devilishly good time, and a master crafter of electronic inventions, largely by delving into the furthest depths of Reaktor.

Also, Reaktor experts – this patch is unlocked, meaning you can get into its powerful inner workings if you’re ready.

Have at it:

https://shop.timexile.com


fluXpad is an insanely immediate music drawing tool for iPad

Make an interface simpler, and you might push your musical expression further. That’s the realization you have using fluXpad, a new drawing app. It’s not that it’s a dumbed-down rendition of other tools. It’s that doodling with sounds is a totally different experience than the point-and-click fine editing you might be used to.

fluXpad, out this week, is the latest app to translate the imagination of inventive, quirky music act Mouse on Mars into software. (Disclosure: I worked with them to develop their handheld effect instrument WretchUp. We were also really interested in immediacy, simplified gestural controls, and single-screen functionality.)

How to explain it? Let’s let a crazy puppet do it for us:

Of course, there’s a long history here. The dream of translating a drawing into sound directly is presumably as old as notation, and saw early implementations by the likes of Iannis Xenakis with his UPIC system (and later the deep IanniX system). Mouse on Mars’ own research started with tablets and dates back some years (also involving Florian Grote, who worked on WretchUp). But now it’s here, in a version anyone can use on the iPad.

The key here is getting everything on one screen. On that screen, though, there are different parts – each color-coded – so you can combine more than one sound.


Six of these are “melody” sequencers based on samples. These are re-pitched on the y axis, so you get cartoonish transformations of sound if you like.
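As a back-of-the-envelope illustration of what that y-axis re-pitching boils down to (my own math, not fluXpad’s actual code – the pad height and 24-semitone range are invented for the example): a drawn point’s vertical position becomes a resampling rate, which is exactly what produces those cartoonish transpositions.

def playback_rate(y, pad_height, semitone_range=24):
    # Map a touch's vertical position (0 = bottom of the pad) to a resampling rate.
    semitones = (y / pad_height) * semitone_range - semitone_range / 2
    return 2 ** (semitones / 12)   # +12 semitones doubles playback speed (and pitch)

rate = playback_rate(450, 600)     # three quarters of the way up a 600-px pad: ~1.41x, six semitones up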

One is a “percussion” sampler, using seven samples you choose.

So seven parts total, and each of those has seven patterns. You don’t get more than that, so doing an extended set with patterns will be a bit tricky. Then again, improv is part of the point.

You can draw in or play patterns. Unfortunately, there’s no velocity – one axis of the performance pad is wasted; it’d be nice to have finger position represent different velocities.

You can quantize or un-quantize patterns destructively. Quantization makes it really easy to get grooves going, but you aren’t restricted to the grid – though you are restricted to a fixed length. (If you want longer patterns, though, you can set an extreme bpm and just divide up the bar. You can’t mix pattern lengths, though, which would be nice.)

Inside, you get basic sample editing – so you can determine loop, envelope, and start and stop points.


The app is well-connected, too.

Getting audio in and out works with external audio or apps:
Record from the mic
Audiobus
IAA (Inter App Audio)
AudioShare
iTunes Library support (including sample import)
Export/backup (including recorded sounds, though you can only re-open the patterns in fluXpad)

Sync:
Ableton Link
MIDI clock

As with lots of apps these days, the developers assume users will want sound packs. So you get a bunch of sounds out of the box from Mouse on Mars and others, and more projects and sounds can be purchased in-app. I’m really curious just how well that business model is working – and whether it varies from app to app. In the meantime, though, I have to say I think recording your own samples may be the main draw.

And it looks insanely fun. I’d love to have more flexibility with rhythm/pattern length and velocity, and the ability to store more patterns, if there’s some way to do those things without sacrificing the one-screen layout. It’s also interesting to imagine what might be done with pressure data from Apple Pencil on the iPad Pro. But it’s already a compelling, lovely app. Check it out:

http://mominstruments.com/fluxpad/

Side note —

The history is itself interesting, drawing together [oops] some like-minded individuals with notions of how to do this:

Mouse on Mars hired Florian Grote who is now a part of the Native Instruments team, to work on an idea for a prototype app to draw and sequence sounds. With the success of Apple’s iPad it became clear that this tablet would be a perfect platform to bring the idea to a larger audience. At that time Mouse on Mars happened to meet composer and developer Jan Trützschler v Falkenstein via STEIM (Dutch institute for electronic music of which Werner had been the artistic director for a while).
Jan created several music apps such as Gliss and SQRT under his own label TeaTracks. He had been exploring similar ideas regarding the drawing of sounds during his PhD at the University of Birmingham which resulted in the iPhone app Gliss.
It made sense to them to team up and merge their ideas into fluXpad – the first intuitive drawing and sequencing music app.

More demos:

And for some insight into Mouse on Mars’ process:


How Music Can Predict the Human/Machine Future [re:publica Talk, Video]

This week, at Germany’s re:publica conference – an event linking offline and online worlds – I addressed the question of how musical inventions can help predict the way we use tools. I started all the way back tens of thousands of years ago with the first known (likely) musical instrument. From there, I looked at how the requirements of musical interfaces – in time and usability – can inform all kinds of design problems.

And I also suggested that musicians don’t lag in innovation as much as people might expect.

I thought about whether I wanted to post this as a video, since it would be more structured if I wrote it up as an article. But it occurred to me that some people might like to hear me talk off the cuff, “ums” and all, and that those who do could provide some feedback. I really never give the same talk twice; I’m constantly revising my thoughts, and part of the reason is being challenged by feedback. (Yes, blogging may seem like a solo monologue, but in my experience it’s more like a feedback loop than an echo chamber. Otherwise, I wouldn’t keep doing it.)

Full description:

http://13.re-publica.de/sessions/how-music-can-predict-humanmachine-future

From HAL to Wiimotes and Kinect, musicians have predicted the future of machine/human interaction. Because music connects with time, body, and emotion in a unique way, they test the limits of technology. Now it’s time to work out what comes next.

What’s going on here – how did musicians manage to invent major digital interaction tech before anyone else? Before the iPad, the first commercial multi-touch product was built for musicians and DJs. Before the Wii remote, musicians built gestural controllers, dating back to the early part of the 20th century. Before the moon landing, Max Mathews’ team of researchers taught computers to make music and sing, inspired HAL in 2001: A Space Odyssey, and may have even built the precursor to object-oriented programming. Music’s demands – to be expressive, real-time, and play with others – can test the limitations of technology in a way people feel deeply, and help us get beyond those limitations. Music technologist Peter Kirn will explore the history of these connections, show how those without any background in music can learn from this field, and examine how musicians may be at the forefront again, as they push the boundaries of 3D-printing, data mining, online interaction, embedded hardware, and even futuristic, cyborg-like wearable technology. Even if you can’t hold a tune, you may get a sense of how to get ahead of those trends – before HAL gets there first.


Steinberg Padshop, Coming Soon, Granular Synthesis for the Rest of Us? Handy Intro Video Explains

Let’s get straight to it: granular synthesis – and the various processes based on the principle – is one of the coolest things about making music with computers. With the ability to take sounds and stretch, mangle, and reshape them into new textures, it’s one of the fundamental techniques behind a lot of terrific timbral tricks in sound software.
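For the curious, here’s a generic sketch of that principle in Python/NumPy – emphatically not Steinberg’s implementation, just the textbook move of chopping a sample into short, windowed “grains” and re-scattering them in time. The grain size, grain count, and jitter amount are arbitrary choices for the example.

import numpy as np

def granular_stretch(sample, sr=44100, grain_ms=50, stretch=2.0, n_grains=400):
    # Chop a 1-D float array into windowed grains and lay them back down more slowly.
    grain_len = int(sr * grain_ms / 1000)
    window = np.hanning(grain_len)                       # fade each grain in and out
    out = np.zeros(int((len(sample) - grain_len) * stretch) + 2 * grain_len)
    for i in range(n_grains):
        # The read position advances more slowly than the write position: time stretch.
        read = int((i / n_grains) * (len(sample) - grain_len))
        write = int(read * stretch) + np.random.randint(0, grain_len // 2)  # slight scatter
        out[write:write + grain_len] += sample[read:read + grain_len] * window
    return out / (np.max(np.abs(out)) + 1e-9)            # normalize

pad = granular_stretch(np.random.randn(44100), stretch=4.0)  # a second of noise becomes a long, cloudy texture

Stretch and scatter far enough and the source stops being recognizable at all – which is roughly the territory where granular pads live.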

Of course, explaining it to lay people is a bit of a trick. So that’s why, even before we get into talking about Steinberg’s upcoming Padshop synth, it’s worth watching the first few minutes. Sound designer Matthias Klag explains that coolness really succinctly (and, I think, accurately).

I’m excited to try Padshop. Now, on its surface, I can’t yet see anything radically new in how it works relative to what you get from some of the better Reaktor patches out there. On the other hand, a lot of people aren’t willing to go buy Reaktor just to use those tools. And it seems Steinberg has built something that brings together a traditional synth’s playability with some of the better tools for dialing in far-out granular textures. We’ll get to see it later this month, and then see if this is as big a breakthrough for granular sounds as Steinberg says. But I think it’s worth an early look, nonetheless – if for no other reason than hearing this nice explanation.

And if I get one great pad for a track out of this, count me in. Time to stock up on some Fritz-Kola, in Hamburg’s honor.

Making Digital One-of-a-Kind: Inside Icarus’ Generative Album in 1000 Variations

Even the artwork changes. This is my personal copy – #148.

Digital: disposable, identical, infinitely reproducible. Recordings: static, unchanging.

Or … are they?

Icarus’ Fake Fish Distribution (FFD), a self-described “album in 1000 variations,” generates a one-of-a-kind download for each purchaser. Generative, parametric software takes the composition, by London-based musicians-slash-software engineers Ollie Bown and Sam Britton, and tailors the output so that each file is distinct.

If you’re the 437th purchaser of the limited run of 1000, in other words, you get a composition that is different from the one purchaser 436 received before you and the one purchaser 438 will receive after you. The process breaks two commonly understood notions about recordings: one, that digital files can’t be released as a “limited edition” in the way a tangible object can, and two, that recordings are identical copies of a fixed, pre-composed structure.

Happily, the music is evocative and adventurous, a meandering path through a soundworld of warm hums and clockwork-like buzzes and rattles, insistent rhythms and jazz-like flourishes of timbre and melody. It’s by turns moody and whimsical. The structure trickles over the surface like water, perfectly suited to the generative outline. At moments – particularly with the echoes of spoken word drifting through cracks in the texture – it recalls the work of Brian Eno. Eno’s shadow is certainly seen here, conceptually; his Generative Music release (and notions of so-called “ambient music” in general) easily predicted today’s generative experiments. But Eno was ahead of his time technically: software and digital distribution – both of files and apps – now make what was once impractical almost obvious. (See also: Xenakis, whom the composers talk about below.)

You can listen to some samples, though it’s just a taste of the larger musical environment.

Fake Fish Distribution – version 500 sampler by Icarus…

12 GBP buys you your very own MP3 (320 kbps). Details:
http://www.icarus.nu/FFD/

The creators weigh in on the project for Q Magazine:
Guest column – Electronic band Icarus on whether algorithms can be artists?

The conceptual experiment is all-encompassing. Just to prove the file is “yours,” you can even use it to earn royalties – in theory. As David Abravanel, Ableton community/social manager by day and tipster on this story, writes:

“As a sort-of justification for the price, all Fake Fish Distribution owners are entitled to 50% of the royalties should the music on that specific version ever be licensed. A very unlikely outcome, but at least it’s sticking to concept.”

I spoke with Ollie and Sam, who share a bit about how the mechanism of this musical machine operates. Using Ableton Live and Max for Live, each rendition is “conducted” from threads and variables into a sibling of the others. The pair talk about what that means compositionally, but also how it fits into a larger landscape of music and thought. Of course, you can also go and just experience your own download (first, or exclusively) to let the music wash over you, an experience I also find successful. But if you want to dive into the deep end as far as the theory, here we go.

CDM: How is the generative software put together? What sorts of parameters are manipulated?

Ollie: The basic plan to do the album came before any decision about how to actually realise it, and we initially thought we’d approach the whole thing from a very low level, such as scripting it all in the Beads Java library that has been a pet project of mine for some time. But although we love the creative power of working at a low level, the thought of making an entire album in this way was pretty unappealing. We looked at some of the scripting APIs that are emerging in what you might call the hacker-friendly generation of audio tools like Ardour, Audacity, and Reaper, but these also seemed like too convoluted a way to go about it.

Even though Max for Live was in hindsight the obvious choice, it wasn’t so obvious at the time, because we weren’t sure how much top-down control it provided. (As a matter of fact, one of the hardest things turned out to be managing the most top-level part of the process: setting up a process that would continuously render out all 1000 versions of each track.) Although it was quite elementary and unstable (at the time), [Max for Live] did everything we wanted to do: control the transport, control clips, device parameters, mix parameters, the tempo … you could even select and manipulate things like MIDI elements, although we didn’t attempt that. 

So we made our tracks as Live project files, as you might do for a live set (i.e., without arranging the tracks on the timeline), then set up a number of parametric controls to manipulate things in the tracks. Many of these were just effects and synth parameters, which we grouped through mappings so that one parameter might turn up the attack on a synth whilst turning down the compression attack in a compensatory way. So the parameter space was quite carefully controlled, a kind of composed object in its own right.

We also separated single tracks out into component parts so that they could be parametrically blended. For example, a kick drum pattern could be split into the 1 and 3 beats on the one hand, and a bunch of finer detail patterning on the other, so that you could glide between a slow steady pattern and a fast, more syncopated one. So loads of the actual parameterisation of the music could actually be achieved in Live without doing any programming. Likewise, for many of the parts on the track, we made many clip variations, say about 30, such as different loops of a breakbeat. The progression through those clips — quantised in Live, of course — could also be mapped to parameters.

Finally, by parameterising track volumes and using diverse source material in our clips, we could ultimately parameterise the movement through high-level structures in the tracks. So we could do things like have a track start with completely different beginnings but end up in the same place. We did this in Two Mbiras, which is probably the track where we felt most like we were just naturally composing a single piece of music which just happened to be manifest in a multiplicity of forms. In that sense, this was the most successful track. Some of the other tracks involved more of an iterative approach where we didn’t have a clear plan for how to parameterise the track to begin with, but that fits with our natural approach to making tracks. At one point, we wondered if we could just drop a bank of 1000 different sound effects files into an Ableton track, to load as clips. To our glee, Live just crunched for a couple of seconds and then they were there, ready to be parametrically triggered. So each version of the track MD Skillz could end on a different sound effect.

The Max software consisted of a generic parametric music manager and track-specific patches that farmed out parametric control to the elements that we’d defined in Live. The manager device centred around a master “version dial”, a kind of second dimension (along with time), so you could think of the compositional process as one of composing each track in time-version space.

We used Emmanuel Jourdan’s ej.function object, which is a powerful JavaScript alternative to the built-in Max breakpoint function object, and wrote some of our own custom function generators and function interpolation tools to interact with it. Using the ej.function object, we composed many alternative timelines to control the parameters, and then used the version dial to interpolate smoothly between these timelines, resulting in a very gentle transition between versions. I.e., versions 245 and 246 are going to be imperceptibly different, but versions 124 and 875 will be notably different (we quickly broke from our own rule and started to introduce non-smooth number sequences into some of the tracks, so for example in Colour Field two adjacent versions will actually have quite different structures). We spent some time making it well integrated into Live so that once we really got into the compositional process it would work smoothly and be generally applicable to all of the different ideas we wanted to throw at it. That said, it’s a few steps of refinement from being releasable software.

Pictured: the master controller device, very minimal, just a version dial and a few debug controls. Double clicking on bp_gui leads to the other figure, a multitrack timeline editor, with generative tools for automatically generating timeline data using different probability distributions.
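To make the version dial a little more concrete, here’s a rough Python reconstruction of my own – the breakpoint values, the three-timeline setup, and the parameter being controlled are invented for illustration; the real thing is built in Max for Live around ej.function. Each parameter gets a handful of composed timelines, and the version number interpolates smoothly between adjacent ones, which is why neighbouring versions sound nearly identical while distant ones diverge.

import numpy as np

N_VERSIONS = 1000

def breakpoints_to_curve(points, n_steps=64):
    # Turn a composed breakpoint list [(time, value), ...] into a sampled timeline.
    times, values = zip(*points)
    return np.interp(np.linspace(0, 1, n_steps), times, values)

# Three alternative timelines composed for one parameter (say, a filter macro).
timelines = np.array([
    breakpoints_to_curve([(0, 0.1), (0.5, 0.9), (1, 0.2)]),
    breakpoints_to_curve([(0, 0.8), (0.5, 0.3), (1, 0.8)]),
    breakpoints_to_curve([(0, 0.4), (1, 0.4)]),
])

def timeline_for_version(version):
    # Blend between adjacent composed timelines according to the version dial.
    position = (version / (N_VERSIONS - 1)) * (len(timelines) - 1)
    lower = int(position)
    upper = min(lower + 1, len(timelines) - 1)
    frac = position - lower
    return (1 - frac) * timelines[lower] + frac * timelines[upper]

curve_437 = timeline_for_version(437)   # versions 437 and 438 differ imperceptibly; 124 and 875 do not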

How did you approach this piece compositionally, both in terms of those elements that do get generated, and the musical conception as a whole?

Sam: Since 2005, we had been working a lot in the context of performance, not only as Icarus, but with improvising musicians through our label / collective Not Applicable. This is reflected in the records we put out both as Icarus and individually during that time, which increasingly used generative and algorithmic compositional techniques as structural catalysts for live improvisations. (As Icarus: Carnivalesque, Sylt and All Is For The Best In The Best Of All Possible Worlds. Individually: Rubik Compression Vero, Five Loose Plans, Nowhere, Erase, Chaleur and The Resurfacing Of An Atavistic Trait). Our performance software was made using Max/MSP and Beads and we started by crafting various low level tools that would loop and sequence audio files in various different ways, giving us control parameters that were devised around musical seeds we were interested in exploring.

In many respects, our approach was very similar and partly inspired by Xenakis’ writings in Formalised Music, although the context is obviously very different. These low-level tools were augmented by various hand-crafted MSP processing tools which used generated trajectories and audio analysis as a method of automating the various parameters that affected the sounds themselves, the logic being that an FX unit as a manipulator of sound is in some way loosely coupled to the musical scenario it is contextualised in. In both cases above, the idea was to step back from performance ‘knob twiddling’ by using the computer to simulate specific types of behaviour that would control these processes directly (hence the reason why we have never used controllers in performance).

Our search for different methods of coupling our increasing parameter space led us to develop various higher-level control strategies at Goldsmiths and IRCAM respectively, culminating in autonomous performance systems built in the context of the Live Algorithms for Music Group at Goldsmiths College. The autonomous systems we developed used a battery of different techniques to effect control, from CTRNNs and RBNs to analysis-based sound mosaicing, psychoacoustic mapping and pattern recognition. This work resulted in us being commissioned to put together a suite of pieces for autonomous software in collaboration with improvising musicians Tom Arthurs and Lothar Ohlmeier called “Long Division” for the North Sea Jazz Festival in 2010. The challenge of putting together a 45-minute programme of autonomous music really forced us to think more strategically about how it was possible to structure musical elements within a defined software framework and how they could vary not only within each individual piece, but also from piece to piece.

The most obvious inspiration for how we might do this ultimately came from reflecting on what it is we do when we perform live as Icarus. The experience of working up entirely new live material and touring it without formulating it as specific tracks or compositions proved to be an ideal prototype not only for Long Division, but also ultimately for FFD. Here, in a similar sense to the work of John Cage, large-scale structure and form became a contextually-flexible entity, which meant that for us it became to a far greater extent derived from the idiosyncrasies of the performance software we developed and keyed in by our own specific way of listening out for certain musical structures and responding to them in either a complementary or deliberately obstructive fashion (or perhaps even not at all). Creating these two pieces (‘Long Division’ and ‘All Is For The Best In The Best Of All Possible Worlds’) gave us the conviction that we could devise musical structures that were both detailed enough and robust enough to benefit positively from some level of automated control.

Therefore, when we came to start working on FFD, the main question we had to ask ourselves was: within the music-making practices we had already been working with, what were the tolerances for automation within which we were still ultimately in control of, and ultimately composing, the music we were creating? In the end, the framework we set up was comparatively restrained: the generative aspect of each track was always notated as a performance via a breakpoint function, and therefore able to be rationalised by us; the variation between different versions of the same track was done using interpolation, and is completely predictable and incremental; and finally, the entire space of variation is bounded to 1000 versions, meaning that the trajectories of the variation never extend into some extreme and unrealisable space.

More notes on the album:

Web: http://www.icarus.nu
RSS: feed://www.icarus.nu/wp/feed/

Last.FM: http://www.last.fm/music/Icarus
Discogs: http://www.discogs.com/artist/Icarus+(2)

SoundCloud: http://soundcloud.com/icaruselectronic
Twitter: http://twitter.com/#!/birdy_electric

Myspace: http://www.myspace.com/icaruselectronic
Facebook: http://www.facebook.com/pages/Icarus/132324596558

CREDITS

Music, Software, Scripting – Icarus (Ollie Bown and Sam Britton)
Mastering – Will Worsley, Trouble Studios
Artwork – Harrison Graphic Design

Icarus gratefully thank the following for their support of the FFD project

The PRSF Foundation (UK)
STEIM (Netherlands)
Ableton (Germany)
The University of Sydney (Australia)
Emmanuel Jourdan (France)

Music in Space and Time: Wild Geometries and Sequencing in Iannix, Free

Nerds: It’s an OSC sequencer. It’s JavaScript-programmable for making your own generative music. It works with hardware and other software. You can use it in real-time.

Everyone: it makes spectacularly strange sounds out of spectacularly beautiful flows of geometries through space.

IanniX, the latest-generation descendant of work done by pioneering experimental composer Iannis Xenakis, has been evolving at a rapid pace into what may be the most sophisticated graphical sequencer ever. Xenakis originally had to content himself with drawing elaborate, architectural graphics on paper, before later becoming one of the first to use a graphical tablet for interactive scores. IanniX, backed by the French Ministry of Culture, is now barely recognizable even from more primitive versions that carried the same name. But the idea is the same: graphical geometries represent events in pitch and time, now sequencing other software (any software that can handle OSC or MIDI) to produce sound.

Free on Mac, Windows, and Linux, and now with growing documentation, IanniX can be seen producing the kinds of warped sounds Xenakis made in his music. But it is one of the first steps toward a graphical sequencer that could be used in all kinds of cases. And it’s free and open source under the GPL v3.
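As a tiny, hypothetical example of what “sequencing other software over OSC” looks like on the receiving end, here’s a listener written with the python-osc library that could sit between a graphical sequencer and a synth. The “/trigger” address and its arguments are my invention for the sketch, not IanniX’s documented message format, and the port number is arbitrary.

from pythonosc.dispatcher import Dispatcher
from pythonosc.osc_server import BlockingOSCUDPServer

def on_trigger(address, *args):
    # e.g. args might carry an object id and a value read off the score
    print(f"{address}: {args}")

dispatcher = Dispatcher()
dispatcher.map("/trigger", on_trigger)                     # hypothetical address pattern
dispatcher.set_default_handler(lambda a, *args: print("unhandled:", a, args))

server = BlockingOSCUDPServer(("127.0.0.1", 57120), dispatcher)
server.serve_forever()                                     # wait for messages from the sequencer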

I’ve included some of the recent videos that show off what it can do. I especially like the recursive demo. But since it runs on your OS — well, unless you’re sticking to your beloved Atari ST or BeBox — you can just go grab it yourself.

http://iannix.org/en/index.php

My sense is that IanniX could have implications even beyond this software. Imagine a greater variety of music software that begins to work in spatial and graphical interfaces, not just the traditional piano rolls and linear tape-style arrangement views. And imagine that such tools, using protocols like OSC and MIDI, begin to establish common means of communicating with one another over a network. (OSC and, in particular, MIDI, are in need of some evolution to fully satisfy that. But these kinds of tools might be an ideal way to prod that very evolution.)

Speaking of prodding, thanks to Mark Birchall on Twitter for reminding me to write this up.

Now, if I can just find some hyperspace portal to additional space and time to play with this properly… there must be a productivity jump gate around here somewhere.

Artikulator ‘Finger-Painting’ Synthesizer Inspired By Xenakis, Ligeti

Developers Mike Rotondo and Luke Iannini sent word that they’ve released a new iPad app, Artikulator. Artikulator is a multi-touch finger-painting synthesizer and music toy. But, while Rotondo & Iannini describe the app as a ‘music toy’, it’s also designed to let you explore synthesis in an experimental way. Here’s what the developers have to […]