Computer-generated music has been an interest of mine since dismissing it whilst at university in 1999, where I took a course called AI and Creativity with the lovely and inspiring Maggie Boden OBE. However, despite the clever pseudo-music and clichés of GarageBand and Bayesian networks, I find the attempts at human-sounding music depressingly emotionless; their lack of lyrics and lyricism removes them even further from the song machines of Orwell's 1984.
But I have always been fascinated by the idea (shared by Kandinsky, Klee, and Victor Pasmore, to name three of my favourites) that paintings and images could be rendered as music, so when I came across the extremely simple Lindenmayer system, the musical application was obvious and irresistible, and the first prototype was created within a day.
The appeal of the equation, generally used to model plant growth, is its simplicity, as this rough graphing demo illustrates. Without simulating varying soil, light, and weather conditions, the growth is unnaturally regular, but the instantly familiar shapes produced by rules of as few as four bits encourage me to think Lindenmayer and co. were closer than anyone to discovering some secrets of creation. But that's another subject.
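That simplicity is easy to show. The core of any L-system is parallel string rewriting, which fits in a few lines of JavaScript; the grammar below is Lindenmayer's original two-symbol algae system, used here purely as an illustration:

```javascript
// Minimal L-system string rewriter: each generation, every symbol is
// replaced by its production rule simultaneously (symbols with no rule
// are copied through unchanged).
function rewrite(axiom, rules, generations) {
  let s = axiom;
  for (let i = 0; i < generations; i++) {
    s = [...s].map(ch => rules[ch] || ch).join('');
  }
  return s;
}

// Lindenmayer's original algae grammar: A -> AB, B -> A
const algae = { A: 'AB', B: 'A' };
console.log(rewrite('A', algae, 4)); // "ABAABABA"
```

Feed the output string to a turtle-graphics interpreter and the familiar branching shapes appear; feed it to a note mapper and you get the 'music'.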
As for the 'musical' output, it depends on your taste, and on how you imagine a plant might sound. To my ear, it is often reminiscent of Terry Riley's dervish music, and thus of Acton's own The Who's tribute to Terry Riley (and Avatar Meher Baba), and thus of Talking Heads' Once In A Lifetime.
My current system is a mixture of HTML5 and MooTools (for ease of OO), with the binary MIDI files encoded by old-fashioned Perl with Moose (using the MIDI modules kindly donated to the world by Sean M Burke). The resulting MIDI file is then imported into Logic Pro and mastered through a Yamaha 01X and a variety of synths and samplers.
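The step from L-system string to MIDI can be sketched like this, though the actual note mapping in my system differs; here, as one plausible assumption, turtle-graphics "turn" symbols are reinterpreted as pitch steps and "draw" symbols as sounded notes:

```javascript
// Hypothetical mapping from an L-system string to MIDI note numbers.
// The symbol meanings (F = sound a note, + = step up, - = step down)
// are illustrative assumptions, not the pipeline's actual rules.
function stringToNotes(s, { root = 60, step = 2 } = {}) {
  const notes = [];
  let pitch = root;             // start at middle C (MIDI 60)
  for (const ch of s) {
    if (ch === '+') pitch += step;         // turn right: pitch up
    else if (ch === '-') pitch -= step;    // turn left: pitch down
    else if (ch === 'F') notes.push(pitch); // draw forward: emit a note
  }
  return notes;
}

console.log(stringToNotes('F+F+F-F')); // [60, 62, 64, 62]
```

The resulting array of note numbers is what would then be serialised to a binary MIDI file (in my case, by the Perl side of the pipeline).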
However, my work has been moving towards landscapes of
L-system 'plants', which would push my current paradigm to its limit,
so a rewrite in Java is planned for next year.
This would also allow live alteration of a loop, which could
be interesting for performance.
In the past year or so, I have been focusing my free time on using and improving the Node/HTML5 L-systems audio described above. Personal projects include Freedharma (Backbone); an SPWA for creating video subtitle files in real time (Angular); and Plonk, a (WebSockets/Audio) rip-off of Plink, the collaborative music-doodling tool, a bit like the silly brush toy you can see in the Zen section above. From the server's point of view it's just a chat server; for the client it is a kind of musical Etch A Sketch. It does demonstrate the power of WebSockets and the weaknesses of the HTML5 canvas element. Perhaps I'll write an article. I'd let you see it, but what ISP supports WebSockets?
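"Just a chat server" really is the whole server-side story: every incoming message is relayed to every other connected client, which then interprets it musically. A framework-agnostic sketch of that fan-out (the client objects and their `send()` method stand in for real WebSocket connections, e.g. from the 'ws' package):

```javascript
// Relay a message from one client to all the others: the entire
// server-side logic of a collaborative doodler like Plonk.
function broadcast(clients, sender, message) {
  for (const client of clients) {
    if (client !== sender) client.send(message);
  }
}

// Fake clients to demonstrate the fan-out without a real socket server.
const received = [];
const fake = id => ({ id, send: msg => received.push(`${id}:${msg}`) });
const clients = [fake('a'), fake('b'), fake('c')];

broadcast(clients, clients[0], 'note:60');
console.log(received); // ["b:note:60", "c:note:60"]
```

All the interesting behaviour, such as which note a message triggers and how it is drawn, lives in the client.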
When Mozilla eventually and completely fixes their HTML5 audio looping bug, I hope to continue development of a simple HTML5 multitrack editor that allows the mixing and looping, cutting and pasting, of SoundCloud and other online audio files, in the style of Logic/Cubase.
Enso 2015-07-04/01 was created in Photoshop using a MacBook Pro trackpad. Photoshop is not just about airbrushing products and models.
Avoid liability: why not encrypt them and have the client store them, for later display and stateless use of the server? Probably because all forms of local storage are an out-of-sync mess (no pun intended). Perhaps Mozilla's localForage is the answer? Though not through its messed-up Backbone 'driver'.
Ported the core of my L-system work to use Browserify through CommonJS — much nicer API than AMD/RequireJS. Not sure I'm keen on exposing node_modules, though.
As a base for the ongoing habitat soundscape program, I've AMD-wrapped my PCM visualiser of some years ago, and given it a place in Bower. Mainly so I could easily subclass it to add a spectrograph, too. GitHub.
This version of Conway's Game of Life works in 3D, but I've not yet had time to find some good seeds.
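In 3D, each cell has 26 Moore neighbours rather than 8, and the survival/birth thresholds that give interesting behaviour differ from Conway's 2D rules (several 3D variants exist; which thresholds my version uses is not specified here). The neighbour count at the heart of any such rule can be sketched as:

```javascript
// Count the 26 Moore neighbours of a cell in a sparse 3D grid,
// where live cells are stored as "x,y,z" strings in a Set.
function liveNeighbours(grid, x, y, z) {
  let n = 0;
  for (let dx = -1; dx <= 1; dx++)
    for (let dy = -1; dy <= 1; dy++)
      for (let dz = -1; dz <= 1; dz++) {
        if (dx === 0 && dy === 0 && dz === 0) continue; // skip the cell itself
        if (grid.has(`${x + dx},${y + dy},${z + dz}`)) n++;
      }
  return n;
}

// A tiny three-cell seed (coordinates are arbitrary examples).
const grid = new Set(['0,0,0', '1,0,0', '0,1,0']);
console.log(liveNeighbours(grid, 0, 0, 0)); // 2
```

Sweeping this count over all cells adjacent to a live one, then applying the chosen birth/survival thresholds, produces the next generation.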
I'm curious if the shapes can make interesting sounds.
(Drag, click-click, to control pan and zoom.)