Tag Archives: ableton

Baker’s Yeast Sonification

I’m currently completely wound up stressing about my degree show – which is on the 19th June here, for anyone who wants to come (it’s entirely public) – but I managed to fit in an entry to this competition. The task was to sonify the coding sequence of a gene taken from Baker’s Yeast. Very interesting. Unfortunately, some sort of technical problem means that my entry hasn’t appeared on their website yet :-/ I’ve made two versions: one is less manipulated; the second uses extra processing of MIDI signals to modulate parameters – feat. jiggerypokery by Fred Baker.

Version 1

[Audio clip]

Version 2 (feat. Jiggerypokery by Fred)

[Audio clip]

Here’s my description of the concept:

This piece explores the intricate thought required to comprehend how such a simple representation – using the letters A, T, G & C as symbols – can actually contain the instructions for life, any life, to exist (even something as humble as Baker’s yeast). Taking the coding sequence as an independent entity, I’m trying to expose its inherent simplicity, while using sonic aesthetics to hint at the complexity it implies.

… and the method:

I recorded myself speaking A, T, G & C. Then I wrote a simple program to send MIDI messages corresponding to the coding sequence into Ableton Live, where the sequence was recorded, forming the core of the piece.
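The program itself isn’t included in this post, but the idea is simple enough to sketch. Here’s a minimal Python version using the mido library – the note mapping and timing below are made up for illustration, not the values used in the piece:

```python
# Minimal sketch: play a DNA coding sequence as MIDI notes.
# Requires mido (pip install mido python-rtmidi). The note
# mapping and timing are illustrative, not the piece's actual values.
import time
import mido

SEQUENCE = "ATGAGTAGTTTGTGTCTACAG"  # opening of the coding sequence below

# Arbitrary mapping of the four bases onto four pitches.
NOTE_FOR_BASE = {"A": 57, "T": 60, "G": 64, "C": 67}

with mido.open_output() as port:      # default MIDI output port
    for base in SEQUENCE:
        note = NOTE_FOR_BASE[base]
        port.send(mido.Message("note_on", note=note, velocity=100))
        time.sleep(0.25)              # one base per quarter-second
        port.send(mido.Message("note_off", note=note))
```

Point Ableton Live at the same MIDI port and record the incoming notes onto a track, and you have the raw sequence to work with.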

Post-production involved manipulating the pitch and timing of the sequenced samples. An additional percussion track and effects sends add depth. Plogue Bidule was used to manipulate the MIDI signals that modulate parameters in Ableton Live.
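Bidule is a patching environment rather than code, but the kind of MIDI manipulation involved is easy to illustrate in Python with mido again – a slow control-change stream that Live can map onto any device parameter (the CC number, rate and depth here are made up):

```python
# Illustration of MIDI-driven parameter modulation: a slow sine
# wave sent as MIDI CC messages, which a DAW like Live can map to
# any device parameter. CC number, rate, and depth are invented.
import math
import time
import mido

CC_NUMBER = 1  # mod wheel; mappable to a parameter in Live

with mido.open_output() as port:
    start = time.time()
    while time.time() - start < 10:          # run for ten seconds
        phase = (time.time() - start) * 0.5  # 0.5 Hz LFO
        value = int(63.5 + 63.5 * math.sin(2 * math.pi * phase))
        port.send(mido.Message("control_change",
                               control=CC_NUMBER, value=value))
        time.sleep(0.02)                     # ~50 updates per second
```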

The tempo quickens over the length of the piece, building to a crescendo at the end.

This is the coding sequence that I used:

ATGAGTAGTTTGTGTCTACAGCGTCTTCAGGAAGAAAGAAAAAAATGGAGAAAGGATCATCCATTTGGATTTTATGCCAAACCAG
TTAAGAAAGCTGATGGGTCCATGGATTTACAGAAATGGGAAGCTGGTATCCCAGGCAAAGAAGGTACAAACTGGGCGGGTGGTGT
GTACCCAATTACAGTCGAATATCCAAATGAATATCCTTCAAAACCTCCAAAGGTTAAATTTCCAGCCGGATTTTATCATCCAAAC
GTGTATCCAAGTGGCACAATATGTTTAAGTATTTTAAATGAAGATCAAGATTGGAGACCCGCCATCACGTTAAAACAAATTGTTC
TTGGGGTTCAGGATCTTTTAGACTCTCCAAATCCAAATTCCCCTGCTCAAGAGCCTGCATGGAGATCATTTTCAAGAAATAAGGC
GGAATATGACAAGAAAGTTTTGCTTCAAGCTAAACAGTACTCTAAATAG

Sonification is really exciting. Hope you enjoy it.

Data Sonification

I’ve been thinking about sonification (it has a definition on Wikipedia, even if it isn’t a real word) of data for some time. My first experiment used this blog: a hidden iFrame on the main page opened a page on a web server running off my laptop, and those requests were then converted to audio and played out of the laptop’s speaker. I’m interested in the concept of listening to data as a whole, and web / internet data in particular is especially intriguing. Similarly, I’ve recently become inspired by autonomous artworks – things that do their own thing without intervention, and better still, do something entirely unpredictable.

My research has progressed: rather than using a simple PC speaker, my software now outputs MIDI information, which can then be interpreted by all manner of other software – or even hardware synthesisers – to actually turn the data into sound. I’ve also stopped using this blog as the data source; I’ve obtained a month’s worth of web server log data, giving me about 7 million records for my data set. Because of the way the software processes data, the amount of time it takes to “play” equals the amount of time the data took to collect – so the month’s worth of data I’m currently using would take a month to listen to. Here are a few excerpts from my current set-up.
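That one-to-one timing is just the replay loop sleeping out the real gap between consecutive log records. A rough Python sketch (the actual software is written in VB, and I’m assuming Apache-style common log lines here, which may not match the real data set’s format):

```python
# Sketch of the replay loop: consecutive log records are spaced in
# playback exactly as they were in real life, so a month of logs
# takes a month to play. Assumes Apache-style common log lines.
import time
from datetime import datetime

def parse_line(line):
    """Return (ip, timestamp) from an Apache common-log line."""
    ip = line.split(" ", 1)[0]
    stamp = line.split("[", 1)[1].split("]", 1)[0]  # e.g. 10/Jun/2009:14:31:02 +0100
    when = datetime.strptime(stamp, "%d/%b/%Y:%H:%M:%S %z")
    return ip, when

def replay(path, handle_request):
    """Walk the log, waiting out each real-world gap between hits."""
    previous = None
    with open(path) as log:
        for line in log:
            ip, when = parse_line(line)
            if previous is not None:
                gap = (when - previous).total_seconds()
                time.sleep(max(gap, 0))  # wait out the real gap
            handle_request(ip)           # e.g. send a MIDI note for this hit
            previous = when

# Usage: replay("access.log", lambda ip: print(ip))
```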

Example 1

[Audio clip]

Example 2

[Audio clip]

Example 3

[Audio clip]

Example 4

[Audio clip]

The “rules” that the software adheres to are as follows:

  • MIDI note is determined by adding together the four segments of the client’s IP address, then dividing by 8. The reasoning: each segment ranges from 0 to 255, so the sum ranges from 0 to 1020, and integer-dividing by 8 maps that onto 0–127 – always a valid MIDI note, since MIDI defines 128 possible notes (see the sketch after this list).
  • Length of note is determined by the length of time between web requests. This means that in busy periods the software produces lots and lots of notes, whereas at quiet times (in the middle of the night) very few notes are played.
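Here’s the note rule as a quick Python sketch (the real implementation lives in my VB code; `note_for_ip` is just my name for it here):

```python
# Sketch of the note rule: each of the four IP segments is 0-255,
# so their sum is 0-1020; integer-dividing by 8 gives 0-127, which
# is always a valid MIDI note.
def note_for_ip(ip):
    return sum(int(octet) for octet in ip.split(".")) // 8

assert note_for_ip("255.255.255.255") == 127  # largest possible sum
assert note_for_ip("0.0.0.0") == 0            # smallest possible sum
print(note_for_ip("192.168.1.10"))            # -> 46
```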

In time I’d like to develop the system further, exploring other aspects of the data as parameters with which to modulate the synthesis. One idea is to look at the geographical location of the client and alter the sound accordingly in some way – this could work very well with a multiple-speaker set-up. Plugging the system into live log data would also be really exciting.

A further development would introduce a visual element to the software, illuminating various screen sections according to the same IP address processing as the audio. I haven’t looked into that yet, though! It would probably mean transferring the MIDI processing code from VB into Processing; no bad thing, methinks.