Evolving Sonic Environment III
with Robert Davis, Psychology Department, Goldsmiths College, London.
Evolving Sonic Environment III is an acoustically coupled analog neural network: a society of devices whose collective behaviour changes in response to the rising or falling pitch that each one detects. In contrast to earlier versions of the project (which operated at much higher frequencies), humans will be able to participate more directly in the adaptation process by making sounds of their own.
Each device can output a rising and/or a falling tone at any one time; if it hears too much of one type of tone, however, it may get 'bored' and slowly modify its behaviour. Alternatively, the devices may all coalesce in an equilibrium where each is 'content' with the state of pitches in the room. This 'contentedness' may be disrupted when humans enter and start making their own sounds, thus perpetuating the evolving acoustic character of the space.
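The 'boredom' dynamic described above can be sketched in code. This is only a toy model under assumed parameters (window size, drift rate are invented for illustration); the actual devices are analog circuits, not software:

```python
import random

class SonicDevice:
    """Toy model of one device's 'boredom' dynamics.
    It tracks how much rising vs. falling tone it has recently heard
    and drifts its own output toward the under-represented direction."""

    def __init__(self, boredom_threshold=0.8, drift=0.05):
        self.preference = 0.0   # -1.0 = always falling, +1.0 = always rising
        self.recent = []        # sliding window of heard tones (+1 rising, -1 falling)
        self.boredom_threshold = boredom_threshold
        self.drift = drift

    def hear(self, tone):
        self.recent = (self.recent + [tone])[-50:]
        bias = sum(self.recent) / len(self.recent)
        # Too much of one tone type -> 'bored': drift output the other way.
        if abs(bias) > self.boredom_threshold:
            self.preference -= self.drift * (1 if bias > 0 else -1)
            self.preference = max(-1.0, min(1.0, self.preference))

    def emit(self):
        # Probability of emitting a rising tone follows the current preference.
        return +1 if random.random() < (self.preference + 1) / 2 else -1
```

A device fed nothing but rising tones drifts toward emitting falling ones; a population of such devices can settle into the 'content' equilibrium mentioned above, or be pushed out of it by new sounds.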
The system will remain active for the entire duration of its installation at the Netherlands Media Art Institute, so there will be many gigabytes of data for analysis; it is hoped this will demonstrate that adaptation has occurred over both short-term and long-term occupancy of the space. If so, there should be correlations between occupancy and acoustic spectrum patterns that may change over the weeks.
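One way such a correlation could be tested is sketched below: a plain Pearson correlation between an hourly occupancy count and the mean energy in one spectral band. The series shown are hypothetical numbers for illustration, not data from the installation:

```python
import math

def pearson(xs, ys):
    """Plain Pearson correlation between two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical hourly series: visitor counts and mean energy in one band.
occupancy = [0, 0, 3, 8, 12, 9, 4, 1]
band_energy = [0.10, 0.11, 0.35, 0.80, 1.10, 0.90, 0.40, 0.15]
r = pearson(occupancy, band_energy)  # near +1 if the band tracks occupancy
```

A value of r near +1 or -1 over many weeks would support the claim that the room's acoustic patterns adapt to how it is occupied.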
Archived description (Evolving Sonic Environment I & II)
The project consists of two embodiments: a society of sonic devices distributed in a room and a mechanism for recording and reviewing the history of the population. It is hoped that the collective behaviours of the devices will be affected by the way that the room is occupied (by people or other mobile objects) and, as such, the room will develop a "perception" of its occupancy. One might say that the society of devices together function as a "people sensor", though there are no "people sensing" functions built into the individual devices.
[Images: sonic "neurons" suspended in space; neurons designed and built by Robert Davis; observing the room's "observation"]
Drawing on the work of Gordon Pask, Donald Hebb and Andrew Adamatzky, the project is an architectural experiment to investigate how one might construct an interactive environment that builds up an internal representation of its occupants through a network of autonomous but communicative sensors.
The "society" of sonic devices
The "society" of sonic devices is distributed regularly but oriented randomly in the space. The devices function like simple neurons: cascading during high activity, altering their thresholds during periods of low activity and becoming apparently "bored" by repetitive inputs. Inputs and outputs consist of high-frequency sound, 14 kHz to 16 kHz, near the upper limit of human hearing; this is necessary to improve the directionality of the sound. The devices are constructed chiefly from analog components and are therefore not "programmed" in the conventional sense to exhibit particular properties. When a device has received sufficient input energy (which depends on its particular input state at the time), it "fires" with a continuous sound of varying frequency.
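The threshold behaviour described above resembles a leaky integrate-and-fire neuron with an adaptive threshold. The following is a minimal sketch under assumed constants (leak and adaptation rates are invented), not a description of the actual analog circuitry:

```python
class ToyNeuron:
    """Leaky-integrator sketch of one sonic 'neuron'.
    Energy accumulates with input and leaks over time; crossing a
    threshold triggers a 'fire', and the threshold itself adapts:
    it rises with activity and falls during quiet spells."""

    def __init__(self, threshold=1.0, leak=0.95, adapt=0.01):
        self.energy = 0.0
        self.threshold = threshold
        self.leak = leak
        self.adapt = adapt

    def step(self, input_energy):
        self.energy = self.energy * self.leak + input_energy
        if self.energy >= self.threshold:
            self.energy = 0.0             # discharge: the device 'fires'
            self.threshold += self.adapt  # recently active -> harder to excite
            return True
        # Low activity -> slowly become more sensitive (with a floor).
        self.threshold = max(0.1, self.threshold - self.adapt / 10)
        return False
```

Sustained input eventually makes such a unit fire and raises its threshold ('boredom' with repetitive input), while prolonged silence lowers the threshold until even faint sounds can trigger a cascade through neighbouring units.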
[Images: device suspended from ceiling; both speaker and mic may be directed as needed; realtime and historical frequency map of the room]
Entering the space, people experience a constantly shifting environment of sound. The individual frequencies employed by each device are particular to that device and vary across the population, as each device strives to find its own equilibrium. By moving his or her head, a person can hear varying maxima and minima of the acoustic waves as the sound outputs produce Tartini (difference) tones and patterns of constructive and destructive interference; this movement also disrupts the direct transmission of sound from one device to another and affects the way that they relate to one another.
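Tartini tones arise when a nonlinearity (such as the ear's) acts on two simultaneous tones, producing a new component at their difference frequency. The sketch below demonstrates this numerically with two tones in the devices' band (the quadratic nonlinearity is a crude stand-in for the ear, chosen for illustration):

```python
import math

def tone_sum(f1, f2, sr=48000, n=4800):
    """Sum of two equal-amplitude sinusoids sampled at sr Hz."""
    return [math.sin(2 * math.pi * f1 * t / sr) + math.sin(2 * math.pi * f2 * t / sr)
            for t in range(n)]

def component_at(signal, freq, sr=48000):
    """Magnitude of the single DFT bin at `freq` (a Goertzel-style probe)."""
    n = len(signal)
    re = sum(s * math.cos(2 * math.pi * freq * t / sr) for t, s in enumerate(signal))
    im = sum(s * math.sin(2 * math.pi * freq * t / sr) for t, s in enumerate(signal))
    return 2 * math.hypot(re, im) / n

# Two device tones near the top of human hearing...
sig = tone_sum(14000, 15000)
# ...passed through a quadratic nonlinearity (crude model of the ear):
distorted = [s * s for s in sig]
# The distorted signal contains energy at the 1 kHz difference tone:
linear = component_at(sig, 1000)        # ~0: no 1 kHz in the raw sum
tartini = component_at(distorted, 1000) # clearly non-zero
```

This follows from the identity 2·sin(a)·sin(b) = cos(a-b) - cos(a+b): squaring the sum of a 14 kHz and a 15 kHz tone yields a 1 kHz component that neither tone contains on its own.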
Observing the room's observation
In order for us, as external observers, to get a glimpse into the changing states of the room (and in order to "observe" the room's "observation") we have two possible points of entry.
1. The most straightforward method is to enter the room and listen to the devices "talking" with each other through high frequencies. However, entering the room affects the communication paths of the devices and therefore alters the internal state of the room, particularly its sonic state. This recalls Heisenberg: our observation of the room from the inside determines the behaviour of that which we are trying to observe; there is no "objective" observation. The high-frequency sound creates varying maxima and minima throughout the room; people obstruct communication paths and interrupt the "conversations" being carried out between devices. Each device has an infrared LED that indicates its current activity level: though not visible to the naked eye, this may be inspected using a mobile phone's camera.
[Image: sound paths are altered by people in the space; their presence may obstruct or absorb the sound, so the neurons fall into changing resonance patterns as they respond to their changing inputs]
2. An alternative method of experiencing the changing states of the room is analogous to an EEG recording of a brain: audio from the population, shifted down 8 octaves in realtime to a comfortable human hearing range, is provided in a second, corresponding room. This includes visualisations of the sound as well as of movement, as sensed by a camera positioned in the other room. It provides a different observation of the room, one that alters depending on how the room is occupied, how frequently, by how many people and in which locations people tend to remain. Of course, a further loop of observation-participation is created when we use our ears to make distinctions on this recorded audio...
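The arithmetic of the octave shift is simple: each octave halves the frequency, so a downward shift of 8 octaves divides by 2^8 = 256, mapping the devices' ultrasonic-adjacent band into the low audible range:

```python
def shift_down_octaves(freq_hz, octaves=8):
    """Each octave halves the frequency, so 8 octaves divides by 2**8 = 256."""
    return freq_hz / 2 ** octaves

# The devices' 14-16 kHz band maps into the low audible range:
low = shift_down_octaves(14000)   # ~54.7 Hz
high = shift_down_octaves(16000)  # 62.5 Hz
```

(The realtime shifting itself requires a pitch-shifting process rather than simple resampling, since playing the audio back 256 times slower would destroy the realtime correspondence; this sketch only shows where the frequencies land.)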
Video documentation, QuickTime: 8 MB file
The history of the combined frequency outputs indicates that the devices settle into different resonance patterns based on how the room is occupied (number of people, length of occupancy, length of emptiness, etc.). This occurs regardless of their orientation; the pattern that emerges when the room has been left unoccupied for a long period (several hours) is shown on the right.
A future version will incorporate other forms of communication (including puffs of air and thermal infrared).
Further information available in this text by Rob Davis.
ESE Part 1 appeared at Threshold '06, E:vent, London, as part of Node.London, March 3-9, 2006.
A more developed version was installed at Emoção Art.ficial - Cybernetic Interfaces, São Paulo, Brazil, July to September 2006. It then appeared at NTT Intercommunication Centre, September to November 2006.