Acoustically-coupled neural network that learns about its environment
Evolving Sonic Environment, a collaboration with Rob Davis, Psychology Department, Goldsmiths College, is an architectural experiment to construct an interactive environment that builds up an internal representation of its occupants through an acoustically-coupled neural network.
There were several versions of the project. ESE Version 1 appeared at Threshold '06, E:vent, London, as part of Node.London, March 3-9, 2006. A more developed version was installed at Emocao Art.ficial - Cybernetic Interfaces, Sao Paulo, Brazil, July-September 2006, and then appeared at NTT Intercommunication Centre, September to November 2006. Version 3 was installed at the Netherlands Media Art Institute in 2007 and later at the Emergent Technologies Pavilion at the 2nd Universal Forum of Cultures in Monterrey.
The project consists of two embodiments: a 'society' of sonic devices distributed in a room, and a mechanism, installed in another room, for recording and reviewing the history of that population of devices. The aim is to show how the collective behaviours of the devices are affected by the way the room is occupied (by people or other mobile objects) and how the room develops a representation or "perception" of its occupancy. One might say that the society of devices functions collectively as a "people sensor", though there are no "people sensing" functions built into the individual devices.
Drawing on the work of Gordon Pask, Donald Hebb and Andrew Adamatzky, the project is an architectural experiment to investigate how one might construct an interactive environment that builds up an internal representation of its occupants through a network of autonomous but communicative sensors.
The "society" of sonic devices are distributed regularly but directed randomly in the space. They function like simple neurons, cascading during high activity, altering their thresholds during periods of low activity and becoming apparently "bored" by repetitive inputs. Inputs and outputs consist of high frequency sound, 14KHz to 16KHz, near the upper limit of human hearing; this is necessary to improve the directionality of the sound. The devices are constructed chiefly from analog components and therefore are not "programmed" in the conventional sense to exhibit particular properties. When they have received sufficient input energy (which depends on their particular input state at the time) they themselves "fire", with a continuous sound of varying frequency.
Entering the space, people experience a constantly shifting environment of sound. The individual frequencies employed by each device are particular to that device and vary across the population, as each device strives to find its own equilibrium. By moving their heads, people can hear varying maxima and minima of the acoustic waves as the sound outputs produce Tartini tones and constructive and destructive interference patterns; but this movement also disrupts the direct transmission of sound from one device to another and affects the way the devices relate to one another.
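The Tartini (difference) tones mentioned above arise because two simultaneous tones at frequencies f1 and f2 produce a perceived combination tone at |f2 - f1|; with device outputs in the 14-16 kHz band, that difference can fall well within the comfortable hearing range. An illustrative calculation (the two frequencies are assumed example values, not measured device outputs):

```python
# Two hypothetical device output frequencies within the
# installation's 14-16 kHz band (Hz).
f1 = 14_200.0
f2 = 15_000.0

# A listener perceives a combination (Tartini) tone at the difference
# frequency, even though both carriers sit near the upper limit of
# human hearing: here, 800 Hz.
difference_tone = abs(f2 - f1)
```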
Observing the room's observation. In order for us, as external observers, to glimpse the changing states of the room (and to "observe" the room's "observation"), we have two possible points of entry.
1. The most straightforward method is to enter the room and listen to the devices "talking" with each other through high frequencies. However, entering the room affects the communication paths of the devices and therefore alters the internal state of the room, particularly sonically. In a reference to Heisenberg, our observation of the room from the inside affects the behaviour of what we are trying to observe; there is no "objective" observation that leaves the observed unaffected. The high-frequency sound creates varying maxima and minima throughout the room; people obstruct communication paths and interrupt the "conversations" being carried out between devices. Each device has an infrared LED that indicates its current activity level; though not visible to the naked eye, it may be inspected using a mobile phone's camera.
2. An alternative method of experiencing the changing states of the room analogises the process of EEG recording of the brain: audio from the population, shifted down 8 octaves in real time into a comfortable human hearing range, is provided in a second, corresponding room. This includes visualisations of the sound, as well as a visualisation of movement as sensed by a camera positioned in the first room. It provides a different observation of the room, one which alters depending on how the room is occupied, how frequently, by how many people, and in which locations people tend to remain. Of course, a further loop of observation-participation is created when we use our ears to make distinctions on this recorded audio...
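The 8-octave downward shift corresponds to dividing each frequency by 2^8 = 256, since each octave halves the frequency; a device tone at 15 kHz, for example, maps to roughly 59 Hz. A minimal sketch of the mapping (the function name is illustrative, and the real installation performs the shift on the audio signal itself, not on isolated frequency values):

```python
def shift_down_octaves(freq_hz: float, octaves: int = 8) -> float:
    """Map a device frequency into human-comfortable range by
    halving it once per octave (8 octaves => divide by 256)."""
    return freq_hz / (2 ** octaves)

# A 15 kHz device tone, near the top of the 14-16 kHz band,
# comes out at about 58.6 Hz in the second room.
low = shift_down_octaves(15_000.0)
```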
The history of the combined frequency outputs indicates that the devices settle into different resonance patterns based on how the room is occupied (number of people, length of occupancy, length of emptiness, etc.). This occurs regardless of their orientation; the pattern that emerges when the room has been left unoccupied for a long period of time (several hours) is shown on the right.
Evolving Sonic Environment III is an acoustically-coupled analog neural network, consisting of a society of devices whose behaviour collectively changes in response to the rising or falling pitches that each one detects. In contrast to earlier versions of the project (which operated at much higher frequencies), humans can participate more directly in the adaptation process by making sounds of their own.
Each device can output, at any one time, a rising and/or a descending tone; however, if a device hears too much of one type of tone it may get 'bored' and slowly modify its behaviour. On the other hand, the devices may all coalesce into an equilibrium in which they are all 'content' with the state of pitches in the room. This 'contentedness' may be disrupted when humans enter and start making their own sounds, thus perpetuating the evolving acoustic characteristics of the space.
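The adaptation dynamic described above can be sketched as follows: each device tracks a running average of the sweep directions it hears and flips its own output when the environment becomes too uniform. This is a toy Python model under assumed parameters (the class name, decay rate, and 'boredom' threshold are all illustrative, not the analog circuits' behaviour):

```python
import random

class PitchDevice:
    """Toy model of an ESE III device: emits rising (+1) or falling (-1)
    sweeps and flips direction when it hears too much of one type of
    tone ('boredom'). Constants are illustrative assumptions."""

    def __init__(self):
        self.direction = random.choice([-1, +1])
        self.bias = 0.0  # running average of heard sweep directions

    def listen(self, heard_direction: int) -> None:
        # Exponential moving average of what the device hears.
        self.bias = 0.9 * self.bias + 0.1 * heard_direction
        # If the room is saturated with sweeps matching our own
        # direction, get 'bored' and flip.
        if self.bias * self.direction > 0.8:
            self.direction = -self.direction
```

A population of such devices listening to one another can settle into a 'content' equilibrium; a human voice, heard as an extra stream of rising or falling pitch, perturbs each device's running average and re-starts the adaptation.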
The system remained active for the entire duration of its installation at the Netherlands Media Art Institute, generating many gigabytes of data for analysis. The data showed that adaptation occurred over both short-term and long-term occupancy of the space, and that there were correlations between occupancy and acoustic spectrum patterns that changed over the weeks.