FIG. 1: The scoreLight system uses a sophisticated laser scanner to sense shapes and control a sound-generating computer, simulating the experience of synesthesia.
I've always been fascinated by the concept of synesthesia, an anomaly of human perception whose name derives from the Greek for "joined sensation." For those who experience it, stimulating one of the five senses—sight, sound, smell, taste, or touch—causes a distinct perception in one or more of the other senses. For example, a synesthete might hear the color red or taste the touch of leather on their skin.
Among the famous historical synesthetes was Russian composer Alexander Scriabin (1872-1915). In an effort to express his own synesthetic experiences, Scriabin often used “light organs” to control beams and clouds of colored light during performances; he even experimented with wafting scents through the audience to coincide with specific moments in the music. Unfortunately (or fortunately, depending on your point of view), he died before completing his magnum opus, Mysterium, a seven-day-long piece to be performed at the foot of the Himalayas in India, after which he believed the world would dissolve in bliss.
Scriabin would have been enthralled with research now being conducted at the University of Tokyo's Ishikawa Komuro Laboratory, where scientists are experimenting with a system called scoreLight that combines sight and sound to simulate synesthesia. Their purpose is not primarily musical, but rather, as they put it, “to research methods for capturing and manipulating information that is normally inaccessible to humans and machines. In doing so, [they] hope to create new ways of perceiving the world and interacting with technology.”
The scoreLight system uses a sophisticated laser scanner with one or more laser diodes, a pair of steering mirrors, and a non-imaging photodetector. The mirrors sweep the laser beam over an area in which users can draw shapes and place objects that reflect different amounts of light back to the photodetector (see Fig. 1). Data from the photodetector are used by one computer to steer the laser beam and by another computer running SuperCollider and Cycling '74 Max/MSP, two sound-synthesis software platforms, to generate sound.
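To get a feel for how a single photodetector reading can steer the beam along the edge of a drawn shape, here is a minimal Python sketch of one plausible control scheme: a bang-bang edge follower that turns the beam toward the dark side when the spot reads bright and toward the bright side when it reads dark, so the spot zigzags along a light/dark boundary. This is an illustration only, not the lab's actual algorithm; the threshold, turn rate, and step size are invented for the example.

```python
import math

def steer(x, y, heading, brightness, threshold=0.5, turn=0.2, step=1.0):
    """One control step of a hypothetical bang-bang edge follower.

    brightness: normalized photodetector reading (0 = dark, 1 = bright).
    If the spot is on the bright side of an edge, turn one way; if on
    the dark side, turn the other. Repeated steps make the laser spot
    zigzag along the boundary of a drawn shape.
    """
    heading += turn if brightness > threshold else -turn
    return x + step * math.cos(heading), y + step * math.sin(heading), heading
```

In the real system the mirror drivers close this loop at very high speed, which is what lets the beam appear to trace a whole contour continuously.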
Several modes of operation are currently under development. For example, the angle of lines can control pitch, generating a melody whose tempo is determined by the perimeter of a closed shape; rotating the image transposes the melody to a higher or lower pitch level. In another mode, pitch is modulated as a function of the curvature of the lines, and abrupt corners are used to trigger specific sounds such as percussion hits. Also, the laser beam can be made to bounce between two lines, creating a rhythmic pattern. The system can even scan three-dimensional objects and encode differences in texture.
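The mappings described above can be sketched in a few lines of Python. This is one plausible interpretation, not the researchers' implementation: the MIDI pitch range and the linear angle-to-pitch mapping are assumptions made for illustration. Note how folding angles into 0–180 degrees means rotating the whole drawing shifts every segment's angle, and therefore every note, by the same amount, which is what transposes the melody.

```python
import math

def segment_angle_deg(p1, p2):
    """Orientation of a drawn line segment, folded into 0-180 degrees."""
    dx, dy = p2[0] - p1[0], p2[1] - p1[1]
    return math.degrees(math.atan2(dy, dx)) % 180.0

def angle_to_midi(angle_deg, low=48, high=84):
    """Map the 0-180 degree range linearly onto an assumed MIDI pitch
    range, so steeper lines play higher notes."""
    return round(low + (angle_deg / 180.0) * (high - low))

def perimeter(points):
    """Perimeter of a closed shape given as a list of vertices; a longer
    outline takes the beam longer to trace, stretching the tempo."""
    return sum(math.dist(points[i], points[(i + 1) % len(points)])
               for i in range(len(points)))
```

For example, a horizontal stroke (0°) would play the lowest note in the range, a 45° stroke a note a quarter of the way up, and rotating the drawing by 30° would raise every pitch by the same interval.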
One of the hallmarks of the scoreLight system is its feedback mechanism. The reflected light and generated sound can be used to control where the laser beam goes, which in turn controls the sound. Another important feature is its translation of light into sound, the exact opposite of how more common systems generate graphic images in response to sound. Of most interest to me is how scoreLight simulates synesthesia, allowing users to hear drawings and see sounds.
The artistic possibilities are endless. For example, the laser could scan a dancer's clothes, converting his or her movements into sounds that correspond to the dance. Certain patterns can be drawn on movable objects, which would then be placed to form a composition, much like the reacTable and d-touch systems profiled in “Tech Page” in the February 2007 and November 2009 issues, respectively. One particularly interesting application would be to use a powerful laser to scan buildings in a cityscape, allowing you to hear what the city looks like. All in all, this is an intriguing technology that I look forward to seeing/hearing in the future.