It is now possible to hear images.
The story so far: The National Aeronautics and Space Administration (NASA) launched the James Webb Space Telescope (JWST) on December 25, 2021, and the first set of images taken by the powerful telescope was released in July 2022.
Ever since then, space enthusiasts have been in awe of the clarity and range of images taken by the JWST.
Now, NASA is adding another layer to the experience through the sonification of the images and data gathered by JWST. “Music taps into our emotional centres. Our goal is to make Webb’s images and data understandable through sound – helping listeners create their own mental images,” Matt Russo, a musician and physics professor at the University of Toronto, was quoted as saying.
How does sonification of images and data work?
Our first example is that of the Cosmic Cliffs in the Carina Nebula.
JWST captured a stunning image of the Carina Nebula, which is 7,600 light years away. The JWST image is far clearer than any image captured previously.
According to NASA, the images are scanned from left to right. The image itself gives the appearance of a mountain range, complete with cliffs and valleys against a bright, moonlit sky. In reality, though, the image shows the edge of a huge, gaseous cavity within a star-forming region called NGC 3324 in the Carina Nebula.
The soundtrack representing the Cosmic Cliffs is vibrant, in sync with the image itself, which depicts a bright landscape, the project description says. The top half of the image – mostly gas and dust – is represented through blue hues and more drone-like sounds. The bottom half – in shades of red and orange – is represented through comparatively clearer, more melodic sounds. Brightness sets loudness: the sound becomes louder for brighter parts of the image. A spot's vertical position, meanwhile, sets its pitch: a bright spot placed higher in the image is represented by a higher-frequency sound than an equally bright spot placed lower.
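The mapping described above can be sketched in a few lines of code. This is a minimal illustration, not NASA's actual pipeline: the function name, the frequency range, and the linear scaling are all assumptions made here for clarity. It shows the two rules the article describes — brighter pixels play louder, and pixels higher in the image play at a higher pitch.

```python
def pixel_to_sound(row, brightness, n_rows, f_min=100.0, f_max=1000.0):
    """Map one image pixel to a (frequency_hz, amplitude) pair.

    row:        the pixel's row index, with 0 at the top of the image
    brightness: normalized brightness in [0, 1]
    n_rows:     total number of rows in the image
    f_min/f_max: illustrative audio frequency range in Hz
    """
    # Higher in the image (smaller row index) -> higher frequency.
    height_fraction = 1.0 - row / (n_rows - 1)
    frequency_hz = f_min + height_fraction * (f_max - f_min)
    # Brighter pixel -> louder sound.
    amplitude = brightness
    return frequency_hz, amplitude
```

Scanning the image left to right, as NASA describes, would then amount to applying this mapping column by column and mixing each column's tones together.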
A second example of sonification is that of exoplanet WASP-96 b.
JWST observed the atmosphere of the gaseous exoplanet, which shows clear signatures of water. In the data, the vertical axis represents the amount of light blocked, while the horizontal axis shows the wavelength of light, increasing in intervals of 0.25 microns. Longer wavelengths of light are heard as lower pitches because they have lower frequencies. The volume of the sound represents the amount of light detected at each data point, as can be heard in the accompanying video. The water signatures are represented by the sound of falling droplets.
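The spectrum mapping can be sketched the same way. Again, this is only an illustration under assumptions of this sketch — the reference wavelength and pitch are arbitrary values, not NASA's — but it captures the two rules the article states: audio pitch tracks the light's frequency (so longer wavelengths sound lower), and the amount of light detected sets the volume.

```python
def spectrum_point_to_tone(wavelength_um, light_level, ref_um=0.6, ref_hz=880.0):
    """Map one spectrum data point to a (pitch_hz, volume) pair.

    wavelength_um: wavelength of the data point, in microns
    light_level:   normalized amount of light detected, in [0, 1]
    ref_um/ref_hz: reference wavelength and the pitch assigned to it
                   (illustrative values chosen for this sketch)
    """
    # Light's frequency is proportional to 1/wavelength, so a longer
    # wavelength maps to a lower audio pitch.
    pitch_hz = ref_hz * (ref_um / wavelength_um)
    # More light detected -> louder tone.
    volume = light_level
    return pitch_hz, volume
```

Stepping through the spectrum in 0.25-micron intervals, as the article describes, would play the data as a descending-then-varying sequence of tones whose loudness rises wherever more light was detected.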
NASA believes that visually impaired people will benefit the most from this sonification project.
“These compositions provide a different way to experience the detailed information in Webb’s first data. Similar to how written descriptions are unique translations of visual images, sonification also translates the visual images by encoding information, like colour, brightness, star locations, or water absorption signatures, as sounds. Our teams are committed to ensuring astronomy is accessible to all,” Quyen Hart, senior education and outreach scientist at the Space Telescope Science Institute in Baltimore, Maryland, was quoted as saying.
Are these sounds recorded in space?
These sounds are, obviously, not recorded in space. Mr. Russo collaborated with musician Andrew Santaguida to map the data collected by JWST to sound, “composing music to accurately represent details the team would like listeners to focus on”.