By Ilona Kovacs

Complexity & Emergence

By focusing my research on psycho-ecoacoustics, this exploration will uncover connections between the auditory experience of hearing-abled persons and environmental awareness in the practices of surviving, protecting, and appreciating the environment. Recordings of ecological soundscapes in the alpine regions of Colorado will supply examples for clear comparisons between, and at the intersections of, the natural and built environments. Clearing the clips of the man-made noise cluttering them will recover a pre-Anthropocene environmental context that deepens our understanding of the relationship between ecological acoustics and psychological effects. Making these anthrophony-free soundscapes accessible with nothing more than an internet connection removes barriers that keep people from connecting with nature, which in turn can benefit general wellness.

Man-made noise, meanwhile, as we invade ever more natural spaces, is a wildlife and ecosystem killer. As Les Blomberg, founder of the Noise Pollution Clearinghouse, put it: “What we’re doing to our soundscape is littering it. It’s…acoustical litter—and, if you could see what you hear, it would look like piles of McDonald’s wrappers, just thrown out the window as we go driving down the road.” [1]

Much like the brain completes visual stimuli based on previously collected information, auditory experience can take on hallucinatory qualities through signal processing in both the brain and the sensing organs. Visual and aural senses can become cluttered; however, unwanted noise can be effectively eliminated with audio balancing.[2] One example of this practice is suppressing tinnitus symptoms with additional sound, such as environmental soundscapes or white noise.


ADAPTIVE & GENERATIVE AUDIO

Using the field information collected about temperature, barometric pressure, elevation, and the actual time frames to frame these soundscapes gives rise to an adaptive audio approach, bringing a real-life environmental awareness closer to the kind found in video games. “To date, the development of adaptive music systems (AMSs) for video games is limited both by the nature of algorithms used for real-time music generation and the limited modeling of player action, game-world context, and emotion in current games,”[3] leaving generative music, outside of experimental trials, with almost no casual listenability beyond the gaming industry.
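The adaptive framing described above can be sketched as a simple mapping from field metadata to relative gains for soundscape layers. This is a minimal, hypothetical illustration: the layer names, thresholds, and mapping curves are my own assumptions, not the project's actual model.

```python
def layer_gains(temp_c: float, elevation_m: float, hour: int) -> dict:
    """Map field conditions to relative gains (0..1) for soundscape layers.

    Illustrative assumptions: higher, colder sites favor wind layers;
    dawn/dusk hours favor birdsong; warmth favors snowmelt streams.
    """
    wind = min(1.0, max(0.0, (elevation_m - 1500) / 2500))
    bird = 1.0 if 5 <= hour <= 9 or 17 <= hour <= 20 else 0.3
    water = max(0.0, min(1.0, temp_c / 25))
    return {"wind": round(wind, 2), "birdsong": bird, "water": round(water, 2)}

# A cool dawn at 3,200 m: strong wind layer, full birdsong, moderate water.
print(layer_gains(temp_c=12.0, elevation_m=3200, hour=6))
# → {'wind': 0.68, 'birdsong': 1.0, 'water': 0.48}
```

In a real adaptive music system these gains would crossfade continuously as the sensors update, rather than being computed once.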

Auditory signifiers of sudden changes in temperature and barometric pressure in alpine regions can assist decision-making during real-time exploration. This project aims to model these cues with data collected during short-term field expeditions to capture psycho-ecoacoustic soundscape recordings in high-altitude areas around Colorado. Additional cues could be provided by adapting user-made playlists to speed prompts for more extreme outdoor activities like skiing and mountain biking, relating topographic changes to tempo adjustments, along with noise signifiers for speed checks, air-quality index, wildlife encounters, and natural-disaster zones, though that may lie beyond the scope of this project’s timeline.
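One way such a cue could be triggered is by watching for a rapid barometric drop, a rough storm-front heuristic. The sketch below is a hypothetical illustration with an assumed threshold; the project's actual cue model would be derived from the field data.

```python
from collections import deque

def pressure_alerts(readings_hpa, window=6, drop_threshold=3.0):
    """Return indices where pressure falls more than drop_threshold hPa
    below the maximum of the trailing window of readings.

    The 3 hPa threshold and 6-sample window are illustrative assumptions;
    at an alert index, the app would play an auditory warning cue.
    """
    alerts = []
    recent = deque(maxlen=window)
    for i, p in enumerate(readings_hpa):
        if recent and max(recent) - p > drop_threshold:
            alerts.append(i)
        recent.append(p)
    return alerts

# A steadily falling front: alerts fire once the drop exceeds 3 hPa.
readings = [1013.2, 1012.8, 1012.5, 1011.0, 1009.5, 1008.0]
print(pressure_alerts(readings))  # → [4, 5]
```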

With the popularity of headphones in modern society, human listening has been extensively explored for commercialization and the emergence of immersive listening. More directly, one mainstream movement is finding ways to reproduce the natural ambisonic experience in the convenience and portability of everyday Bluetooth headphones. Additional leaps in the headphone industry include head tracking, equalization, and sound-scene decomposition to best render natural sounds for fully immersive listening.[4] Innovations continue to emerge in attempts to eliminate user interaction with devices altogether, through sound-beaming technologies and the syncing of ambient noise to moods, often for productivity purposes.[5]

Applications leading the movement toward fully integrating biometric and psychometric data into generative AI music include Endel and Weav, both of which adjust song tempos to incoming information such as weather, location, heart rate, and step count. Endel more specifically experiments with sound-masking techniques that play off the existing distracting sounds, or noise pollution, in one’s environment, composing endless responsive soundscapes of music with artificial intelligence (AI) to increase wellness. Weav instead syncs a user’s workout to music that already exists, boosting motivation in activities like running and skiing.[6] The ski-coaching application Carv advances these techniques in mainstream ways with a device that attaches to a user’s ski boot, monitoring pressure and motion to enhance the customization of audio feedback. These innovations are producing adaptive options beyond auditory response, including integration with shoes that deliver workout-motivating vibrations, much as an Apple Watch can remind you to stand.


AI SOUNDSCAPE EDITING

While there are many approaches to using AI to adapt audio, the key capabilities for this research project are sound editing and sorting. On the everyday-usage scale, the cell-phone application Warblr is already fine-tuning the AI needed to quickly and conveniently identify birds in the UK by their songs, using crowdsourced data on an interface similar to Shazam’s.[7] This project will use frequency-detection algorithms to trigger recording devices to save specific, relevant sound clips over long periods of time, improving the overall usability of the captured content. By focusing on notable pieces of sonic information, the effects of psycho-ecoacoustics can contribute more strongly to the human tendency to assign value to nature.[8] In recent news, the combination of acoustic indices and machine learning has been used in marine recordings to monitor coral-reef health with sound,[9] showing that even ecosystems seemingly impossible to hear with the average-abled, naked ear can be tuned for human auditory stimulation and response.
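A frequency-based recording trigger of the kind described can be built around the Goertzel algorithm, a cheap single-bin DFT well suited to low-power field recorders. The sketch below is a minimal, hypothetical illustration: the 3 kHz target (roughly in the birdsong range) and the decision rule are my assumptions, not the project's deployed detector.

```python
import math

def goertzel_power(samples, sample_rate, target_hz):
    """Signal power near target_hz via the Goertzel algorithm.

    Cheaper than a full FFT when only one frequency band matters, e.g.
    deciding whether a buffered field-recording clip is worth saving.
    """
    n = len(samples)
    k = round(n * target_hz / sample_rate)      # nearest DFT bin
    coeff = 2 * math.cos(2 * math.pi * k / n)
    s_prev = s_prev2 = 0.0
    for x in samples:
        s = x + coeff * s_prev - s_prev2
        s_prev2, s_prev = s_prev, s
    return s_prev2 ** 2 + s_prev ** 2 - coeff * s_prev * s_prev2

# A pure 3 kHz test tone scores far higher at 3 kHz than at 500 Hz,
# so a threshold on this ratio could trigger the recorder to save a clip.
sr = 16000
tone = [math.sin(2 * math.pi * 3000 * n / sr) for n in range(1024)]
print(goertzel_power(tone, sr, 3000) > 100 * goertzel_power(tone, sr, 500))
```

In deployment, this check would run on short rolling buffers, saving the buffer to storage only when the target band's power crosses a calibrated threshold.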

By combining natural soundscapes with auditory signifiers cued to temperature, barometric pressure, and other conditions, framed by the AI-curated audio clips, environmental awareness can be sensorially stimulated in the ways presented: survival, protection, and appreciation.

[1] Beth McGroarty, “Wellness Music,” 2020 Wellness Trends, from the Global Wellness Summit (Global Wellness Summit, January 29, 2020), https://www.globalwellnesssummit.com/2020-global-wellness-trends/wellness-music/.

[2] Steve Goodman et al., “The Auditory Hallucination,” in Audint Unsound: Undead (Falmouth, United Kingdom: Urbanomic, 2019), pp. 109-112.

[3] Patrick Edward Hutchings and Jon McCormack, “Adaptive Music Composition For Games,” IEEE Transactions on Games 12, no. 3 (September 2020): pp. 270-280, https://doi.org/10.1109/tg.2019.2921979.

[4] Kaushik Sunder et al., “Natural Sound Rendering for Headphones: Integration of Signal Processing Techniques,” IEEE Signal Processing Magazine 32, no. 2 (March 2015): pp. 100-113, https://doi.org/10.1109/msp.2014.2372062.

[5] Ilona Kovacs, “Designed by IKO,” Designed by IKO (blog), March 15, 2022, https://ilonako.wixsite.com/portfolio/post/music-delivery-revolutions.

[6] Beth McGroarty, “Wellness Music,” 2020 Wellness Trends, from the Global Wellness Summit (Global Wellness Summit, January 29, 2020), https://www.globalwellnesssummit.com/2020-global-wellness-trends/wellness-music/.

[7] Matthew Hutson, “Watch out, Birders: Artificial Intelligence Has Learned to Spot Birds from Their Songs,” Science, July 18, 2018, https://doi.org/10.1126/science.aau8247.

[8] Almo Farina, “Ecoacoustics: A Quantitative Approach to Investigate the Ecological Role of Environmental Sounds,” Mathematics 7, no. 1 (December 26, 2018): pp. 1-16, https://doi.org/10.3390/math7010021.

[9] Ben Williams et al., “Enhancing Automated Analysis of Marine Soundscapes Using Ecoacoustic Indices and Machine Learning,” Ecological Indicators 140 (July 2022): p. 108986, https://doi.org/10.1016/j.ecolind.2022.108986.
