 Portfolio: Toshihisa Tsuruoka 

Audile's Dream

A Piece for Voice, Guitar, and Electronics

Composed by Toshihisa Tsuruoka

Poem by Ashley Muniz

Consider what it would be to dream in sound—sound sculpting a world and navigating you through its environment; words narrating your bodily presence and surrounding stimuli. Through the fusion of an electroacoustic sound environment and abstract poem, this composition provides a gateway into a surreal world. It is your imagination that will dictate the final shape of this dream.

The graphic score was animated in Max/Jitter to move in time with the fixed media track, allowing performers to be in sync with each other.
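The core of that synchronization is a mapping from elapsed playback time to a horizontal position on the score image. A minimal Python sketch of that idea (the function name and the fixed-width assumption are illustrative, not from the original Max/Jitter patch):

```python
def score_offset(elapsed_s: float, duration_s: float, score_width_px: int) -> float:
    """Map elapsed playback time to a horizontal pixel offset on the score.

    Assumes the score scrolls at a constant rate across a fixed-width image,
    in lockstep with a fixed-media track of known duration.
    """
    progress = max(0.0, min(elapsed_s / duration_s, 1.0))  # clamp to [0, 1]
    return progress * score_width_px
```

Because both the animation and the audio are driven by the same clock, performers reading the moving score stay aligned with the fixed media and with each other.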

 

CLICK THE IMAGE BELOW TO VIEW THE SCORE! Please zoom in and scroll to see details. 


Research led by Toshihisa Tsuruoka

Poem by Ashley Muniz

 

Soundwriter was developed with the goal of enriching the oral storytelling experience through music. It implements a real-time hybrid system in which the emotional state of a story is analyzed through both lexical and auditory features. For example, emotionally salient words are rated for arousal, valence, and dominance, while emotionally charged prosodic features of the speaker's voice (pitch contour, speech rate, and intensity) inform the final classification of the story's emotional state. The detected emotions then guide the music to dynamically reflect the story's progress. The system also introduces an interactive scoring algorithm that translates the detected emotions into musical figures for live musicians.
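The hybrid analysis described above can be sketched in a few lines of Python. This is a minimal illustration, not the actual Soundwriter implementation: the tiny valence/arousal/dominance lexicon, the neutral default, and the fixed blending weight are all hypothetical stand-ins.

```python
# Hypothetical VAD lexicon: word -> (valence, arousal, dominance), each in [0, 1].
VAD_LEXICON = {
    "joy":   (0.95, 0.80, 0.70),
    "storm": (0.30, 0.85, 0.40),
    "calm":  (0.80, 0.15, 0.60),
}

def lexical_vad(words):
    """Average the VAD ratings of the emotionally salient words in a passage."""
    hits = [VAD_LEXICON[w] for w in words if w in VAD_LEXICON]
    if not hits:
        return (0.5, 0.5, 0.5)  # no salient words: fall back to a neutral state
    n = len(hits)
    return tuple(sum(dim) / n for dim in zip(*hits))

def fuse(lexical, prosodic, weight=0.6):
    """Blend lexical and prosodic VAD estimates into one emotional state.

    `prosodic` stands in for a VAD estimate derived from pitch contour,
    speech rate, and intensity; `weight` sets the lexical/prosodic balance.
    """
    return tuple(weight * l + (1 - weight) * p for l, p in zip(lexical, prosodic))
```

The fused VAD triple would then be mapped onto musical parameters (mode, tempo, density) so the accompaniment tracks the story's emotional arc.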

This research was a sister project to the Bookscapes project, developed in collaboration with Dr. Tae Hong Park at New York University. Both research projects were published at the International Computer Music Conference (ICMC) 2019.

Research Paper - Generative Bookscapes: Towards Immersive and Interactive Book Reading || Published at the International Computer Music Conference 2019

CLICK THE IMAGE BELOW TO VIEW THE RESEARCH PAPER!

Research Paper - Soundwriter: Real-Time Music Generation for Oral Storytelling through Emotion Mapping || Published at the International Computer Music Conference 2019

Soundwriter Paper Presentation at ICMC 2019

The Ear Talk project enables people in remote locations to collaboratively share, shape, and form music through an interactive score. The idea was inspired by our desire to share photos and videos on social media. By limiting the shareable content to sound alone, however, the project challenges participants to pay closer attention to the sonic qualities of their environment. The Max/Jitter-hosted program “misuses” the Google ecosystem: it collects audio files as they are shared on Google Drive, organizes them into a visual score that is live-streamed on YouTube, and facilitates participants’ interactions with the score through the YouTube comment section via the Google API. Through this process, Ear Talk challenges social media, where arbitrarily collected content fails to form a single thread of meaning, by amalgamating sounds of different origins into one coherent piece.
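One piece of that pipeline, interpreting YouTube comments as score commands, can be sketched as a small parser. The command grammar below (e.g. "!mute 7" to silence sound #7) is a hypothetical example for illustration; the source does not specify the actual syntax Ear Talk used.

```python
import re

# Hypothetical command grammar for interacting with the live-streamed score
# via YouTube comments, e.g. "!louder 3" boosts sound #3, "!mute 7" silences #7.
COMMAND_RE = re.compile(r"^!(louder|softer|mute|loop)\s+(\d+)\s*$")

def parse_comment(text: str):
    """Return (action, sound_index) for a valid command comment, else None."""
    m = COMMAND_RE.match(text.strip())
    if not m:
        return None
    return m.group(1), int(m.group(2))
```

In a full system, comments fetched through the YouTube Data API would be run through a parser like this, and each recognized command would update the corresponding sound in the visual score.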

 

This project enabled disembodied collaboration and performances by the Consensus Ensemble in November 2019.


This research was accepted to the SEAMUS Conference 2020 as part of a community-engaged performance and workshop!


Research Paper - Ear Talk Project: Repurposing YouTube Live for Online Co-composition and Performance || ICMC 2020 submission in progress

Lilac

A Piece for Contemporary Ensemble

Composed by Toshihisa Tsuruoka

Performed by New York University Contemporary Music Ensemble

CLICK THE IMAGE BELOW TO VIEW THE SCORE!

Rokujo is a multichannel audio-visual installation for 6 vertical-array monitors and 21 speakers.

 

Installed at NYU Steinhardt, May–August 2019
