Web Portfolio

The Music and Sound Art of Anthony T. Marasco

This web portfolio presents a small collection of my compositions, installation works, academic research, and collegiate teaching experience. These works were created over the past four years and directly reflect my artistic interests and endeavors as a composer, sound artist, instrument builder, academic, and educator.

Works for Ensemble

A significant facet of my work as a composer and sound artist consists of works for various acoustic and electro-acoustic ensembles. Below you will find examples of recent works for ensemble, along with audio recordings and copies of their respective scores.

Mid-Century Marfa

for toy piano, plastorgan, Totem harp and electric fans

Winner of the UnCaged Toy Piano Festival's 2013 call for scores

Commissioned and premiered by Phyllis Chen

Inspired by one of the American Southwest’s most culturally significant and mysterious locations, Mid-Century Marfa paints a sonic portrait of the small town of Marfa, Texas and comments on the significant role it has played in popular culture, experimental artistic circles, and rural legacy. A town oft depicted in both vintage photographs and Instagram facsimiles, Marfa and its people consistently defy convention and somehow exist straddled between the Past and the Future, but nowhere near the Present.

Exploring the musical conventions and sonic extremities of cowboy lullabies, windswept plains, mid-century Texas rock ‘n’ roll, and contemporary experimental harmonies, Mid-Century Marfa provides a unique performance experience for adventurous toy pianists/multi-instrumental performers.

This piece requires the use of two instruments handmade by the composer: a plastorgan (built from a collection of plastic bottles with vertical slits carved into their bodies) and a Totem harp (an aeolian harp constructed out of an aluminum downspout and nylon fishing line). Both of these instruments are powered by electric fans. Pictures of these instruments can be found in the gallery below.



Weld

for iPad (Curtis granular synthesis app), four circuit-bent radios, and computer (6-channel audio playback)


Inspired by the Cantiere series of paintings by Italian artist Walter Trecchi, Weld is a multi-movement, improvisatory piece for iPad, circuit-bent radios, and prerecorded audio. The audio track is mixed as a 6-channel audio file, sending two channels through the house audio system and the remaining four channels through individual radios placed on stage around the performer. The iPad performer uses the Curtis granular synthesis app to manipulate a prerecorded sound file that contains raw samples of various sounds found throughout the backing audio track. The player manipulates this audio file by running their finger over the waveform displayed on the iPad’s screen, while also changing effects and sampling parameters such as reverb, delay, and grain size. Since the iPad performance is entirely improvised, there is no traditional score provided. An Audio Cue Sheet for the first movement only is included with these notes due to the dense textural layout of that movement. For the second movement, all iPad performance gestures, as well as circuit-bending moments, are designated by the performer.



Communiqué

for vocal quartet (SSAA), narrator, postcards, and computer (2-channel audio playback)


Commissioned and premiered by the Quince Contemporary Vocal Ensemble for their Fall 2012 Midwest Tour

In order to emulate the story’s focus on memory and the recurrence of unpredicted actions, each of the four vocalists is tasked with repeating cells of melodic and percussive material as many times as they’d like across a designated span of time. While many of these actions are unsynchronized, the piece calls for a few moments of unification between the four vocalists, resulting in swelling chords and moments of punctuated, parallel-motion voice leading. In addition to the repetitive approach to vocal performance, the prerecorded audio track includes skipping, glitching samples of organ drones and melodic figures—emulating a skipping mixed CD that would have surely been sent between these two characters as a slightly ironic romantic gesture—made with the Chocolate Grinder software designed by Rodrigo Constanzo. The story’s focus on phone and digital means of communication vs. physical postcards and letters also makes its way into the audio track in the form of electronic voltage sounds recorded with a teletap coil microphone being run over my laptop’s spinning disc drive, battery pack, and motherboard (you can see video of this process in action by heading here).

Communiqué also includes elements of theatricality that directly connect to the narration. In the middle of the piece, each vocalist takes a turn pulling a “prepared” postcard from a randomly shuffled pile and performs the musical passage attached to the back of it. While the repeated cells of musical material throughout the majority of the piece are to be performed with minimal expression and are short and precise in nature, these postcard passages are highly expressive and allow the vocalists to sing with a more operatic, soloistic approach in order to emulate Karen’s romantic yet cryptic confessions of love towards our male narrator. The juxtaposition between repetitive, mechanical performances and emotionally stirring, old-world operatic solo performances comments on Communiqué’s main discussions of digital vs. analog communication and carefully planned, rehearsed conversation vs. visceral, emotional confessions.



Ultraviolet Cleopatra

for flute, clarinet, electric violin, electric bass, percussion, computer (2-channel audio playback), and six-voice drone synth (with gong resonator speaker)

Pulling influence from Classical Era film scores, Spectralism, and Postmodern pop rock, Ultraviolet Cleopatra is a sonic meditation on strong women, Chillwave electronica, and the historical through line connecting Pop Culture Empresses of the past and present. Throughout the piece, the ensemble alternates between performing aleatoric, quasi-improvisational cells and scored material, overlapping impromptu abandon and strict decorum. A six-voice synthesizer shifts and slides through diatonic harmonies in its bass voices with a humming buzz reminiscent of fading neon lights as its treble voices—sent through custom-built surface transducer speakers—are amplified through gongs to create a cloud of shimmering overtones. A heavily processed electric violin creates echoing, pitch-delayed melody lines as if carefully tracing worn hieroglyphs on stone or somberly singing to a dazed crowd of thousands. The backing audio track recontextualizes us in pop-cultural time as the immortal words of Ancient Egypt’s Last Pharaoh lead us to the co-mingled strains of Virginia Hensley and Lizzie Grant, stretching like an aural tablet for the ensemble to carve their final remarks into.


Works for Interactive Computer Systems/Digital apps

Another focus of my work centers on creating interactive computer systems and interactive apps where non-traditional performers and/or audience members can create complex works of sound art through their participation. Below you will find two examples of recent works for interactive computer systems/digital apps, along with audio or video recordings. Copies of their respective scores or performance materials are also included where applicable.

Out of Conte#t 

an interactive piece for live Twitter data and hashtag-assigned musical material

Commissioned by WIRED Magazine and published in the June 2013 issue of The Connective, a digital-only special issue. Available for download in the WIRED Magazine Hub for tablet devices (iPads, Nooks, Android devices). Click here for details on how to download this issue.

Screen capture of Out of Conte#t UI in WIRED magazine's May 2013 "The Connective" special issue

Out of Conte#t allows users to create their own unique sonic landscape by interacting with hashtags outside of a tweet-based context, much as we are already becoming accustomed to doing by adding them into everyday conversation online or in person.

The ten hashtags listed on the feature's page were selected from the comment thread of a recent Facebook post that asked my friends to list their favorite hashtags used in everyday conversation and text messages, not just on Twitter. I then used the SoundPlant software program (developed by Marcel Blum) to map fragments of sound (bouncy synth notes, drones, metal mixing bowl percussion, warped fragments of ambient sound, etc.) to the letters on my QWERTY keyboard, turning it into a digital orchestra of electronic sounds. By typing out the words that comprise each hashtag, I was able to compose and record individual lines of melody that are then layered on top of the background music whenever the digital app detects a tweet that includes the corresponding hashtag.
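The layering behavior described above can be sketched in a few lines of Python. This is an illustrative assumption of how such logic might work, not the actual code behind the WIRED feature; the hashtag names and file names are hypothetical placeholders.

```python
# Hypothetical set of tracked hashtags (the real feature used ten,
# drawn from the Facebook thread described above).
TRACKED_HASHTAGS = {"#fml", "#winning", "#blessed"}

# Each tracked hashtag maps to a prerecorded melody line
# (a hypothetical audio file path).
MELODY_LAYERS = {tag: f"melody_{tag.strip('#')}.wav" for tag in TRACKED_HASHTAGS}

def layers_for_tweet(tweet_text, active_layers):
    """Scan one incoming tweet for tracked hashtags and return the updated
    set of melody files to layer over the background music."""
    words = {word.lower() for word in tweet_text.split()}
    found = words & TRACKED_HASHTAGS
    return active_layers | {MELODY_LAYERS[tag] for tag in found}
```

Each detected hashtag simply adds its prerecorded melody to the set of active layers, so the texture thickens as matching tweets arrive.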


Installation Works

As a sound artist, my interest in creating interactive and engaging sonic art environments extends to the creation of installation works. Below you will find examples of recent sound art installations, along with video recordings. Copies of the performance materials are also included where applicable.

For the Man Who Has Everything (2015)

An interactive installation for one or two participants

Created during residencies at Signal Culture and the Montreal Contemporary Music Lab; premiered at Eastern Bloc, Montreal, June 2015

(MIDI Sprout biorhythmic sensor, conductive headband, tactile sensors, four-channel sound, video, custom software, Ableton Live)

How do our interactions with other people resonate in our minds? Can strangers or loved ones accurately understand the intent of your actions towards them when outside circumstances or unknown factors often warp their meaning? How can we be sure that we are helping someone if we do not know how they interpret the concept of “help”? This installation provides two stations of interaction: one that is passive, and one that is active. Each role you play will yield different reactions from the generative software, and when working in tandem with someone else, each of your actions will affect the other’s, resulting in an ever-changing audiovisual environment.

A wall, allowing no visual contact between the participants, separates the stations. At one station, a participant is presented with a looping audiovisual environment and a headband, which is connected to the MIDI Sprout biorhythmic sensor. When the headband is worn, the participant’s galvanic skin response is measured, converted into a stream of MIDI data, and analyzed by the custom Max/MSP software. The results of the live MIDI stream analysis cause the software to shuffle, loop, or stutter through a collection of video clips, as well as control audio modulations on the backing drone. Once removed, the environment returns back to its “null” state.
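The sensor-to-video chain described above can be illustrated with a simplified sketch: a window of incoming MIDI values is classified into a playback behavior. This is an assumption-laden stand-in for the actual Max/MSP patch; the thresholds and behavior names are invented for illustration.

```python
def classify_activity(midi_values):
    """Classify a window of MIDI controller values (0-127) streamed from the
    biorhythmic sensor into a video playback behavior. Thresholds are
    hypothetical, chosen only to show the shape of the logic."""
    if not midi_values:
        return "null"      # headband removed: environment returns to its null state
    spread = max(midi_values) - min(midi_values)
    if spread > 60:
        return "stutter"   # highly volatile readings: stutter through video clips
    if spread > 20:
        return "shuffle"   # moderate change: shuffle the clip order
    return "loop"          # steady signal: loop the current clip
```

The same classification could also drive the audio modulations on the backing drone, keeping sound and image responsive to one shared data stream.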

On the opposite side of the wall, a participant is presented with a stark environment, populated only by a tactile-sensor control box. Running a finger along the circular sensor and pressing the buttons results in a Markov-model-based performance of percussive tone colors (non-pitched sounds, as well as pitched chimes). Each touch or press allows the custom software to change and manipulate the probability tables that determine the performance for these percussive sounds, and this constantly changing set of controls presents a beguiling and somewhat frustrating experience to participants on this side of the installation. Currently, these sounds are generated from a sampler, but a future iteration will instead incorporate a robotic percussion instrument, allowing for an acoustic sound source for this side of the installation.
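A minimal Python sketch can show how a Markov-model performance with mutable probability tables behaves. This is not the installation's actual software; the sound names, weight ranges, and perturbation scheme are illustrative assumptions.

```python
import random

# Hypothetical percussive sound names (non-pitched hits and pitched chimes).
SOUNDS = ["wood_hit", "metal_hit", "chime_c", "chime_g"]

# Transition table: current sound -> weight for each possible next sound.
# Starting with uniform weights.
table = {s: {t: 1.0 for t in SOUNDS} for s in SOUNDS}

def next_sound(current):
    """Pick the next percussive sound according to the current weights."""
    choices, weights = zip(*table[current].items())
    return random.choices(choices, weights=weights)[0]

def on_touch(current):
    """Each sensor touch perturbs the probability table, so repeating the
    same gesture never guarantees the same musical result."""
    for target in SOUNDS:
        table[current][target] = max(
            0.1, table[current][target] + random.uniform(-0.5, 0.5)
        )
```

Because every touch rewrites the weights that govern the next choice, the participant can influence the texture but never fully control it, which produces the "beguiling and somewhat frustrating" quality described above.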

 When both stations of the installation are occupied, the actions of each participant directly cause changes in the environments on both sides of the wall. Engaging with the sensor box now distorts the video projection and creates drastic pitch modulations on the headband-side of the wall; wearing the headband switches the routing of the sensor box’s controls, resulting in the inclusion of more chime pitches and sending the percussive tone colors through a heavy granular delay. Eventually, audio from each side of the wall will blend into the other side’s channels, as if the wall between both sides is beginning to crack. 


Walking Distance

for open, acoustic instrumentation and audience-provided electronics 

Walking Distance is an interactive, musical installation piece that strives to recreate and reinterpret various sonic atmospheres that were captured across the town of Clarks Summit, Pennsylvania on a handful of specific dates. Audience members travel at their own leisure through a pre-set walking path that emulates the various streets, landmarks, and locales of the town and pass by “clusters” of performers stationed at various waypoints along the path.

While traveling the path, audience members with smartphones are encouraged to scan various QR codes that are placed at select intersections and waypoints. These QR codes link to streaming audio files that—when played back through the audience members’ phones—will add an additional sonic element to their experience of the installation and create unique textural and tonal combinations with the music created by whichever ensemble members they happen to be near.

Members of the performance ensemble are encouraged to change their location throughout the duration of the piece, allowing them the opportunity to perform and experience a multitude of musical and sonic material throughout the installation. These variables provide the ensemble with a piece that can be performed with an extremely flexible duration and instrumentation, and will never sound exactly the same across multiple performances.

The world premiere took place on April 4th, 2013 at Lebanon Valley College in Annville, PA.

Download a copy of the performance materials (pathways set-up, musical score, etc.) by clicking here


Laying of Hands

Laying of Hands is an interactive installation piece centered on physical objects, images, and sonic landscapes that evoke a feeling of nostalgia, family, and history. While the piece is occurring, audience members file into the room and interact with photographs, letters, and tape recorders/players that are located on tables stationed around the perimeter of the room. Microphones are placed near the surface of these tables to pick up the sounds of letters and photographs being shuffled, bent, and flapped, while audio and mechanical noise from the tape players are routed through a small mixer. These ambient sounds are then processed by custom-made software running on a laptop.

While the audience members interact with the items placed on the tables, the melodica, electric guitar, and iPad (Thicket app) performers freely improvise—within predetermined tonal constructs—based off of phrases, pictures, and musical fragments located on the included set of Performance Cards.

This video was taken at the world premiere, which took place on March 21st, 2013 at the Hope Horn Gallery of the University of Scranton in Scranton, PA.


Experimental Instrument Building, Software Creation, & Free Improvisation

In addition to my experience as a composer and sound artist, I also have experience building and designing experimental acoustic and electronic instruments. Examples of my homemade instruments and controllers can be found in the picture gallery below.

As a performer, I specialize in free improvisation with simple circuit synthesizers (developed by instrument makers such as John Mike Reed of bleeplabs.com and Chris Kucinski and Owen Osborn of critterandguitari.com), homemade software patches built in Max/MSP, and unconventional instrument preparations. The video below shows an example of one such performance.

On Friday, December 13th, composer Anthony Marasco, winner of the 2013 UnCaged Toy Piano Festival call for scores, performed a series of new and improvised pieces with guitarist Michael Greinke. Drawing from unique sound sources such as homemade acoustic instruments, simple circuit synthesizers, and radical guitar preparations, the duo focuses on a communal approach to building sonic landscapes.

Prepared Trouble Board