• George Lewis
  • The Reincarnation of Blind Tom (2024)
    (Double Concerto for Human Soloist, AI-Pianist and Orchestra)

  • C.F. Peters Corporation (World)

Commissioned by SWR for Donaueschinger Musiktage 2024


Unavailable for performance.

  • soloist,pf + 3(I:pic).3.3(I:bcl).3(I:cbn[=contraforte])/4.3.3.1/timp.3perc/pf.hp/electronics/str
  • Human Soloist (any inst or voice), Computer-Improvising Piano
  • 20 min

Programme Note

The Reincarnation of Blind Tom (2024), double concerto for symphonic orchestra, improvising human soloist, and improvising AI-pianist.

Czech writer Karel Čapek’s play R.U.R. [Rossum’s Universal Robots, 1920] imagines conflict between human capitalists and a new source of labor, the “robota,” a term that has come down to us in various languages as “robot.” Čapek’s model for the play, the robota system of Bohemian serfdom, endured for about four hundred years, and was similar to the later American “sharecropping.” While Čapek does not directly refer to US chattel slavery, his play presents the robotic equivalent of “house” slaves, robots who do the cooking and cleaning, and “field” slaves, robots who work the farms and factories.

On occasion, US slave musicians would somehow break out from these restrictions. Perhaps the most celebrated 19th-century example was composer-pianist Thomas “Blind Tom” Wiggins. Born under slavery in 1849, Wiggins performed at the White House at the age of ten, and became one of the most famous American composer-pianists of his time - probably the first Black American composer to achieve that status, at a time when American art music was in its infancy. Blind Tom had a repertoire of over five thousand works, including Beethoven’s Sonata Pathétique, Bach fugues, and works by Chopin, Mendelssohn, Rossini, Verdi, Meyerbeer, and Liszt. He also performed his own compositions for piano, including his “imitations” of natural phenomena and machines. His most famous piece, The Battle of Manassas, which he wrote at the age of 14 in 1863, uses notated piano clusters to evoke the sounds of bombs and battles, more than 50 years before Henry Cowell and the Futurists.

For both robots and slaves there is a denial of subjecthood and of the capacity for free expression. But critical theorist Fred Moten’s important insight is that subjecthood can be heard. In R.U.R., the robots that could play music were considered more advanced, closest to being human. As a slave, Blind Tom was a mere commodity, but as a musician, as Moten says, “If the commodity could speak, it would be imbued with a certain spirit.”

In The Reincarnation of Blind Tom, Wiggins is metaphorically reincarnated as an AI - part of my ongoing exploration of what the decolonial might sound like, presenting new identities and histories for classical music - not so much to achieve “diversity,” but to foster a new complexity that promises far greater creative depth. This concerto features two soloists - soprano saxophonist Roscoe Mitchell, and Voyager, an interactive “virtual improvisor” program originally programmed by me and continually updated since that time with Damon Holzborn as primary collaborator. I have been generating music using algorithmic techniques since 1979, and Voyager is an outgrowth of Rainbow Family, for three networked computer systems and four human improvisors, which premiered at IRCAM in 1984 - to my knowledge one of the first interactive AI works performed there. Voyager first appeared in 1987 as a concerto in which one or more human players interacted with a 64-voice multitimbral and microtonal “electronic virtual orchestra” of synthesized voices. In 2004, Voyager became an improvising pianist, performing on a computer-controlled acoustic piano, the Yamaha Disklavier. This version of Voyager made its Carnegie Hall debut (perhaps the first interactive computer pianist to do so) as soloist with the American Composers Orchestra in my work Virtual Concerto (2004). It is the latest version of this pianist-system that appears as a soloist in the present work.

Musical AI now offers a wide range of tools for generating music. However, programs like Rainbow Family and Voyager also depend on real-time recognition and classification of musical gestures during live performance, an area in which musical AI has developed relatively little. Voyager has its own recognition systems, but as a pre-ML system it had no classification system to speak of. Thus, for the past two years I have been pursuing research at the Centre for Practice and Research in Science and Music (PRiSM) at the Royal Northern College of Music (RNCM), one of the premier sites for the development of musical AI. In November 2022 this research culminated in the first version of musical gesture recognition (MGR) software, which was used in my work Forager (2022) for quintet and a Voyager interactive computer pianist.

The version of Voyager used in Reincarnation uses a machine learning-based MGR, developed by PRiSM composer-researcher Hongshuo Fan, to analyze the sounds of the orchestra and the saxophone soloist in real time, using that analysis to influence its responses to the musicians’ playing. The PRiSM MGR enables Voyager to recognize specific musical gestures played by the saxophonist and the orchestra; additionally, Voyager creates independent behavior that arises from its own internal processes. The orchestra’s part of the proceedings is fully notated, serving as a non-improvised substrate for the work as a whole. The aim is to enable the human and computer musicians to communicate with each other using a lexicon that is co-created by the musicians, the computer, and the composer. Here, both the computer and the human soloists have the freedom to express and interpret the sonic behaviors active in the overall musical environment.

Dominant current methodologies for machine learning in music depend crucially on imitation, typically of a corpus of behaviors. However, as I observed in an article published a quarter-century ago, notions about the nature and function of music inevitably become embedded in musical software; interactions with these systems reveal characteristics of the community of culture that produced them. Are algorithms that “learn” to produce variations on a corpus ultimately reproducing the cultural values embedded in that corpus? If so, how can we create new musical and cultural values from an existing corpus? And what might those values end up becoming? This work asks whether and how we can move beyond the corpus, beyond imitation.

This kind of music-making bears implications far beyond the aesthetic, not only raising questions regarding how we now experience artificial intelligence in the world, but also challenging our understanding of the human.

More Info