Charles Martin

SMCCLAB: Sound, Music and Creative Computing Lab

The Sound, Music and Creative Computing Lab is part of the School of Computing at the Australian National University.

The goal of the lab is to create new kinds of musical instruments that sense and understand music. These instruments will actively respond during performances to assist musicians.

Performing on touchscreens and percussion

We envision that musical instruments of the future will do more than react to musicians. They will predict their human player’s intentions and sense the current artistic context. Intelligent instruments will use this information to shape their sonic output. They might seamlessly add expression to sounds, update controller mappings, or even generate notes that the performer hasn’t played (yet!).

The idea here is not to put musicians out of work. We want to create tools that allow musicians to reach the highest levels of artistic expression, and that assist novice users in experiencing the excitement and flow of performance. Imagine an expert musician recording themselves on different instruments in their studio, and then performing a track with a live AI-generated ensemble, trained in their style. Think of a music student who can join their teachers in a jazz combo, learning how to follow the form of the song without worrying about playing wrong notes in their solo.

We think that combining music technology with AI and machine learning can lead to a plethora of new musical instruments. Our mission is to develop new intelligent instruments, perform with them, and bring them to a broad audience of musicians and performers. Along the way, we want to find out what intelligent instruments mean to musicians, to their music-making process, and what new music these tools can create!

Our work combines three cutting-edge fields of research:

  • Expressive Musical Sensing: Understanding how music is played and what performers are doing. This involves hardware prototyping, creating new hyper-instruments, and applying cutting-edge sensors.
  • Musical Machine Learning: Creating and training predictive models of musical notes, sounds, and gestures. This includes applying symbolic music generation techniques to understand scores and MIDI data, and music information retrieval to “hear” music in audio data.
  • Musical Human-Computer Interaction: Finding new ways for predictive models to work with musicians, and analysing the musical experiences that emerge.

Projects

Here’s some information about musical AI projects we have worked on.

Intelligent Musical Prediction System (IMPS)

The Intelligent Musical Prediction System (IMPS) connects musicians and interface developers with deep neural networks. IMPS works with any musical interface or software using open sound control (OSC) and helps users record a dataset, train a neural network, and interact with it in real-time performance. See it in action in the demo video below:
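
To give a sense of the OSC plumbing, here is a minimal Python sketch using the python-osc library: a client sends interface values to IMPS, and a small server listens for the predictions that come back. The addresses and ports (/interface, /prediction, 5001, 5000) are assumptions for illustration only; check your IMPS configuration for the real values.

```python
# Minimal sketch of exchanging OSC messages with IMPS via python-osc.
# The addresses and ports below are assumptions, not documented IMPS defaults.
from pythonosc.udp_client import SimpleUDPClient
from pythonosc.dispatcher import Dispatcher
from pythonosc.osc_server import BlockingOSCUDPServer

# Send the interface's current control values to IMPS.
client = SimpleUDPClient("127.0.0.1", 5001)      # assumed IMPS input port
client.send_message("/interface", [0.42, 0.73])  # e.g. two sensor values in [0, 1]

# Listen for predicted control values coming back and route them to synthesis.
def handle_prediction(address, *values):
    print(f"IMPS predicted: {values}")  # replace with a call into your synth

dispatcher = Dispatcher()
dispatcher.map("/prediction", handle_prediction)                # assumed return address
server = BlockingOSCUDPServer(("127.0.0.1", 5000), dispatcher)  # assumed return port
server.serve_forever()
```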

Physical Musical RNN

This project developed a physically encapsulated musical neural network. The box contains a Raspberry Pi running a melody-generating recurrent neural network that continually composes music. You can adjust the sound, tempo, the ML model used, and the “randomness” of the chosen samples to guide the music-making process.
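
As an illustration of what a “randomness” control does, here is a small Python sketch of temperature sampling from a melody model’s output distribution. The model itself is abstracted away; `logits` simply stands in for its raw scores over 128 MIDI pitches, and the names are ours, not the project’s code.

```python
# Sketch of temperature ("randomness") sampling from a melody model's output.
import numpy as np

def sample_pitch(logits, temperature=1.0):
    """Pick a MIDI pitch: low temperature favours the most likely note,
    high temperature makes the choice more random."""
    scaled = logits / temperature
    probs = np.exp(scaled - np.max(scaled))   # numerically stable softmax
    probs /= probs.sum()
    return int(np.random.choice(len(probs), p=probs))

logits = np.random.randn(128)                 # pretend RNN output for one step
print(sample_pitch(logits, temperature=0.5))  # conservative, repetitive melodies
print(sample_pitch(logits, temperature=1.5))  # adventurous, more surprising notes
```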

PhaseRings for ML-connected touchscreen ensemble

PhaseRings is a touchscreen instrument that works within an ML-connected ensemble. A server tracks the four performers’ improvisations and adjusts their user interfaces during the performance to give them access to different notes and sounds on their screens.
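
A hypothetical sketch of the server’s role, in Python: watch each performer’s recent touch activity and decide when to hand them a new set of pitches. The thresholds, pitch sets, and function names are invented for illustration and are not the PhaseRings server’s actual logic.

```python
# Invented sketch of an ensemble server deciding when to refresh a performer's notes.
import random

PITCH_SETS = [[0, 2, 4, 7, 9], [0, 3, 5, 7, 10], [0, 1, 5, 7, 8]]  # example scales

def choose_pitch_set(touch_rate, current_set):
    """Give busy performers a change of material; leave sparse players alone."""
    if touch_rate > 5.0:  # touches per second; an arbitrary threshold
        return random.choice([s for s in PITCH_SETS if s != current_set])
    return current_set

# e.g. for one performer: new_set = choose_pitch_set(7.2, PITCH_SETS[0])
```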

Self-playing, sensor-driven guitars

This installation of six self-playing, sensor-driven guitars was developed through collaborations at the University of Oslo’s RITMO Centre for Interdisciplinary Studies in Rhythm, Time and Motion. Each guitar uses a Bela embedded computer to generate sounds through a speaker driver attached to the guitar body. A distance sensor tracks the movement of listeners in the environment, and the guitars use a firefly synchronisation algorithm to phase in and out of time with each other.
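
The firefly idea can be sketched with pulse-coupled oscillators: each guitar’s phase ramps towards a strum, and every strum nudges the others a little closer to firing together. The Python below is an illustrative simulation of that general algorithm, not the installation’s Bela code; all constants are made up.

```python
# Illustrative firefly (pulse-coupled oscillator) simulation for six guitars.
import random

NUM_GUITARS = 6
DT = 0.01          # simulation time step in seconds
PERIOD = 2.0       # nominal seconds between strums
NUDGE = 0.03       # how strongly a strum pulls the others into phase

phases = [random.random() for _ in range(NUM_GUITARS)]  # start out of time

def step(phases):
    """Advance every oscillator; when one 'fires', nudge the rest forward."""
    fired = []
    for i in range(len(phases)):
        phases[i] += DT / PERIOD
        if phases[i] >= 1.0:
            phases[i] = 0.0
            fired.append(i)          # this guitar strums now
    for i in fired:
        for j in range(len(phases)):
            if j != i:
                phases[j] = min(1.0, phases[j] + NUDGE)
    return fired

for _ in range(1000):
    strummed = step(phases)
    # trigger a sound on each guitar index in `strummed`
```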


Embodied Predictive Musical Instrument (EMPI)

The EMPI is a minimal electronic musical instrument for experimenting with predictive interaction techniques. It includes a single physical input (a lever), a matching physical output, a built-in speaker, and a Raspberry Pi for sound synthesis and ML computations.
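
To illustrate the style of predictive interaction EMPI is built for, here is a rough Python sketch of a call-and-response control loop: read the lever, and when the human pauses, let a model continue the motion on the physical output. The model and I/O functions here are placeholders, not EMPI’s actual implementation.

```python
# Placeholder sketch of a predictive call-and-response loop for a one-lever instrument.
import time

def read_lever():        # placeholder for the ADC read on the real instrument
    return 0.5

def move_output(value):  # placeholder for driving the physical output
    pass

def predict_next(history):
    """Stand-in for an RNN: here, just continue the most recent motion."""
    if len(history) < 2:
        return history[-1] if history else 0.5
    return max(0.0, min(1.0, history[-1] + (history[-1] - history[-2])))

history, last_human_move = [], time.time()
while True:
    value = read_lever()
    if not history or abs(value - history[-1]) > 0.01:   # human is playing
        history.append(value)
        last_human_move = time.time()
    elif time.time() - last_human_move > 1.0:            # human has paused
        value = predict_next(history)
        history.append(value)
        move_output(value)                               # instrument takes over
    time.sleep(0.02)  # ~50 Hz control loop
```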