SMCCLAB: Sound, Music and Creative Computing Lab

The Sound, Music and Creative Computing Lab is part of the School of Computing at the Australian National University.

The goal of the lab is to create new kinds of musical instruments that sense and understand music. These instruments will actively respond during performances to assist musicians.

[Image: Performing on touchscreens and percussion.]

We envision that musical instruments of the future will do more than react to musicians. They will predict their human player’s intentions and sense the current artistic context. Intelligent instruments will use this information to shape their sonic output. They might seamlessly add expression to sounds, update controller mappings, or even generate notes that the performer hasn’t played (yet!).

The idea here is not to put musicians out of work. We want to create tools that allow musicians to reach the highest levels of artistic expression, and that assist novice users in experiencing the excitement and flow of performance. Imagine an expert musician recording themselves on different instruments in their studio, and then performing a track with a live AI-generated ensemble, trained in their style. Think of a music student who can join their teachers in a jazz combo, learning how to follow the form of the song without worrying about playing wrong notes in their solo.

We think that combining music technology with AI and machine learning can lead to a plethora of new musical instruments. Our mission is to develop new intelligent instruments, perform with them, and bring them to a broad audience of musicians and performers. Along the way, we want to find out what intelligent instruments mean to musicians, to their music-making process, and what new music these tools can create!

Our work combines three cutting-edge fields of research:

  • Expressive Musical Sensing: Understanding how music is played and what performers are doing. This involves hardware prototyping, creating new hyper-instruments, and applying cutting-edge sensors.
  • Musical Machine Learning: Creating and training predictive models of musical notes, sounds, and gestures. This includes applying techniques from symbolic music generation to understand scores and MIDI data, and from music information retrieval to “hear” music in audio data (see the sketch after this list).
  • Musical Human-Computer Interaction: Finding new ways for predictive models to work with musicians, and analysing the musical experiences that emerge.
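
As a concrete taste of what a predictive model of musical notes can look like, here is a minimal sketch of a next-note predictor trained on a toy stand-in for a MIDI corpus. It is a generic Keras example for illustration only, not code from a lab project.

```python
# Minimal sketch of symbolic next-note prediction (toy data, generic Keras model).
# Illustrative example only, not code from an SMCCLab project.
import numpy as np
import tensorflow as tf

SEQ_LEN, VOCAB = 32, 128                               # 128 MIDI pitches
notes = np.random.randint(0, VOCAB, size=5000)         # stand-in for a real MIDI corpus

# Slice the note stream into (input sequence, next note) training pairs.
X = np.stack([notes[i:i + SEQ_LEN] for i in range(len(notes) - SEQ_LEN)])
y = notes[SEQ_LEN:]

model = tf.keras.Sequential([
    tf.keras.layers.Embedding(VOCAB, 64),
    tf.keras.layers.LSTM(128),
    tf.keras.layers.Dense(VOCAB, activation="softmax"),  # distribution over the next pitch
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
model.fit(X, y, epochs=1, batch_size=64)

# The trained model gives a probability for every possible next note.
probs = model.predict(X[:1])[0]
print("most likely next MIDI pitch:", int(probs.argmax()))
```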

Current Lab Members

(list of current lab members)

Lab Pages

  • Join the SMCClab: How to learn about sound, music and creative computing in the lab.
  • Lab alumni and graduated students: Links to research from lab alumni.
  • PhD milestone expectations: A guide to completing research degree milestones in the SMCClab.
  • SMCClab projects on gesture, collaboration and intelligence: Focussing on four project areas in 2024 and beyond.
  • How to get started with research writing: Advice on writing your first research report or thesis.
  • How to present a student software project: Turning your code into a presentable artifact.

SMCClab Projects

Projects from Charles and other members of the lab.

[Image: A laptop and MIDI interface setup for an IMPS performance in a concert hall.]
Intelligent Musical Prediction System
The Intelligent Musical Prediction System (IMPS) is a system for connecting musicians and interface developers with deep neural networks. IMPS connects with any musical interface or software using Open Sound Control (OSC) and helps users to record a dataset, train a neural network, and interact with it in real-time performance.
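
To give a flavour of what connecting an interface to a predictive system over OSC looks like, here is a minimal python-osc sketch. The addresses and port numbers are placeholders rather than IMPS's documented defaults, so check the IMPS documentation before using them.

```python
# Sketch of an interface talking to an IMPS-style predictor over OSC (python-osc).
# The /interface and /prediction addresses and the ports are placeholders, not
# necessarily IMPS's actual defaults.
from pythonosc import dispatcher, osc_server, udp_client

# Send the current state of a (hypothetical) two-parameter controller to the predictor.
client = udp_client.SimpleUDPClient("127.0.0.1", 5001)   # placeholder port
client.send_message("/interface", [0.42, 0.77])          # placeholder address

# Listen for predicted control values coming back.
def on_prediction(address, *values):
    print(f"{address}: predicted controls {values}")

disp = dispatcher.Dispatcher()
disp.map("/prediction", on_prediction)                    # placeholder address
server = osc_server.BlockingOSCUDPServer(("127.0.0.1", 5000), disp)
server.serve_forever()                                    # Ctrl-C to stop
```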
[Image: The physical musical RNN, a black box with a small screen and five knobs.]
Physical Musical RNN
This project developed a physically encapsulated musical neural network. The box contains a Raspberry Pi running a melody-generating recurrent neural network that continually composes music. You can adjust the sound, the tempo, the ML model used, and the "randomness" of the chosen samples to guide the music-making process.
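
The "randomness" knob on a box like this usually maps to a sampling temperature applied to the network's output distribution: low temperature gives safe, repetitive melodies, high temperature gives surprising ones. A generic sketch of that idea (the box's actual code may differ):

```python
# Generic temperature sampling from a melody model's output; illustrative only.
import numpy as np

def sample_with_temperature(logits, temperature=1.0, rng=None):
    """Choose the next MIDI pitch from raw model outputs (logits)."""
    rng = rng or np.random.default_rng()
    scaled = np.asarray(logits, dtype=np.float64) / max(temperature, 1e-6)
    probs = np.exp(scaled - scaled.max())     # numerically stable softmax
    probs /= probs.sum()
    return int(rng.choice(len(probs), p=probs))

# Toy usage: pretend these logits over 128 pitches came from the melody RNN.
fake_logits = np.random.default_rng(0).normal(size=128)
print(sample_with_temperature(fake_logits, temperature=0.5))   # tamer choice
print(sample_with_temperature(fake_logits, temperature=1.5))   # wilder choice
```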
[Image: Musicians performing on ML-enhanced touchscreen instruments.]
PhaseRings for ML-connected touchscreen ensemble
PhaseRings is a touchscreen instrument that works with an ML-connected ensemble. A server tracks the four performers' improvisations and adjusts their user interfaces during the performance to give them access to different sounds on their screens.
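
As a rough, hypothetical sketch of what such an ensemble server could do (this is not the PhaseRings implementation): track each performer's recent touch activity and pick a new sound set for everyone when the ensemble's overall energy changes.

```python
# Hypothetical ensemble-tracking rule for a touchscreen server; not PhaseRings code.
import time
from collections import defaultdict, deque

class EnsembleTracker:
    def __init__(self, window_seconds=10.0, busy_threshold=2.0):
        self.window = window_seconds
        self.busy_threshold = busy_threshold      # touches/second that counts as "busy"
        self.touches = defaultdict(deque)         # performer id -> recent touch timestamps

    def record_touch(self, performer_id):
        now = time.time()
        q = self.touches[performer_id]
        q.append(now)
        while q and now - q[0] > self.window:     # keep only touches inside the window
            q.popleft()

    def ensemble_activity(self):
        """Mean touches per second across performers over the recent window."""
        rates = [len(q) / self.window for q in self.touches.values()]
        return sum(rates) / len(rates) if rates else 0.0

    def choose_sound_set(self):
        """Pick the sound set to push to every performer's screen."""
        return "percussive" if self.ensemble_activity() > self.busy_threshold else "sustained"
```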
[Image: Self-playing, sensor-driven guitars.]
Self-playing, sensor-driven guitars
This installation of six self-playing, sensor-driven guitars was developed at the RITMO Centre for Interdisciplinary Studies in Rhythm, Time and Motion. Each guitar uses a distance sensor to track the movement of listeners in the environment, and sounds from an embedded computer are played through a speaker driver attached to the guitar body.
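
A simplified sketch of the kind of sensor-to-sound mapping each guitar might use (the helper names here are hypothetical, not the installation's code): poll the distance sensor and map nearer listeners to louder, denser plucking.

```python
# Hypothetical distance-to-sound mapping loop for one guitar; illustrative only.
import time

def read_distance_cm():
    """Stand-in for the real distance-sensor driver on the embedded computer."""
    return 150.0                                   # placeholder reading

def map_range(x, in_lo, in_hi, out_lo, out_hi):
    """Linearly map x from [in_lo, in_hi] to [out_lo, out_hi], clamped."""
    x = min(max(x, in_lo), in_hi)
    return out_lo + (x - in_lo) * (out_hi - out_lo) / (in_hi - in_lo)

while True:
    d = read_distance_cm()                         # roughly 20 cm (close) to 300 cm (far)
    amplitude = map_range(d, 20, 300, 1.0, 0.1)    # closer listener -> louder
    pluck_rate = map_range(d, 20, 300, 8.0, 0.5)   # closer listener -> denser plucking
    # send amplitude and pluck_rate to the synthesis patch here (e.g. via OSC)
    time.sleep(1.0 / 30)                           # poll about 30 times a second
```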
[Image: The EMPI, a small white box with a screen and two control arms.]
Embodied Predictive Musical Instrument (EMPI)
The EMPI is a minimal electronic musical instrument for experimenting with predictive interaction techniques. It includes a single physical input (a lever), a matching physical output, a built-in speaker, and a Raspberry Pi for sound synthesis and ML computation.
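
A simplified sketch of the predictive interaction loop the EMPI is built to explore (the function names are hypothetical stand-ins, not the EMPI source): mirror the player's lever while they are moving, and let the model drive the physical output when they pause.

```python
# Hypothetical predictive-interaction loop for a one-lever instrument; not EMPI source.
import time
from collections import deque

IDLE_AFTER = 1.0                 # seconds of stillness before the model takes over
history = deque(maxlen=50)       # recent lever positions, newest last
last_moved = time.time()

def read_lever():
    """Stand-in for reading the lever position (0.0 to 1.0)."""
    return 0.5

def predict_next(positions):
    """Stand-in for the gesture model; here it simply holds the last position."""
    return positions[-1] if positions else 0.5

def move_output(position):
    """Stand-in for driving the servo on the matching physical output."""
    print(f"output lever -> {position:.2f}")

while True:
    pos = read_lever()
    if history and abs(pos - history[-1]) > 0.01:    # did the player move the lever?
        last_moved = time.time()
    history.append(pos)
    if time.time() - last_moved > IDLE_AFTER:        # player idle: model continues the gesture
        move_output(predict_next(list(history)))
    else:                                            # player active: mirror their movement
        move_output(pos)
    time.sleep(0.05)
```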