SMCCLAB: Sound, Music and Creative Computing Lab
The Sound, Music and Creative Computing Lab is part of the School of Computing at the Australian National University.
The goal of the lab is to create new kinds of musical instruments that sense and understand music. These instruments will actively respond during performances to assist musicians.
We envision that musical instruments of the future will do more than react to musicians. They will predict their human player’s intentions and sense the current artistic context. Intelligent instruments will use this information to shape their sonic output. They might seamlessly add expression to sounds, update controller mappings, or even generate notes that the performer hasn’t played (yet!).
The idea here is not to put musicians out of work. We want to create tools that allow musicians to reach the highest levels of artistic expression, and that assist novice users in experiencing the excitement and flow of performance. Imagine an expert musician recording themselves on different instruments in their studio, and then performing a track with a live AI-generated ensemble, trained in their style. Think of a music student who can join their teachers in a jazz combo, learning how to follow the form of the song without worrying about playing wrong notes in their solo.
We think that combining music technology with AI and machine learning can lead to a plethora of new musical instruments. Our mission is to develop new intelligent instruments, perform with them, and bring them to a broad audience of musicians and performers. Along the way, we want to find out what intelligent instruments mean to musicians and their music-making process, and what new music these tools can create!
Our work combines three cutting-edge fields of research:
- Expressive Musical Sensing: Understanding how music is played and what performers are doing. This involves hardware prototyping, creating new hyper-instruments, and applying state-of-the-art sensors.
- Musical Machine Learning: Creating and training predictive models of musical notes, sounds, and gestures. This includes applying symbolic music generation techniques to understand scores and MIDI data, and music information retrieval to “hear” music in audio data. A minimal sketch of such a predictive model follows this list.
- Musical Human-Computer Interaction: Finding new ways for predictive models to work with musicians, and analysing the musical experience that emerges.
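To make the machine-learning strand concrete, here is a minimal sketch of a predictive model of musical notes: a toy first-order Markov chain over MIDI note numbers that learns transitions from a short melody and then samples a continuation. The melody, the note representation, and the model itself are illustrative assumptions for this page, not the lab's actual systems, which use much richer models.

```python
import random
from collections import defaultdict

# Toy first-order Markov model over MIDI note numbers.
# Illustrative only: real predictive instruments use far richer
# models (e.g. recurrent or Transformer models of symbolic music).

def train(melody):
    """Count note-to-note transitions in a sequence of MIDI notes."""
    transitions = defaultdict(list)
    for current, following in zip(melody, melody[1:]):
        transitions[current].append(following)
    return transitions

def continue_melody(transitions, seed_note, length=8):
    """Sample the notes the performer hasn't played (yet!)."""
    notes = [seed_note]
    for _ in range(length):
        options = transitions.get(notes[-1])
        if not options:  # unseen note: just repeat it
            options = [notes[-1]]
        notes.append(random.choice(options))
    return notes

# A short C major melody fragment (hypothetical training data).
melody = [60, 62, 64, 65, 67, 65, 64, 62, 60, 64, 67, 72]
model = train(melody)
print(continue_melody(model, seed_note=60))
```

In a real intelligent instrument, a model along these lines would run in real time, conditioning on the performer's most recent notes and gestures to decide what to play next.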
Current Lab Members
- Minsik Choi (PhD Researcher)
- Yichen Wang (PhD Researcher)
- Xinlei Niu (PhD Researcher)
- Brent Schuetze (Research Assistant)
- Benedikte Wallace (PhD Researcher at University of Oslo)
See also the list of lab alumni and former students under Lab Pages below.
Lab Pages
- Join the SMCClab: How to learn about sound, music and creative computing in the lab.
- Lab alumni and graduated students: Links to research from lab alumni.
- PhD milestone expectations: A guide to completing research degree milestones in the SMCClab.
- SMCClab projects on gesture, collaboration and intelligence: Focussing on four project areas in 2024 and beyond.
- How to get started with research writing: Advice on writing your first research report or thesis.
- How to present a student software project: Turning your code into a presentable artifact.
- SMCClab Projects: Projects from Charles and other members of the lab.