Mission of the Inspiart AI team
Hello everyone. How are you? This is Hyon Kim, an R&D engineer on the Inspiart project.
In my last blog post, I talked about Inspiart's AI team. Today I would like to introduce the team's mission.
The mission of the AI team is the "Automation of the Music Production Process".
Do you know how a song or a CD album is created and released? For example, when recording a band, each part (vocals, guitar, bass, drums) is recorded separately. During this process, recording engineers choose the combination of microphones, instruments, amplifiers, and so on with great care.
After each part is recorded, a process called mixing begins. Here, engineers adjust the level and loudness of each sound source and apply effects to shape the sound in detail. How levels, loudness, and effects are applied is where a mixing engineer's skill shines. Typical effects include reverb, compression, equalization, and panning. Reverb adds a sense of space, and a compressor evens out and limits the dynamics. An equalizer adjusts the balance across the frequency bands, and panning places each instrument within the stereo field.
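To make two of these building blocks concrete, here is a minimal sketch (not our actual implementation) of level adjustment in decibels and constant-power panning, using NumPy and a synthetic sine tone as a stand-in for a recorded part. The function names are my own for illustration.

```python
import numpy as np

def apply_gain(samples: np.ndarray, gain_db: float) -> np.ndarray:
    """Scale a signal by a gain expressed in decibels."""
    return samples * (10.0 ** (gain_db / 20.0))

def pan_mono_to_stereo(samples: np.ndarray, pan: float) -> np.ndarray:
    """Constant-power panning: pan = -1.0 (left) .. 0.0 (center) .. 1.0 (right)."""
    angle = (pan + 1.0) * np.pi / 4.0  # maps pan to 0 .. pi/2
    left = np.cos(angle) * samples
    right = np.sin(angle) * samples
    return np.stack([left, right], axis=-1)

# A 440 Hz sine tone as a stand-in for a recorded part.
sr = 44100
t = np.arange(sr) / sr
tone = np.sin(2 * np.pi * 440 * t)

quieter = apply_gain(tone, -6.0)            # -6 dB roughly halves the amplitude
stereo = pan_mono_to_stereo(quieter, -0.5)  # place the part slightly to the left
```

Constant-power panning keeps the perceived loudness roughly stable as an instrument moves across the stereo field, which is why it is preferred over simple linear crossfading.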
I have introduced the effects only briefly and roughly, but the subject runs deep; I plan to explain each effect in detail in a future blog post. Through this process, the intended sound of the music is realized: based on human perception, a mix can be made to sound warm or crisp, or the vocals can be brought forward. In addition to manipulating levels and volume, mixing also works at the level of frequency bands. The bass and drums that form the foundation of the music gather in the low frequency band, while accompaniment, melody, and vocals tend to occupy the middle and high bands. By controlling each part's sound source and shaping it across the frequency spectrum, you end up with a single master track that combines all the parts.
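The idea of working per frequency band can be sketched with simple Butterworth filters. This is an illustrative toy (the crossover frequencies of 250 Hz and 4 kHz are my own arbitrary choices, not values we use), assuming SciPy is available:

```python
import numpy as np
from scipy.signal import butter, sosfilt

def split_bands(samples, sr, low_cut=250.0, high_cut=4000.0):
    """Split a signal into low / mid / high bands with Butterworth filters."""
    low_sos = butter(4, low_cut, btype="lowpass", fs=sr, output="sos")
    mid_sos = butter(4, [low_cut, high_cut], btype="bandpass", fs=sr, output="sos")
    high_sos = butter(4, high_cut, btype="highpass", fs=sr, output="sos")
    return (sosfilt(low_sos, samples),
            sosfilt(mid_sos, samples),
            sosfilt(high_sos, samples))

sr = 44100
t = np.arange(sr) / sr
# A bass-like 80 Hz tone plus a vocal-range 1 kHz tone, standing in for a mix.
mix = np.sin(2 * np.pi * 80 * t) + 0.5 * np.sin(2 * np.pi * 1000 * t)
low, mid, high = split_bands(mix, sr)
```

After splitting, the 80 Hz "bass" lives almost entirely in the low band and the 1 kHz "vocal" in the mid band, so each can be adjusted independently before being summed back into the mix.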
At the end of music production comes mastering. Mastering adjusts the overall volume across the master track created during mixing and fine-tunes the overall feel of the music. This process, too, is deep and profound, and relies heavily on the engineer's trained ears. At the end of it, the finished version of the song, from which a master record is cut, is complete. Pressing that master onto CDs completes the album.
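One of the simplest mastering-style operations is peak normalization: scaling the whole master so its loudest sample sits at a chosen headroom below full scale. A minimal sketch (real mastering involves far more, such as loudness standards and limiting; the -1 dBFS target here is just a common illustrative choice):

```python
import numpy as np

def normalize_peak(master: np.ndarray, target_db: float = -1.0) -> np.ndarray:
    """Scale the whole master so its loudest peak sits at target_db dBFS."""
    peak = np.max(np.abs(master))
    target = 10.0 ** (target_db / 20.0)
    return master * (target / peak)

# A quiet random signal standing in for a finished master track.
rng = np.random.default_rng(0)
master = 0.2 * rng.standard_normal(44100)
normalized = normalize_peak(master, target_db=-1.0)
```

Because the same scale factor is applied to every sample, the relative balance achieved during mixing is preserved; only the absolute level changes.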
These processes are expensive for most amateur and indie musicians. Wouldn't it be great if there were software that could automate them? Achieving this is the core mission of the Inspiart AI team.
Furthermore, we are applying machine learning techniques to develop a denoising function that removes noise from casually recorded sound sources. Even if musicians cannot go to a recording studio, wouldn't it be great to be able to record clean-sounding music in their own rooms? All of this is based on our vision of providing an environment in which music lovers can freely create and publish their music. We are still partway along this road, trying various methods and moving forward one step at a time.
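To give a feel for what denoising involves, here is a classical spectral-subtraction sketch, not the machine-learning approach our team is developing: it estimates a noise floor from a noise-only clip and subtracts it from each short-time frame. The function name and parameters are my own for illustration.

```python
import numpy as np
from scipy.signal import stft, istft

def spectral_gate(noisy, sr, noise_clip, reduction=1.0):
    """Very simple spectral-subtraction denoiser.

    Estimate the average noise magnitude spectrum from a noise-only clip,
    subtract it from each frame of the noisy signal, and resynthesize.
    """
    _, _, noisy_spec = stft(noisy, fs=sr, nperseg=1024)
    _, _, noise_spec = stft(noise_clip, fs=sr, nperseg=1024)
    noise_floor = np.mean(np.abs(noise_spec), axis=1, keepdims=True)
    mag = np.abs(noisy_spec)
    phase = np.angle(noisy_spec)
    cleaned_mag = np.maximum(mag - reduction * noise_floor, 0.0)
    _, cleaned = istft(cleaned_mag * np.exp(1j * phase), fs=sr, nperseg=1024)
    return cleaned

# A 440 Hz tone buried in white noise, standing in for a room recording.
sr = 16000
t = np.arange(sr) / sr
rng = np.random.default_rng(0)
signal = np.sin(2 * np.pi * 440 * t)
noise = 0.1 * rng.standard_normal(sr)
noisy = signal + noise
cleaned = spectral_gate(noisy, sr, noise_clip=noise)
```

Classical methods like this struggle with non-stationary noise and introduce artifacts, which is part of why learned denoisers are an attractive research direction.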
The team is expanding and we are hiring, so if you are interested, please contact us through the recruitment page.
He graduated from Waseda University with a Bachelor's degree in Applied Mathematics, and from the National University of Singapore with a Master's degree in Mathematics. After completing his studies, he worked as an engineer in Singapore, developing sensor fusion, outlier detection, and object detection modules for autonomous vehicles. After returning to Japan, he worked on projects in speech recognition and anomalous sound detection. His current projects include music generation and sound source adjustment.