
MGEN - Next generation music AI


Hi! I am glad to make my first post on this forum.

I have started a new desktop C++ project aimed at advancing automated/aided music composition and virtual instrument performance, which I believe is a path to a new age in the music world.

Main ideas:
- Create different music composition / analysis / advice algorithms - three algorithms are currently working
- Automatically adapt MIDI files or generated notes for live playback with the best virtual instruments - adaptation for piano, Samplemodeling Brass and Embertone Friedlander Violin currently works
- Develop the framework (visualization, playback to a DAW, ...) - the main functions are currently working

Video introduction (not all features are shown):

How the automatic adaptation currently works:

Project URL:

If you are interested, the project welcomes any type of contribution:
- discussing general ideas
- participating in development of the framework, composition (generation) algorithms, or virtual instrument adaptation algorithms
- bug reports
- feature requests
- testing

If you know someone who might be interested in such a project, please help them find it.

Details on automatic music adaptation for virtual instruments

One of the project's goals is to develop robust and expressive algorithms that can automatically adapt a music score to a virtual instrument.

This approach can be used for several interesting purposes:

1. A composer will be able to automatically process the score and immediately listen to an approximation of a performance that is closer to the best capabilities of a virtual instrument. The composer will not have to manually draw all CCs, choose legato transition types, move notes, randomize, etc., and will not have to play the piece live with a breath or wind controller to get a first demo. Afterwards the composer can tune some parts manually or record them.

P.S. I think playing with a wind or breath controller, or manually tuning the MIDI data, will generally give a better result (just as a live performance is generally better than a virtual instrument), but it takes incomparably more time and resources than simply running an adaptation algorithm.

Here is an example of this usage for Samplemodeling Trumpet, with details on the adaptation algorithms I have already implemented:

2. Music generators can now play directly to virtual instruments without having to worry about presenting the information correctly so that it sounds realistic. The MGen algorithm is near-realtime, which means that generated music is almost instantly adapted and played by the virtual instrument.

This is already implemented in MGen.

3. In the future this approach could be used for realistic real-time playback of music directly from Sibelius or Finale. Composers would be able to hear a much more realistic sound without messing with piano rolls or advanced tuning, and without leaving the notation software. This needs some additional ideas to be implemented, but I think it is possible.

So far, adaptation algorithms for all Samplemodeling Brass instruments have been implemented (most detailed for SM Trumpet).

I created a list of approaches and have implemented about half of them (those I considered most effective), but there is definitely a lot of room for further improvement.

Currently the algorithm is implemented as a standalone Windows C++ program.
All CCs and keyswitches are automatically generated from scratch.
The current version of the algorithm works like this:

- Import a MIDI file (CCs are not imported - they are generated from note velocities and note on/note off events). This takes about 1 second for medium-sized files.
- Adapt: create CCs, decide how fast the legato transitions should be, generate legato velocities, move notes, and add keyswitches to all tracks (about 1 more second).
- Start playback to the DAW instantly after adaptation finishes, using a virtual MIDI driver.
- You can record it in the DAW or just listen.

I think this algorithm can be ported to other types of programs once it is finished.

Why are only note velocities and note on/note off events used as input to the algorithm?
Answer: this is how a MIDI file is usually exported from notation software (Sibelius, Finale, etc.).
The composer exports a MIDI file from the notation software, imports it into MGen, and listens to / records the result.

Regarding further development of adaptation for virtual instruments, I see the following interesting paths:

- Do more testing of the implemented algorithms and make them suitable for more types of music. Tune algorithm parameters.
- Implement new algorithms. Support more virtual instruments.
- Export the adapted MIDI file (this is easy).
- Port the software to Mac and Linux as well (much work).
- Import MusicXML instead of MIDI files, and use the additional data to adapt the score: hairpins (in MIDI files they are discretized into velocities), expression marks, articulation and technique marks (much work).
- Support playback directly from the notation program, so the composer does not need to export to hear the result of his work (much work).

I believe the current prototype of MGen already does some of the most time-consuming work of adapting a music score to virtual instruments. The remaining jobs (selecting articulations and techniques) can be done manually and usually do not take as much time as drawing or recording CCs, moving transitions, adding transition keyswitches, and randomizing, which MGen handles. But of course the adaptation is not yet tuned for all possible types of music.

Welcome to the site Alexey!
Glad to see you posting about your very cool project.  :Thmbsup:

Wow...that's slick.  Well done!

