
Author Topic: MGEN - Next generation music AI  (Read 10031 times)

rualark

  • Member
  • Joined in 2017
  • Posts: 1
MGEN - Next generation music AI
« on: May 09, 2017, 01:21 PM »
Hi! I am glad to make my first post on this forum.

I have started a new desktop C++ project aimed at advancing automated/aided music composition and virtual instrument performance, which (I believe) is a path to a new age in the music world.

Main ideas:
- Create different music composition / analysis / advice algorithms - three algorithms are currently working
- Automatically adapt MIDI files or generated notes for live playback with the best virtual instruments - adaptation for piano, Samplemodeling Brass and the Embertone Friedlander Violin is currently working
- Develop the framework (visualization, playback to a DAW, ...) - the main functions are currently working (a rough sketch of these layers follows below)
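
To give an idea of how I think about the framework, here is a rough sketch of the three layers (generation, adaptation, output) expressed as C++ interfaces. The names and data structures are simplified and hypothetical - the real classes in the repository are organized differently - but the flow is the same: an algorithm generates notes, an adapter reshapes them for a specific virtual instrument, and the output layer plays them to a DAW.

    // Rough sketch of the three layers: generation -> adaptation -> output.
    // All names here (Note, GenerationAlgorithm, InstrumentAdapter, MidiSink)
    // are simplified and hypothetical; the real MGen classes are different.

    #include <vector>

    struct Note {
        double start_sec;   // note-on time
        double dur_sec;     // note length
        int    pitch;       // MIDI pitch 0..127
        int    velocity;    // MIDI velocity 1..127
    };

    struct CcEvent { double time_sec; int cc; int value; };

    struct AdaptedTrack {
        std::vector<Note>    notes;        // possibly moved in time
        std::vector<CcEvent> cc;           // generated controller curves
        std::vector<Note>    keyswitches;  // notes that select articulations
    };

    // A composition/analysis/advice algorithm produces plain notes.
    class GenerationAlgorithm {
    public:
        virtual ~GenerationAlgorithm() = default;
        virtual std::vector<Note> Generate() = 0;
    };

    // An adapter reshapes plain notes for one specific virtual instrument
    // (piano, Samplemodeling Brass, Embertone Friedlander Violin, ...).
    class InstrumentAdapter {
    public:
        virtual ~InstrumentAdapter() = default;
        virtual AdaptedTrack Adapt(const std::vector<Note>& notes) = 0;
    };

    // Output layer: send the adapted track to a DAW through a virtual MIDI port.
    class MidiSink {
    public:
        virtual ~MidiSink() = default;
        virtual void Play(const AdaptedTrack& track) = 0;
    };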

Video introduction (not all features are shown):

How automatic adaptation currently works:


Project url: https://github.com/rualark/MGen

If you are interested, the project welcomes contributions of any kind:
- discussing general ideas
- participating in development of the framework, the composition (generation) algorithms, or the virtual instrument adaptation algorithms
- bug reports
- feature requests
- testing

If you know someone who might be interested in such a project, please help them find it.

Details on automatic music adaptation for virtual instruments
One of the project's goals is to develop robust and expressive algorithms that can automatically adapt a music score to a virtual instrument.

This approach can be used for several interesting purposes:

1. A composer will be able to automatically process the score and immediately listen to an approximation of a performance that is closer to the best capabilities of a virtual instrument. The composer will not have to manually draw all CCs, choose legato transition types, move notes, randomize, etc. (a toy illustration of the CC part follows below). The composer will not have to play the piece live with a breath or wind controller to get a first demo. After that, the composer can tune some parts manually or record them.

P.S. I think playing with a wind or breath controller or manually tuning the MIDI information will generally give a better result (just as a live performance is generally better than a virtual instrument), but it takes incomparably more time and resources than just running an adaptation algorithm.

Here is an example of this usage for the Samplemodeling Trumpet, with details on the adaptation algorithms I have already implemented: https://www.youtube.com/watch?v=91ePSlmW-gQ
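
To make point 1 more concrete, the simplest illustration of "the composer does not have to draw CCs" is turning the note velocities that notation software already exports into a continuous expression (CC11) curve. The sketch below is a deliberately naive linear ramp between note dynamics - it is not the algorithm MGen uses, just the basic idea:

    // Naive illustration: build a CC11 (expression) curve from note velocities.
    // This is NOT the MGen algorithm, just the basic idea: the dynamics the
    // composer wrote (as velocities) become a continuous controller curve,
    // so nothing has to be drawn by hand.

    #include <algorithm>
    #include <cmath>
    #include <cstddef>
    #include <vector>

    struct Note { double start_sec, dur_sec; int pitch, velocity; };
    struct CcEvent { double time_sec; int cc, value; };

    std::vector<CcEvent> VelocityToExpression(const std::vector<Note>& notes,
                                              double step_sec = 0.02) {
        std::vector<CcEvent> out;
        for (std::size_t i = 0; i < notes.size(); ++i) {
            int from = notes[i].velocity;
            // Ramp toward the next note's dynamic so phrases breathe a little.
            int to = (i + 1 < notes.size()) ? notes[i + 1].velocity : from;
            int steps = std::max(1, static_cast<int>(notes[i].dur_sec / step_sec));
            for (int s = 0; s < steps; ++s) {
                double t = static_cast<double>(s) / steps;
                int value = static_cast<int>(std::lround(from + (to - from) * t));
                out.push_back({notes[i].start_sec + s * step_sec, 11,
                               std::clamp(value, 0, 127)});
            }
        }
        return out;
    }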

2. Music generators are now able to play directly to virtual instruments without having to think about how to present the information so that the virtual instrument sounds realistic. The MGen algorithm is near-realtime, which means that generated music is almost instantly adapted and played by the virtual instrument (a minimal example of sending MIDI to a virtual port follows below).

This is already implemented in MGen.
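
For anyone curious what "playing directly to a virtual instrument" looks like technically: on Windows the adapted events go out through a virtual MIDI driver (such as loopMIDI) that the DAW listens to. Below is a minimal stand-alone example that sends a CC, a note on and a note off to a MIDI output port with the standard winmm API. It only demonstrates the mechanism; MGen's own playback code is more involved.

    // Minimal example: send a few MIDI events to an output port on Windows
    // (for instance a loopMIDI virtual port that a DAW listens to).
    // Demonstrates the mechanism only; MGen's playback code is more involved.
    // Build: cl /EHsc send_midi.cpp winmm.lib

    #include <windows.h>
    #include <mmsystem.h>
    #include <cstdio>

    // Pack a 3-byte channel message into the DWORD that midiOutShortMsg expects.
    static DWORD Msg(BYTE status, BYTE data1, BYTE data2) {
        return status | (data1 << 8) | (data2 << 16);
    }

    int main() {
        HMIDIOUT port = nullptr;
        // Open the first MIDI output device; a real program would let the
        // user pick the virtual port by name (midiOutGetDevCaps).
        if (midiOutOpen(&port, 0, 0, 0, CALLBACK_NULL) != MMSYSERR_NOERROR) {
            std::printf("Could not open MIDI output device 0\n");
            return 1;
        }
        midiOutShortMsg(port, Msg(0xB0, 11, 90));   // CC11 (expression), channel 1
        midiOutShortMsg(port, Msg(0x90, 60, 100));  // note on: middle C, velocity 100
        Sleep(500);                                 // hold the note for 0.5 s
        midiOutShortMsg(port, Msg(0x80, 60, 0));    // note off
        midiOutClose(port);
        return 0;
    }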

3. In the future this approach could be used for realistic real-time playback of music directly from Sibelius or Finale. This means that composers would be able to hear a much more realistic sound without messing with piano rolls or advanced tuning, and without leaving the notation software. This needs some additional ideas to be implemented, but I think it is possible.

So far, adaptation algorithms for all Samplemodeling Brass instruments are implemented (in most detail for the SM Trumpet).

I created a list of approaches and implemented about half of them (the ones I considered most effective), but there is definitely a lot of room for further improvement.

Currently the algorithm is implemented as a standalone Windows C++ program.
All CCs and keyswitches are automatically generated from scratch.
The current version of the algorithm works like this:

- Import a MIDI file (CCs are not imported - they are generated from note velocities and note on/note off events) - this takes about 1 second for medium-sized files
- Adapt: create CCs, decide how fast the legato transitions should be, generate legato velocities, move notes, add keyswitches to all tracks (this takes about 1 more second; a toy sketch of the legato decision follows this list)
- Start playback to the DAW instantly after the adaptation is finished, using a virtual MIDI driver
- You can record it in the DAW or just listen
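
As a toy illustration of the "decide how fast the legato transitions should be" step: many sampled instruments switch between slower and faster legato transitions based on the note-on velocity of the incoming note. A simplified version of that decision could look like this (the thresholds are made up for the example and are not the values MGen uses):

    // Toy illustration of one adaptation decision: pick a legato transition
    // speed for each note from the gap to the previous note and the interval
    // size, then encode it as the note-on velocity (a common convention in
    // sampled instruments). The thresholds are invented for this example.

    #include <cstddef>
    #include <cstdlib>
    #include <vector>

    struct Note { double start_sec, dur_sec; int pitch, velocity; };

    enum class LegatoSpeed { Fast, Medium, Slow };

    LegatoSpeed ChooseLegatoSpeed(const Note& prev, const Note& next) {
        double gap = next.start_sec - (prev.start_sec + prev.dur_sec);
        int interval = std::abs(next.pitch - prev.pitch);
        if (gap > 0.15) return LegatoSpeed::Slow;      // detached: relaxed transition
        if (interval >= 7) return LegatoSpeed::Medium; // big leap: give it time
        return LegatoSpeed::Fast;                      // tight stepwise legato
    }

    // Map the decision onto note-on velocity ranges (instrument-specific).
    void ApplyLegatoVelocities(std::vector<Note>& notes) {
        for (std::size_t i = 1; i < notes.size(); ++i) {
            switch (ChooseLegatoSpeed(notes[i - 1], notes[i])) {
                case LegatoSpeed::Fast:   notes[i].velocity = 110; break;
                case LegatoSpeed::Medium: notes[i].velocity = 70;  break;
                case LegatoSpeed::Slow:   notes[i].velocity = 30;  break;
            }
        }
    }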

I think this algorithm can be ported to some other types of programs once it is finished.

Why are only note velocities and note on/note off events used as input to the algorithm?
Answer: this is the way a MIDI file is usually exported from notation software (Sibelius, Finale, etc.).
The composer exports a MIDI file from the notation software, imports it into MGen, and listens to/records the result.

Regarding further development of adaptation for virtual instruments, I see the following interesting paths:

- Do more testing of implemented algorithms and make them appropriate for more types of music. Tune algorithm parameters.
- Implement new algorithms. Support more virtual instruments.
- Export the adapted MIDI file (this is easy)
- Port the software to Mac and Linux (much work)
- Import MusicXML instead of MIDI files and use the additional data to adapt the score: hairpins (in MIDI files they are discretized into velocities), expression marks, articulation and technique marks (much work; a minimal parsing sketch follows this list)
- Support playback directly from the notation program, so the composer does not need to export just to listen to the result of their work (much work)
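
For the MusicXML item above, here is a rough idea of how reading notes from an uncompressed .musicxml file could start, using the tinyxml2 library. It only pulls pitch and duration; handling hairpins, expression marks and articulations is the real (and much larger) part of the work:

    // Rough sketch: read note pitches and durations from an uncompressed
    // MusicXML file using tinyxml2 (https://github.com/leethomason/tinyxml2).
    // Only the bare minimum is shown; the hairpins, expression marks and
    // articulations are the part that would actually matter for adaptation.

    #include <cstdio>
    #include "tinyxml2.h"

    int main() {
        tinyxml2::XMLDocument doc;
        if (doc.LoadFile("score.musicxml") != tinyxml2::XML_SUCCESS) {
            std::printf("Could not load score.musicxml\n");
            return 1;
        }
        auto* root = doc.FirstChildElement("score-partwise");
        auto* part = root ? root->FirstChildElement("part") : nullptr;
        for (auto* measure = part ? part->FirstChildElement("measure") : nullptr;
             measure; measure = measure->NextSiblingElement("measure")) {
            for (auto* note = measure->FirstChildElement("note"); note;
                 note = note->NextSiblingElement("note")) {
                auto* pitch = note->FirstChildElement("pitch");
                auto* dur = note->FirstChildElement("duration");
                if (!pitch || !dur) continue;  // skip rests and other elements
                const char* step = pitch->FirstChildElement("step")->GetText();
                int octave = pitch->FirstChildElement("octave")->IntText();
                std::printf("%s%d, duration %d divisions\n",
                            step, octave, dur->IntText());
            }
        }
        return 0;
    }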

I believe that the current prototype of MGen does some of the most time-consuming work of adapting a music score to virtual instruments. Other jobs (selecting articulations and techniques) can be done manually and usually do not take as much time as drawing or recording CCs, moving transitions, adding transition keyswitches and randomizing, which MGen does. But of course the adaptation is not yet tuned for all possible types of music.


mouser

  • First Author
  • Administrator
  • Joined in 2005
  • Posts: 40,914
Re: MGEN - Next generation music AI
« Reply #1 on: May 09, 2017, 01:22 PM »
Welcome to the site Alexey!
Glad to see you posting about your very cool project.  :Thmbsup:

skwire

  • Global Moderator
  • Joined in 2005
  • Posts: 5,287
Re: MGEN - Next generation music AI
« Reply #2 on: May 09, 2017, 01:43 PM »
Wow...that's slick.  Well done!