
Add support for linked modulators #505

Closed
wants to merge 202 commits into from

Conversation

derselbst
Member

@derselbst derselbst commented Feb 1, 2019

Implements and closes #497.

TODO:

  • set up unit tests
  • resolve the code duplication between fluid_defpreset_noteon_add_mod_to_voice and fluid_defpreset_noteon_add_linked_mod_to_voice
  • make format

jjceresa added 24 commits January 1, 2019 12:11
Adding new modulator enum and macros:
- add FLUID_MOD_LINK_SRC src1 in enum fluid_mod_src.
- add macros FLUID_SFMOD_LINK_DEST and FLUID_MOD_LINK_DEST.
- import the link bit of the destination field.
- check the FLUID_MOD_LINK_SRC source.
- add fluid_mod_has_linked_src1() function.
- update fluid_dump_modulator().
1) Separate modulator checking from modulator removal.
   This is a requirement before reading linked modulators, to
   keep destination indexes intact. This leads to new functions:
   - fluid_zone_check_linked_mod() to check all modulators.
   - fluid_zone_check_remove_mod() to remove invalid modulators.
   Later, getting linked modulators will be incorporated between
   these functions.
2) Return status FLUID_OK from fluid_zone_check_mod().
- incorporate full linked modulator checking inside fluid_zone_check_linked_mod().
  This is a requirement before reading linked modulators, to ensure valid
  modulators at synthesis time.
- the status returned by fluid_zone_check_linked_mod() is completed.
- test the status in fluid_zone_check_mod().
- add a new modulator list linked_mod in zones (preset/instrument, global/local).
  For performance reasons at synthesis time, linked modulators will be extracted from the
  modulator list mod into this separate list linked_mod.
- initialize linked_mod to NULL in new_fluid_preset_zone(), new_fluid_inst_zone().
- free linked_mod in delete_fluid_preset_zone(), delete_fluid_inst_zone().
- add a linked_mod parameter to fluid_zone_check_mod().
- add a linked_mod parameter to fluid_zone_mod_import_sfont().
- complete the comments for fluid_zone_check_mod().
- add new function fluid_zone_copy_linked_mod().
- call this function in fluid_zone_check_mod().
- remove invalid linked modulators in fluid_zone_check_remove_mod().
- add new function fluid_get_num_mod().
- limit the size of the linked modulator list.
- Adding the link input field.
- Reading the link input in fluid_mod_get_value().
- Add a linked modulator in fluid_voice_add_mod_local(), mode FLUID_VOICE_DEFAULT.
  The same code is used to add an unlinked modulator or a complex linked
  modulator to the voice.
This initializes the modulators' link input and the generators' mod input at noteon time.
Taking linked modulators into account inside fluid_voice_get_lower_boundary_for_attenuation() at
noteon time.
Taking linked modulators into account during modulation on CC change.
This function allows getting consecutive complex linked modulators from a list.
This function is used to test the identity of complex modulators.
This adds preset linked modulators to the voice.
- Add a linked modulator in fluid_voice_add_mod_local(), mode FLUID_VOICE_ADD.
  Specific code is used to add an unlinked or a complex linked modulator to the voice.
Filter parameters allow displaying the modulators of the instrument zone and preset zone
corresponding to the filter names.
- Minor change in the string displayed by fluid_print_voice_mod().
- The preset zone name is prefixed by pz:
- The instrument zone name is prefixed by iz:
  The instrument name should not include the preset zone name.
- Adding instrument linked modulators to the voice by calling the
  fluid_defpreset_noteon_add_linked_mod_to_voice() function at noteon time.
@derselbst derselbst added this to the 2.1 milestone Feb 1, 2019
@derselbst
Member Author

derselbst commented Oct 16, 2019

There is a code duplication of fluid_defpreset_noteon_add_mod_to_voice and fluid_defpreset_noteon_add_linked_mod_to_voice. The only real difference I can see is the call to fluid_mod_test_linked_identity(). The other differences are removals that are just cosmetic issues, see below.

static void
fluid_defpreset_noteon_add_mod_to_voice(fluid_voice_t *voice,
                                        fluid_mod_t *global_mod, fluid_mod_t *local_mod,
                                        int mode)
{
    fluid_mod_t *mod;
    /* list for 'sorting' global/local modulators */
    fluid_mod_t *mod_list[FLUID_NUM_MOD];
    int mod_list_count, i;

    /* identity_limit_count is the upper limit on the number of existing
     * identical modulators to check against.
     * When identity_limit_count is below the actual number of modulators, this
     * restricts the identity check to this upper limit.
     * This is useful when we know in advance that there are no duplicates among
     * modulators at indexes above this limit. This avoids wasting CPU cycles at
     * noteon.
     */
    int identity_limit_count;

    /* Step 1: Local modulators replace identical global modulators. */

    /* local (instrument zone/preset zone), modulators: Put them all into a list. */
    mod_list_count = 0;

    while(local_mod)
    {
        /* As the number of modulators in the local_mod list was limited to
           FLUID_NUM_MOD at soundfont loading time (fluid_limit_mod_list()),
           we don't need to check here whether mod_list is full.
         */
        mod_list[mod_list_count++] = local_mod;
-        local_mod = local_mod->next;
+        local_mod = fluid_mod_get_next(local_mod);
    }

    /* global (instrument zone/preset zone), modulators.
     * Replace modulators with the same definition in the global list:
     * (Instrument zone: SF 2.01 page 69, 'bullet' 8)
     * (Preset zone:     SF 2.01 page 69, second-last bullet).
     *
     * mod_list contains local modulators. Now we know that there
     * is no global modulator identical to another global modulator (this has
     * been checked at soundfont loading time). So global modulators
     * are only checked against the local modulators.
     */

    /* Restrict identity check to the number of local modulators */
    identity_limit_count = mod_list_count;

    while(global_mod)
    {
        /* 'Identical' global modulators are ignored.
         *  SF2.01 section 9.5.1
         *  page 69, 'bullet' 3 defines 'identical'.  */

        for(i = 0; i < identity_limit_count; i++)
        {
-            if(fluid_mod_test_identity(global_mod, mod_list[i]))
+            if(fluid_mod_test_linked_identity(global_mod, mod_list[i], FLUID_LINKED_MOD_TEST_ONLY))
            {
                break;
            }
        }

        /* Finally add the new modulator to the list. */
        if(i >= identity_limit_count)
        {
            /* Although the local_mod and global_mod lists were limited to
               FLUID_NUM_MOD at soundfont loading time, it is possible that
               local + global modulators exceed FLUID_NUM_MOD.
               So, check whether mod_list_count has reached the limit.
            */
-            if(mod_list_count >= FLUID_NUM_MOD)
-            {
                /* mod_list is full, we silently forget this modulator and
                   the following global modulators. When mod_list is added to the
                   voice, a warning will be displayed if the voice list is full
                   (see fluid_voice_add_mod_local()).
                */
-                break;
-            }

            mod_list[mod_list_count++] = global_mod;
        }

-        global_mod = global_mod->next;
+        global_mod = fluid_mod_get_next(global_mod);
    }

    /* Step 2: global + local modulators are added to the voice using mode. */

    /*
     * mod_list contains local and global modulators. We know that:
     * - there is no global modulator identical to another global modulator,
     * - there is no local modulator identical to another local modulator.
     * So these local/global modulators are only checked against the
     * actual number of voice modulators.
     */

    /* Restrict identity check to the actual number of voice modulators */
    /* Actual number of voice modulators: defaults + [instruments] */
    identity_limit_count = voice->mod_count;

    for(i = 0; i < mod_list_count; i++)
    {

        mod = mod_list[i];
        /* in mode FLUID_VOICE_OVERWRITE disabled instrument modulators CANNOT be skipped. */
        /* in mode FLUID_VOICE_ADD disabled preset modulators can be skipped. */

-        if((mode == FLUID_VOICE_OVERWRITE) || (mod->amount != 0))
        {
            /* Instrument modulators -supersede- existing (default) modulators.
               SF 2.01 page 69, 'bullet' 6 */

            /* Preset modulators -add- to existing instrument modulators.
               SF2.01 page 70 first bullet on page */
            fluid_voice_add_mod_local(voice, mod, mode, identity_limit_count);
        }
    }
}

@jjceresa
Collaborator

There is a code duplication of fluid_defpreset_noteon_add_mod_to_voice and fluid_defpreset_noteon_add_linked_mod_to_voice.

Yes, I am aware of this duplication and will review this.

@mawe42
Member

mawe42 commented Oct 17, 2019

While compiling this branch I noticed that gcc complains about a possible uninitialized variable (only with release type RelWithDebInfo):

fluidsynth/src/synth/fluid_mod.c: In function ‘fluid_mod_copy_linked_mod’:
fluidsynth/src/synth/fluid_mod.c:1573:36: warning: ‘mod_cpy’ may be used uninitialized in this function [-Wmaybe-uninitialized]
                     last_mod->next = mod_cpy;
                     ~~~~~~~~~~~~~~~^~~~~~~~~

@mawe42
Member

mawe42 commented Oct 17, 2019

Thanks a lot to both of you for working on this, I think it will add some great new expressive capabilities to Fluidsynth!

I'm now trying to get my head around these linked modulators and to understand what we could do with them musically. I'll try to explain it to myself here... maybe you can tell me if I got it right or wrong :-)

The Soundfont spec only has a very short section on linked modulators, but there is also a picture that explains the concept (shown here along with the depiction of the modulator structure on the right):
[Image: sf24spec-mod-chaining — SF2.4 spec diagram of modulator chaining and the modulator structure]

So for the example in the picture on the left: for both mod 1 and mod 2, the output value is calculated by taking both inputs (src1 and src2) in native units, using the normalisation stage to convert them into the [-1;1] range, multiplying them with the "amount" in destination units, passing the result through the output transform (which we don't currently support) and then on to the summing node. The resulting value is then used as input to the normalisation stage of mod 3.
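
To spell that pipeline out in code (just a minimal sketch to check my understanding; all names are made up and this is not the code in this branch):

/* Sketch of the per-modulator computation described above.
 * map1/map2 stand for the normalisation + mapping curve of each source. */
typedef double (*map_fn)(double raw_value);   /* native units -> [-1;1] */

static double mod_output(double src1_raw, map_fn map1,
                         double src2_raw, map_fn map2,
                         double amount)
{
    double v1 = map1(src1_raw);   /* e.g. CC value 0..127 mapped into [-1;1] */
    double v2 = map2(src2_raw);

    return v1 * v2 * amount;      /* result is in destination (generator) units */
}

For mod 3 the question then becomes what its src1 input is when it comes from the link: the raw sum of mod 1's and mod 2's outputs (in destination units), or something re-normalised to [-1;1].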

As mod 1 and mod 2 do not target a generator, they don't have direct destination units. But I guess it makes sense that all modulators in the graph share the same destination units: the units of the generator that the final modulator targets.

Now to the thing that tripped me up: I think the current implementation skips the input value normalisation stage for values coming from linked modulators. This is problematic because:

  • we lose expressiveness because the final modulator (mod 3) cannot make use of input mapping curves (unipolar positive, bipolar negative etc)
  • the values coming from the linked modulators are already multiplied with the amount in destination units, meaning they might be outside of the normalised [-1;1] range.

If that really is the case and I haven't read the code wrong, then I would propose that we change the behaviour so that the input normalisation for values coming from linked modulators is not skipped. That means we have to have some way of knowing the min/max values to map them to the [-1;1] space. For that I see two possible solutions:

  1. Limit the "amount" field of modulators that target another modulator to the [-1;1] range
  2. Use the min/max range of the destination generator

Solution 2. seems to be more intuitive, especially when I put my Soundfont designer hat on. But I'm not sure if there are natural min/max ranges for all destination generators. The linked modulator concept as described in the Soundfont spec seems like it hasn't been fully thought through... or maybe I simply didn't understand it properly.
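
To illustrate what solution 2 could look like (purely hypothetical; how the generator range is obtained, and whether plain scaling is the right mapping, are exactly the open questions):

#include <math.h>

/* Hypothetical sketch of solution 2: bring a link value (in destination units)
 * back into [-1;1] by scaling it against the destination generator's range,
 * then clamp. Not FluidSynth API, just an illustration of the idea. */
static double normalise_link_value(double link_value, double gen_min, double gen_max)
{
    double bound = fmax(fabs(gen_min), fabs(gen_max));
    double v = (bound > 0.0) ? (link_value / bound) : 0.0;

    if(v > 1.0)  { v = 1.0; }
    if(v < -1.0) { v = -1.0; }

    return v;
}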

Sorry, very long post. But it's a really interesting but quite complicated beast :-)

@mawe42
Member

mawe42 commented Oct 17, 2019

One more comment: unless I missed something in the spec, linked modulators are a little "under-specified", meaning that there is no single correct way to implement them. So what we're adding now is the "fluidsynth way", but there might be other and different implementations as well. Not necessarily a bad thing; I don't think there are other SF synths with support for linked modulators out there, so we could be the "leader of the pack" here :-) But it is something we should be aware of when announcing this feature, I think.

@derselbst
Member Author

I think the current implementation skips the input value normalisation stage for values coming from linked modulators.

You are correct:

if(fluid_mod_has_linked_src1(mod))
{
    /* src1 link source isn't mapped (i.e. transformed) */
    v1 = mod->link;
}

The reason is, like you already said, that we don't have min/max ranges to perform normalization.

Just to make it clear: You are now suggesting to pull that absolute non-normalized src1 input through all linked modulators until we come to the very last modulator which is linked to a destination generator, and then performing normalization according to the min/max of section 8.1.3 Generator Summary? In other words: What if modulator3 in the example had a descendant linked modulator4? How to treat the src1 input received by modulator3?

@mawe42
Member

mawe42 commented Oct 18, 2019

Just to make it clear: You are now suggesting to pull that absolute non-normalized src1 input through all linked modulators until we come to the very last modulator which is linked to a destination generator, and then performing normalization according to the min/max of section 8.1.3 Generator Summary?

Not quite... my proposal 2 meant that we keep the implementation as suggested in the picture. In other words, that each modulator runs through all stages, including output multiplication with "amount" in destination units and input normalisation back to [-1,1]. And that the min/max ranges for all input normalisation are taken from the generator that is the target of the whole graph.

But like I said, I'm not sure that is even possible for all generators. For example, what max value would we use for the SampleOffset generators? Maybe it would be possible to analyse the linked modulators to figure out their min/max values programmatically from the configuration of the modulator?
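
If we went down the "analyse the chain" route, one observation might help (a sketch under the assumption that the link value is the raw sum of the feeding modulators' outputs, which is how I read the current code):

#include <math.h>

/* Since each feeding modulator's normalised sources stay in [-1;1], its output
 * is bounded by |amount|. If the link input is the raw sum of those outputs,
 * its bound is the sum of the feeding amounts. Illustrative only, not
 * FluidSynth API. */
static double link_input_abs_bound(const double *feeding_amounts, int n)
{
    double bound = 0.0;
    int i;

    for(i = 0; i < n; i++)
    {
        bound += fabs(feeding_amounts[i]);   /* |v1 * v2 * amount| <= |amount| */
    }

    return bound;   /* could serve as the "max" when re-normalising the link input */
}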

The other option would be to skip the input normalisation stage (maybe really just normalisation but keep the mapping curves) for all modulators in the graph that take input from other modulators. But that would also require that we ignore the "amount" field for all but the last modulator, or force the amount to be in the [-1;1] range. In other words, the destination units for linked modulators would be [-1,1].

@derselbst
Member Author

And that the min/max ranges for all input normalisation are taken from the generator that is the target of the whole graph.

Min/max are usually defined by the source controllers. Doing it now the other way around just because we are dealing with linked modulators feels very unnatural and artificial to me.

For example, what max value would we use for the SampleOffset generators?

Defined by the sample itself. And I'm afraid introducing a unique fluid_mod_t to fluid_voice_t to fluid_sample_t dependency is really challenging.

Maybe it would be possible to analyse the linked modulators to figure out their min/max values programatically from the configuration of the modulator?

We could do anything, but after all, it must still be comprehensible to the soundfont designer how that complex modulator behaves. And walking through all members to find out the min/max (which I believe the designer doesn't even know himself) would be counterproductive, I believe.


I think what we need to make design decisions here are one or two real-world examples of how complex modulators can be used. Something useful that cannot be achieved with simple modulators.

@mawe42
Member

mawe42 commented Oct 18, 2019

Min/max are usually defined by the source controllers. Doing it now the other way around just because we are dealing with linked modulators feels very unnatural and artificial to me.

Yes, it is definitely weird. But the whole linked modulators thing is weird from the start and doesn't really fit the rest of the modulator system design.

Maybe the most straightforward and easiest-to-understand approach would be to state that the "destination units" of modulators that target another modulator are in the [-1,1] range, and have linked modulators apply the input mapping curves but leave the range untouched (just limit the values to [-1,1] to be on the safe side).
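
In code that would roughly mean something like this (a sketch only; src1_map stands for whatever mapping curve the receiving modulator declares for its src1, none of this is existing API):

/* Sketch: treat a value arriving over a link as already being in [-1;1]
 * "destination units": clamp it to that range, then still apply the mapping
 * curve of the receiving modulator's src1. Hypothetical, not this branch. */
typedef double (*map_fn)(double normalised_value);   /* [-1;1] -> [-1;1] */

static double linked_src1_input(double link_value, map_fn src1_map)
{
    if(link_value > 1.0)  { link_value = 1.0; }
    if(link_value < -1.0) { link_value = -1.0; }

    return src1_map(link_value);
}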

I think what we need to make design decisions here are one or two real-world examples of how complex modulators can be used. Something useful that cannot be achieved with simple modulators.

Yes, that would be good. I'm still struggling to come up with a good use-case.

@mawe42
Member

mawe42 commented Oct 20, 2019

Yes, that would be good. I'm still struggling to come up with a good use-case.

Thinking more about this, I think the way linked modulators are (under-)specified means there really is nothing they add to the expressiveness of a Soundfont synth. The only thing they add is what JJC has already mentioned: you could create multiple modulators that target the same generator and switch them all on/off with a secondary source on the final modulator. But as he also said: you can also do that without linked modulators. It's just a few more clicks.

So to be honest... I'm not convinced that that is an important enough use case to justify the added code complexity and possible performance impact that this feature might add.

But maybe @jjceresa can think of something that I've missed that might justify it anyway?

@derselbst
Member Author

So to be honest... I'm not convinced that that is an important enough use case to justify the added code complexity and possible performance impact that this feature might add.

Given all the new questions that arose while working on this, I agree. I think it would make sense to unassign it from version 2.1. If need be, we can continue work at any time and eventually ship it with a later release.

@jjceresa
Collaborator

we lose expressiveness because the final modulator (mod 3) cannot make use of input mapping curves (unipolar positive, bipolar negative etc)...

Not really. Expressiveness comes from the CC sources connected to mod1 and mod2 with their respective mapping curves. This expressiveness is still present at mod3's link input, and the resulting value (at the link input) shouldn't be mapped again, because this new mapping could break the one done before. It would quickly become impossible for a designer to predict the result of mapx(...map3(map2(map1(CC sources)))). (As already said, all of this must remain comprehensible for the soundfont designer.) As any data at the output of a modulator (i.e. mod1, mod2) is expressed in destination units (i.e. those of the final generator), the resulting value at the link input should still be considered to be in destination units. In fact, mod3's link input node behaves similarly to the summing input node of this generator.
That means that mod1's and mod2's destinations could be connected directly to the final generator instead of passing through another modulator.

That said "when could we need to pass the output of a well designed modulator mx through another modulator my before reaching the ending generator ?". The first answer could be "each time we need to control the effect of this modulator mx to the generator without impacting the modulator mx itself".

For example, this case occurs at performance time when the musician wants to select a "sound articulation" among others, on the fly while playing (without needing to select another preset). (These "sound articulations" will be called "effects" here.)
Let {e1, e2} be the set of "effects" for the currently selected preset.
Each "effect" is represented by one or more modulators, but for simplicity we assume here that there is only one modulator per effect (i.e. m1 for e1 and m2 for e2).
Each effect is controlled on input by the musician through a set of foot CCs. For simplicity we assume that each effect is controlled by only one foot CC (on the mod's src1) and a rotary knob (on the mod's src2). The goal of the rotary knob is to set a variation of the effect before starting to play, and this setting is independent of the foot CC.

The musician has a palette of 2 effects (e1, e2) and one foot CC, and wants to be able to play one effect at a time with the same foot CC. While the musician plays with his hands he can also use the foot CC, which drives the currently selected effect (e.g. e1). While continuing to play with one hand, he uses the other hand to select another effect (e2); effect e1 is then substituted by effect e2 (i.e. m1 is disabled and m2 is enabled).

The effect selection logic can use a linked modulator me1 inserted between m1's output and the relevant generator. Another linked modulator me2 is inserted between m2's output and the relevant generator. Then me1's src2 and me2's src2 inputs are fed in opposite polarity by the appropriate selection CC.

This example shows that the m1 and m2 modulators both need to be controlled without modifying any of their respective sources (src1, src2). Of course only the destination field needs to be changed at design time.
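
To make the wiring concrete, here is a rough sketch using the public modulator API (in a real soundfont these modulators would be stored in the preset rather than created through the API; CC numbers and GEN_ATTENUATION are arbitrary examples, and the way a destination points at another modulator's link input is only an assumed convention of this branch):

#include <fluidsynth.h>

/* Sketch of the effect-selection wiring described above. */
void setup_effect_e1(void)
{
    fluid_mod_t *m1  = new_fluid_mod();   /* effect e1 itself */
    fluid_mod_t *me1 = new_fluid_mod();   /* selector inserted between m1 and the generator */

    /* m1: foot CC 4 on src1, rotary knob CC 71 on src2 */
    fluid_mod_set_source1(m1, 4,  FLUID_MOD_CC | FLUID_MOD_LINEAR | FLUID_MOD_UNIPOLAR | FLUID_MOD_POSITIVE);
    fluid_mod_set_source2(m1, 71, FLUID_MOD_CC | FLUID_MOD_LINEAR | FLUID_MOD_UNIPOLAR | FLUID_MOD_POSITIVE);
    fluid_mod_set_amount(m1, 960.0);
    /* m1's destination would be me1's link input, e.g. (assumed convention of this branch):
       fluid_mod_set_dest(m1, FLUID_MOD_LINK_DEST | index_of_me1); */

    /* me1: src1 is the link input (FLUID_MOD_LINK_SRC in this branch), and
       src2 is the selection CC 80; CC 80 at 0 mutes the whole e1 chain. */
    fluid_mod_set_source2(me1, 80, FLUID_MOD_CC | FLUID_MOD_LINEAR | FLUID_MOD_UNIPOLAR | FLUID_MOD_POSITIVE);
    fluid_mod_set_dest(me1, GEN_ATTENUATION);
    fluid_mod_set_amount(me1, 1.0);

    /* e2 gets its own pair (m2, me2); me2's src2 uses the same CC 80 with
       FLUID_MOD_NEGATIVE polarity, so selecting one effect deselects the other. */

    delete_fluid_mod(m1);
    delete_fluid_mod(me1);
}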

So to be honest... I'm not convinced that that is an important enough use case to justify the added code complexity and possible performance impact that this feature might add.

No problem. Please, if you try this branch on your ARM machine let me know about any possible performance impact.

Given all the new questions that arose while working on this, I agree. I think it would make sense to unassign it from version 2.1. If need be, we can continue work at any time and eventually ship it with a later release.

I am fine with this choice.

@jjceresa
Collaborator

While compiling this branch I noticed that gcc complains about a possible uninitialized variable (only with release type RelWithDebInfo):
fluidsynth/src/synth/fluid_mod.c: In function ‘fluid_mod_copy_linked_mod’:
fluidsynth/src/synth/fluid_mod.c:1573:36: warning: ‘mod_cpy’ may be used uninitialized

This is just a warning, the variable is really initialized.

@derselbst derselbst removed this from the 2.1 milestone Oct 27, 2019
- Rename variables.
- Do not display voice modulators by default.
@mawe42
Member

mawe42 commented Nov 1, 2019

Sorry, very late reply...

As already said, all of this must remain comprehensible for the soundfont designer

Yes, and that is probably my main concern with linked modulators: they are not well defined in the spec. So Soundfont designers will effectively create a "Fluidsynth Version" of their Soundfont that targets the way we implement linked modulators and probably only sounds correct in Fluidsynth. But maybe that doesn't matter as we are the only synth with actual support for linked modulators. In any case, if we decide to actually release this feature, we need to document our way of supporting linked modulators (because reading the SF2 spec is not helpful).

For example, this case occurs at performance time when the musician want to select a "sound articulation" among others, on the fly while he is playing (without express need to select another preset).

That is a good example of a possible use-case, thanks! And for that special case, linked modulators are one possible solution. But my concern is exactly that: that it's just one of the possible solutions. Other solutions would be to use a hardware MIDI controller with "setup banks", like many controllers have. So you can configure, store and switch quickly which pedal or input sends which MIDI messages. Or you could use a software based MIDI router to filter and modify MIDI messages from your MIDI controller. Or you could implement different presets with different modulator sources in the Soundfont.

It all depends on the actual real-world use-case, I guess. And I might be wrong here, but your given use-case sounds theoretical, not driven by a real-world need that you or others have. So effectively we are implementing a solution that is looking for a problem it can solve. And a solution that is not well specified in the Soundfont spec.

Please don't get me wrong: I am really grateful for all the work you are putting into this feature! And I am strongly for extending and improving Fluidsynth's expressiveness. And generally I'm in favour of linked modulators... I just don't know how to overcome the problem that the specification is unclear and the fact that we seem to have no real-world application for this feature.

@jjceresa
Collaborator

jjceresa commented Nov 1, 2019

my main concern with linked modulators: they are not well defined in the spec.

They are simply defined as a way to route modulator m1's output to the summing link input of another modulator m2, instead of routing m1's output to the summing input of a generator.

In any case, if we decide to actually release this feature, we need to document our way of supporting linked modulators (because reading the SF2 spec is not helpful).

Yes, the way linked modulators are supported needs to be documented.

Other solutions would be to use a hardware MIDI controller with "setup banks", like many controllers have. So you can configure, store and switch quickly which pedal or input sends which MIDI messages. Or you could use a software based MIDI router to filter and modify MIDI messages from your MIDI controller.

I see all these things as well suited only for hardware configurations that keep MIDI controllers (keyboard, CC pedal, ...) independent of the synthesizer and of the soundfont preset intended to be played later in real time by the musician.

What is interesting is being able to define which CC number will be used and what the destination route of this CC is (modulator or generator) for the current preset. Having this routing information be a property of the current soundfont preset allows the designer to wear two "hats". With the first "hat" the designer defines the "preset instrument data" (i.e. the sound of the instrument). With the second hat, the designer and the musician (intended to play the preset) define the "routing information" (linked modulators) meant to be selectable in real time by the musician during the song. All this information ("preset instrument data", "routing data") is part of the preset stored in the soundfont file format.
The same application, having knowledge of the soundfont format, can be used to combine these two "hats". This eliminates separate routing applications (inserted between the MIDI keyboard and the synthesizer) and the difficulty of correlating separate routing configuration files with soundfont files.

And I am strongly for extending and improving Fluidsynth's expressiveness. And generally I'm in favour of linked modulators... I just don't know how to overcome the problem that the specification is unclear and the fact that we seem to have no real-world application for this feature.

Expressiveness supplied by the musician is captured by the first modulator, which has a CC (or GC) on source 1. For me, linked modulators are there for easy effect control during real-time playing of the current note in the current preset. I have always considered this feature important, not special.

I just don't know how to overcome the problem that the specification is unclear..

Regarding modulators, the soundfont specification is not obvious, and this is why Emu wrote another document (sfapp21.pdf, SoundFont 2.1 Application Note).
In that document, reading the chapter "Compared to the Pros" should help to understand that Specification 2.1 should not be considered complete.

For example, the Soundfont spec suffers from the lack of important generators like "crossfading velocity range" and "crossfading key range". These generators, very easy to implement, allow instrument zones to overlap partially and crossfade smoothly at the beginning and end of zones. This simple thing brings a serious expressiveness enhancement by making instruments more realistic, particularly for monophonic instruments playing legato.

@derselbst
Member Author

Postponed indefinitely.

@derselbst derselbst closed this Feb 2, 2020
@jjceresa
Collaborator

jjceresa commented Feb 7, 2020

Yes, I postponed this, mainly because Polyphone is the only editor that supports linked modulators.
I noticed that Polyphone cannot be installed on older versions of Windows. The authors said that it is too much work to provide an installation program for Windows XP and older.
To overcome this issue, I am adding support for linked modulators to Swami here.
Once this Swami support is finished, I will continue this branch.

Successfully merging this pull request may close these issues.

Proposal: Adding Soundfont linked modulators
3 participants