Polyphony - Adjacent gates?

Hello everybody :slight_smile:

I’d like to discuss with you a problem I’m facing these days while developing for Rack, which manages polyphony in its own way.

Coming from the VST development world: when my plugin receives MIDI notes (i.e. gates in Rack), I usually manage them with an internal buffer (theoretically with infinite slots), which allocates a new voice if some old ones are still processing (such as releasing).

In Rack the situation is a bit different. I’m building a sort of synth/sampler module that takes a poly input cable (up to 16 channels, sending gate signals) and outputs a poly cable with the corresponding number of channels, where each voice index matches the input one. I see many modules in Rack work like that, so each voice can be stacked later with additional FX (poly filter/ADSR/etc.).
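For reference, the channel mapping I mean looks roughly like this with the Rack v1 API (a minimal sketch; `PolyGateSampler`, `GateVoice` and the port names are placeholders, not my actual module):

```cpp
#include <algorithm>
#include "plugin.hpp"  // Rack SDK header used in plugin projects (brings in the rack namespace)

// Placeholder per-channel processor; the real one renders the sample + smoothing.
struct GateVoice {
    float process(bool gate, float sampleTime) {
        return gate ? 10.f : 0.f;
    }
};

struct PolyGateSampler : Module {
    enum InputIds  { POLY_INPUT,  NUM_INPUTS };
    enum OutputIds { POLY_OUTPUT, NUM_OUTPUTS };

    GateVoice voices[16];

    PolyGateSampler() {
        config(0, NUM_INPUTS, NUM_OUTPUTS, 0);
    }

    void process(const ProcessArgs& args) override {
        // Output channel count follows the incoming poly cable; voice index = channel index
        int channels = std::max(inputs[POLY_INPUT].getChannels(), 1);
        outputs[POLY_OUTPUT].setChannels(channels);

        for (int c = 0; c < channels; c++) {
            // Rack convention: treat >= 1 V as gate high
            bool gate = inputs[POLY_INPUT].getVoltage(c) >= 1.f;
            outputs[POLY_OUTPUT].setVoltage(voices[c].process(gate, args.sampleTime), c);
        }
    }
};
```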

Now internally, for each voice, when I receive a gate off the module applies a sort of “smoothing” to the processed signal (~2 or 3 ms), removing clicks and artefacts when the voice stops.
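That smoothing is basically this, per voice (an illustrative sketch, not my exact code):

```cpp
#include <algorithm>

// Per-voice smoothing: on gate off, ramp the output to zero over ~2 ms
// instead of cutting it instantly.
struct SmoothedVoice {
    float env = 1.f;  // stays at 1 while the gate is high

    float process(bool gate, float rawSample, float sampleTime) {
        const float fadeTime = 0.002f;  // ~2 ms fade-out window
        if (gate)
            env = 1.f;
        else
            env = std::max(env - sampleTime / fadeTime, 0.f);  // smooth release to zero
        return rawSample * env;
    }
};
```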

The problem arises when I connect this module to an input sequencer. What if that seq (which I’ve also made) sends adjacent gates? Such as:

channel 1: gate on -> gate off -> wait 1ms -> gate on -> gate off -> and so on…
channel 2: gate on -> gate off -> wait 1ms -> gate on -> gate off -> and so on…
…
channel 16: gate on -> gate off -> wait 1ms -> gate on -> gate off -> and so on…

When the synth module gets the gate off, it starts smoothing… but the smoothing lasts longer than the 1 ms gap before the next gate on, “overlapping” it.

I have three ways in mind to resolve this, each of which unfortunately has problems:

  1. Delay the incoming gate on until the smoothing of the previous gate has finished. Problem: delaying the voice by another 1-2 ms would create phase effects. 1 ms is fine, but 2 or 3 ms seems bad.
  2. Blend (within an internal buffer) the smoothing of the previous note with the new one, on the same voice/channel. Problem: downstream in the chain, the other modules will process both the previous and the new signal together (since they are blended), which can create bad artefacts (think of a compressor, for example, which could be triggered differently by the combined prev + new content).
  3. Every time I receive a new gate, internally use a “free” voice, so voice index 1 can be moved to voice index 7 if that one is free and not releasing, keeping each signal running separately and without delay (see the sketch after this list). Problem: this only works for up to 8 adjacent notes, because with 16 adjacent notes, when I gate them all off, every internal voice is busy releasing, so the module would sound nothing but the releases.
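To make option 3 concrete, the voice lookup would be roughly this (hypothetical names, just showing the lookup and its failure case):

```cpp
static const int MAX_INTERNAL_VOICES = 16;

struct InternalVoice {
    bool playing = false;    // gate currently high
    bool releasing = false;  // gate off but the smoothing tail is still running
    bool busy() const { return playing || releasing; }
};

// Option 3: route a new gate to any free internal voice instead of the incoming
// channel index. Returns -1 when all 16 voices are still busy/releasing, which is
// exactly the failure case with 16 adjacent notes described above.
int findFreeVoice(const InternalVoice (&voices)[MAX_INTERNAL_VOICES]) {
    for (int i = 0; i < MAX_INTERNAL_VOICES; i++) {
        if (!voices[i].busy())
            return i;
    }
    return -1;
}
```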

I can’t see any other alternatives, but maybe you can :slight_smile:

How would you manage this situation? Thanks

Polyphonic modules should behave like N instances of monophonic modules, so answer the question without thinking about polyphony and there’s your answer.

Just “blend” by adding (not crossfading) your two windowed signals. Consider that you have two instances of your module patched into a mixer. If you send a gate on/off into the first, wait 1ms, and send a gate into the second, I’d expect the result to be the same as if I was using a single instance.
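Roughly something like this per channel (just a sketch with made-up names; the sample playback itself is left as a placeholder):

```cpp
#include <algorithm>
#include <vector>

// Each channel owns a small pool of sub-voices: a new gate starts a fresh one,
// and anything still releasing keeps fading and is simply summed (unity mixed).
struct SubVoice {
    bool gate = true;  // created on gate on
    float env = 0.f;   // short attack/release window, 0..1

    bool done() const { return !gate && env <= 0.f; }

    float process(float sampleTime) {
        const float rampTime = 0.002f;  // ~2 ms smoothing
        float delta = sampleTime / rampTime;
        env = gate ? std::min(env + delta, 1.f) : std::max(env - delta, 0.f);
        float sample = 0.f;  // placeholder: this sub-voice's actual sample playback
        return env * sample;
    }
};

struct Channel {
    std::vector<SubVoice> subVoices;

    void gateOn()  { subVoices.emplace_back(); }
    void gateOff() { if (!subVoices.empty()) subVoices.back().gate = false; }

    float process(float sampleTime) {
        float out = 0.f;
        for (SubVoice& v : subVoices)
            out += v.process(sampleTime);  // old tail + new note, simply added
        // Drop sub-voices that have finished their fade-out
        subVoices.erase(std::remove_if(subVoices.begin(), subVoices.end(),
            [](const SubVoice& v) { return v.done(); }), subVoices.end());
        return out;
    }
};
```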

Thanks for the reply @Vortico, as usual :slight_smile:

As I said, this only works up to a limit (i.e. 8 voices).

I mean, take your example and place 16 instances of my module: what if I send a gate on/off to each of the 16 modules simultaneously, wait 1 ms, and then send 16 gate ons again?

What do you expect in this situation? You must either delay each voice or blend each voice. I can’t see any other possibility. Is there one?

Again, don’t think about polyphony because it’s a red herring. Solve the problem in the monophonic case first, and the solution for polyphony is trivial.

Also again, I would expect the result to be the same as using two instances of your module and (unity) mixing the result. As I understand, a gate from t=0 to T will result in nonzero audio for t=0 to T+T_{fadeout}. If, say, a gate comes at t=T+1ms, you’ll simply have two sounds playing from t=T+1ms to T+T_{fadeout} that are unity mixed. In the 1-instance case, you’ll just have two samples or oscillators or whatever running.

I do: the example posted above is using a monophonic module :slight_smile:

This is what I meant by “blend”, i.e. add the two signals together for the amount of time they play together.

BUT: is that really good? Staying in the monophonic scenario: the output from t=T+1ms to T+T_{fadeout} contains the mixed signal (as you said), and later in the chain I would be processing “both” signals (on a single cable).

I mean: what if I have an ADSR connected and I trigger it with the second gate? It will also process “part” of the previous gate.

Someone could say this is a problem :slight_smile:

I don’t see why that’s a problem. It’s an expected consequence of fading out a monophonic voice.

Suppose you made a subtractive mono voice with a VCO, EG, and VCA. If the gate turns HIGH before the EG has reached 0 from the previous gate, the EG simply opens starting from its current position. So why not just do that? Internally you’d have an EG with ~2-3ms attack and release, and when the gate input goes high, immediately switch to playing the requested sample.
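In code, that idea is roughly this (hypothetical names, only illustrating the open-from-current-level behaviour):

```cpp
#include <algorithm>

// One internal EG per channel: gate high ramps toward 1 from wherever the
// envelope currently is; gate low ramps back to 0 over the same short window.
struct ChannelEG {
    float env = 0.f;    // current envelope level, 0..1
    bool gate = false;
    int samplePos = 0;  // playback position of the requested sample

    void gateHigh() {
        gate = true;
        samplePos = 0;  // immediately switch to the new sample;
                        // env is NOT reset, it just opens from its current value
    }

    void gateLow() { gate = false; }

    float process(float sampleTime, float rawSample) {
        const float rampTime = 0.0025f;  // ~2-3 ms attack/release
        float delta = sampleTime / rampTime;
        env = gate ? std::min(env + delta, 1.f) : std::max(env - delta, 0.f);
        samplePos++;                      // advance playback (details omitted)
        return rawSample * env;
    }
};
```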


Probably because I’m used to working with VSTs, where each voice gets its own “scope” and EG/filter chain, without being unity-mixed with the others :slight_smile:

“Switch to playing the requested sample”? What if my EG is in the middle of the release stage, the last sample (with smoothing applied) is -0.4f, and the new one starts at 0.0f? Switching from -0.4 to 0.0 could be a click.

I think a sort of internal buffer (for each voice) is still required, isn’t it?
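Something like this is what I have in mind, i.e. keep the old playback around for a few milliseconds and crossfade it into the new note (just a sketch, names made up):

```cpp
#include <algorithm>

// One reading of that per-voice "internal buffer": when a new gate steals the
// voice, keep the old playback alive just long enough to crossfade it into the
// new sample over a few ms, so a tail sitting at -0.4 never jumps straight to 0.0.
struct CrossfadeVoice {
    float xfade = 0.f;  // 1 = all old signal, 0 = all new signal

    void retrigger() {
        xfade = 1.f;    // start fading the old tail out against the new note
    }

    float process(float newSample, float oldSample, float sampleTime) {
        const float xfadeTime = 0.003f;  // ~3 ms crossfade
        xfade = std::max(xfade - sampleTime / xfadeTime, 0.f);
        return xfade * oldSample + (1.f - xfade) * newSample;
    }
};
```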