how to make a musically effective quantizer

Quant has an **Equi-likely** mode.

I think it’s clear, from the above, that different quantizers have different algorithms with different effects on where the switchover points are and how efficient they are. It sounds like you know what you want, so go for it.

Another aspect of “correctness” is hysteresis on the input. I don’t know if any others have done that, but it seems “bad” if a CV that is right on the border could wildly switch between two notes just because of a tiny amount of “noise” on the input.
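A minimal sketch of what input hysteresis could look like (the struct name, the semitone scale, and the 10%-of-a-step threshold are all illustrative, not from any particular module): the note only changes once the CV moves past the normal switchover point by an extra margin.

```cpp
#include <cassert>
#include <cmath>

// Illustrative sketch: quantize to semitones (1 V/oct, 1/12 V steps),
// but only change note when the input moves more than HYST volts past
// the previous note's normal capture window. All names are made up.
struct HysteresisQuantizer {
    static constexpr float STEP = 1.0f / 12.0f; // one semitone in volts
    static constexpr float HYST = STEP * 0.1f;  // 10% of a step of hysteresis
    bool hasLast = false;
    float lastPitch = 0.f;                      // last quantized output, in volts

    float process(float cv) {
        // Nearest semitone, as a plain quantizer would pick it.
        float candidate = std::round(cv / STEP) * STEP;
        // Widen the current note's window by HYST: jitter right at the
        // border keeps the old note instead of flipping back and forth.
        if (hasLast && std::fabs(cv - lastPitch) < STEP * 0.5f + HYST) {
            return lastPitch;
        }
        hasLast = true;
        lastPitch = candidate;
        return candidate;
    }
};
```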

I will “warn” you - I spent several years making VCV plugins, and one of my main “selling points” was quality. It was not super easy, as others can tell you. Many VCV users don’t care about quality, many VCV devs are [all over the place?] with regards to quality, any you must be very careful pointing out flaws in the competition, or you will be threatened with perma-ban, or at least be a very unpopular person :wink:

btw, @k-chaffin 's suggestion to not run your quantizer every sample is a very good one. This is a technique that is often used to save a huge amount of CPU with close to zero effort. Of course you may not want to code in that “limitation”. I have seen modules that give you a choice.
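A sketch of how that technique could look, using the decrement-and-reset counter idiom (the struct and the `expensiveQuantize` placeholder are invented for illustration): do the real work every 4th call and hold the last result in between.

```cpp
#include <cassert>

// Sketch of the "don't run every sample" trick: do the expensive work
// only once every 4 samples and hold the last result in between.
// Names here (Downsampled, expensiveQuantize) are illustrative.
struct Downsampled {
    int counter = 0;
    float held = 0.f;
    int workCount = 0; // only here to show how often real work happens

    float expensiveQuantize(float cv) {
        ++workCount;
        return cv; // stand-in for the real quantization work
    }

    float process(float cv) {
        if (--counter < 0) {
            counter = 3;               // real work runs every 4th sample
            held = expensiveQuantize(cv);
        }
        return held;                   // hold the last value otherwise
    }
};
```

For a quantizer this works well because the output is a stepped pitch anyway; the only audible cost is up to 3 samples of extra latency on note changes.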

I wrote an article a long time ago about efficiency in VCV. It sounds like you know all of this and more already, but here it is: Demo/efficient-plugins.md at main · squinkylabs/Demo · GitHub

1 Like

Thanks to everyone. My mathematical self says there is a correct way. But if I remind myself we are talking about music, I can appreciate a different genre even if it is not my preferred genre. I expanded my patch to add more quantizers and a four-view to visualize the differences. Most instructive were Lars’ pointer to the different modes/algorithms for ML quantizer, and Squinky’s ever practical “no one right way” and “if it works”.

Curiously, when reading Squinky’s efficient-plugins article and drilling down in the code to process on every fourth call, I started geeking out (thanks Squinky) and asked myself: is that the fastest way to do that small thing? I tried four combinations:

if (--c < 0) c = 3;    // 1
if (++c > 3) c = 0;    // 2 
c++; c = c % 4;        // 3
c++; c = c & 3;        // 4

I ran it through 100 billion iterations each. I really felt skipping the comparison and the branch would help more than it did. #4 was the fastest (though it only works for powers of 2), followed by #1, which was easier to read and more general purpose at 0.19% slower. What surprised me was #2, which was 49.6% slower. #3 (with the division) was 82.7% slower. Good stuff.

4 Likes

Cool! Whilst you’re down that rabbit hole… Apart from the different existing modes, one additional quantization mode I’ve wanted for a long time, and that I haven’t seen implemented, is a mode that could be called “don’t repeat”. It remembers the last quantized note, and if the next quantized note is the same as the last, it chooses the other closest note instead of repeating. I think that could be a really nice/interesting and musically satisfying mode. Wink wink nudge nudge :slight_smile:
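A sketch of what that “don’t repeat” mode could look like (the struct name is made up, and a plain semitone scale stands in for a real scale table): when the nearest note equals the previous one, step to the neighbor on the side the input leans toward.

```cpp
#include <cassert>
#include <cmath>

// Sketch of a hypothetical "don't repeat" mode: quantize to semitones,
// but if the result equals the previous note, take the nearest neighbor
// instead. A plain chromatic scale is used to keep the example small.
struct NoRepeatQuantizer {
    static constexpr float STEP = 1.0f / 12.0f; // one semitone in volts
    bool hasLast = false;
    float lastNote = 0.f;

    float process(float cv) {
        float n = std::round(cv / STEP);        // nearest semitone index
        if (hasLast && n * STEP == lastNote) {
            // Same note as last time: pick the other closest note,
            // on whichever side the raw input leans toward.
            n += (cv >= n * STEP) ? 1.0f : -1.0f;
        }
        hasLast = true;
        lastNote = n * STEP;
        return lastNote;
    }
};
```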

3 Likes

The Frozen Wasteland Probably NOTe quantizers don’t repeat pitches, or at least the Math Nerd (which I use) doesn’t.

1 Like

I’m guessing ‘c’ is a signed integer? If it were unsigned, then #3 should be just as fast as #4. It’s not the division that makes it slower, as that gets optimized away, but having to handle the case where ‘c’ is negative.
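To illustrate the point (a sketch, with an invented function name): with an unsigned counter the compiler is free to turn the modulo into a mask, because there is no negative case whose toward-zero remainder it has to get right.

```cpp
#include <cassert>

// With an unsigned counter, (c + 1) % 4u can compile to (c + 1) & 3u
// directly; with a signed int the compiler must also produce correct
// toward-zero remainders for negative values, which costs extra code.
unsigned stepUnsigned(unsigned c) {
    return (c + 1u) % 4u; // same result as (c + 1u) & 3u
}
```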

If you try these snippets out on https://godbolt.org you can see what the different compilers and levels of optimization do to each one.

I’d have to go with Knuth on this one. The final CPU usage of this module is not going to be affected at all by micro considerations like this. I assume everyone here knows the Knuth reference, but for any that don’t, just google “premature optimization is the root of all evil”.

My compiler professor always warned about premature optimization, and I think that is mostly true. But I still want to believe in my humanity, and to that point, sometimes if I do something stupid, the compiler will not save me, and the best way to prove something out is to just run tests.

For this particular exercise of trying to keep the per-sample work as small as possible, I tried to do zero scale calculations while processing. The scales are preprocessed, and the only thing I am doing is converting a pitch value into an integer for a lookup. I started with something simple like the first snippet below (where halfsteps and voltages are those preprocessed member variables).

float getClosest(float voltage) {
    float octave = std::floor(voltage);        // whole-volt octave
    float pitch = std::abs(voltage - octave);  // fractional part, 0..1
    int index = pitch * halfsteps;             // scale-table lookup index
    return octave + voltages[index];
}

Turns out that floor() and abs() are pigs - for good reasons (they are built for the general case and deal with errno and “stuff like that”). But if I know my input is valid and constrained, then I can cheat a bit, which got me to here.

float getClosest(float voltage) {
    float octave = (float)(int)(voltage + 10.0f) - 10.0f;
    float pitch = voltage - octave;
    if (pitch < 0) pitch = -pitch;
    int index = pitch * halfsteps;
    return octave + voltages[index];
}

Using truncation on negative numbers doesn’t work, but I know I am within ±5 V. Truncation between an add and a subtract is a fast floor(). Same thing for abs(). That small piece of code (the first few lines leading up to the index) now runs in 1/10th the time. I did one more iteration, converting the octave variable to an int, which made that portion of the code another 23% faster.

float getClosest(float voltage) {
    int octave = (int)(voltage + 10.0f) - 10;
    float pitch = voltage - octave;
    if (pitch < 0) pitch = -pitch;
    int index = pitch * halfsteps;
    return octave + voltages[index];
}

I tested it across different optimization levels (though nothing “unsafe”). The savings are microseconds at best. But no amount of optimization by the compiler got me to the same place, and there was no way the compiler was going to save me from floor() or abs().
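For what it’s worth, the offset-then-truncate trick can be sanity-checked against std::floor over the assumed ±5 V range (a sketch; the free function below just isolates the trick, and the exact quarter-volt steps in the check avoid float-rounding edge cases right at integer boundaries):

```cpp
#include <cassert>
#include <cmath>

// The fast-floor trick from above, isolated: shifting by +10 makes the
// value non-negative over the assumed ±5 V range, so (int) truncation
// behaves like floor; then shift back. Only valid for inputs > -10 V.
float fastFloor(float v) {
    return (float)(int)(v + 10.0f) - 10.0f;
}
```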

It is currently a function in a class/struct for the scale - which I consider best practice for encapsulation/decomposition. But the overhead of making the call is twice the actual body of the call (so 2/3rds of the cost is call overhead). Perhaps inlining is best left to the compiler - though it makes a huge difference in the performance tests.

OMG. Super love that site. Didn’t know it existed. Thank you !!! Though I wish it had a couple more compilers, and I haven’t figured out how to get references to std:: functions to compile - yet.

Perhaps a different take on “musically effective” - but I think every quantiser should include a trigger out for when it changes note (like ML Quantum).
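A sketch of the state that idea needs (the struct name is invented; in a real VCV module the boolean would feed something like rack’s dsp::PulseGenerator to stretch the result into a usable trigger pulse):

```cpp
#include <cassert>

// Sketch: report when the quantized note changes, so a trigger can be
// fired on the change. The name NoteChangeDetector is made up; a real
// module would stretch the 'true' result into a 1 ms pulse.
struct NoteChangeDetector {
    bool hasLast = false;
    float lastNote = 0.f;

    bool process(float quantized) {
        bool changed = !hasLast || quantized != lastNote;
        hasLast = true;
        lastNote = quantized;
        return changed;
    }
};
```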

3 Likes

For me the rabbit hole started at arpeggiators. I wanted to do a couple of different things (like an Alberti bass), and thought I would make a generic one that did patterns like 0+1+1+1+1 for up, 0+2-1+2-1+2-1+2 for a sort of alternating walk up, and 0+2-1+1 for Alberti. And then allow user patterns. And again, I want to do it efficiently. One arpeggiator I found was showing 60% on the built-in meter.
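A sketch of that offset-pattern idea (the function name is invented): a pattern is a list of relative steps applied to a running scale-degree index, so 0+2-1+1 expands to the Alberti-like degrees 0, 2, 1, 2.

```cpp
#include <cassert>
#include <vector>

// Sketch of the pattern notation above: each entry is a delta applied
// to a running scale-degree index. expandPattern is an invented name.
// Note: the running index carries across pattern repeats here, which
// makes "up" patterns walk upward; whether Alberti-style patterns
// should instead reset each repeat is a design choice.
std::vector<int> expandPattern(const std::vector<int>& deltas, int steps) {
    std::vector<int> out;
    int degree = 0;
    for (int i = 0; i < steps; ++i) {
        degree += deltas[i % (int)deltas.size()];
        out.push_back(degree);
    }
    return out;
}
```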

I start learning about polyphony, coding triggers/gates, etc. Then I want to combine things to avoid inter-module trigger/S&H timing issues, and I start building a kitchen sink. At the same time, my OCD wants to build primitives which are provably correct and will allow me to mix and match.

To me, what you describe is more of a sequencer/arpeggiator function - it requires a trigger and sample-and-hold support. But definitely not a pure quantizer. I know I need to be more open-minded.

And then of course I am like I don’t want to build this into everything, and I am like can I do an expander thing where I slap a quantizer up against a sequencer or arpeggiator. But first things first. I’ll finish up my quantizer. Go back to my arpeggiator. And THEN put some chocolate in the peanut butter.

1 Like

I was going to point that out; I’m more likely to use Equi-likely mode.