Anti-aliasing as a discrete module?

Wouldn’t the option to set sampling rates per module be the solution to all of this? I am mostly running at 2x the rate of my audio settings, but it would be much nicer to have the option per module…

Well, like that video we keep seeing says, the best solution is for all the plugins that need oversampling to do it themselves. Then users don’t have to guess.

Oh, here’s the video. But you’ve probably seen it:


Do you mean more people should have done what I did in Shaper and Saws? They both have 1X, 4X, and 16X options.


No, not quite. The overtones that go on to alias are created when a non-linear process is performed. Filters, for example, are often linear and create little to no aliasing (mainly from rounding/quantization).
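Just to make that concrete (this is a standard trig identity, not something from the post above): push a pure sine through a cubic waveshaper in the digital domain and you get a third harmonic,

$$\sin^3(2\pi f t) \;=\; \tfrac{3}{4}\sin(2\pi f t) \;-\; \tfrac{1}{4}\sin(2\pi \, 3f \, t).$$

At a 44.1 kHz sample rate, a 15 kHz sine treated this way produces a 45 kHz component, which cannot be represented and folds back to appear at 45 − 44.1 = 0.9 kHz. That folded-down component is the aliasing; a linear filter applied to the same sine would create no new frequencies at all.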

I think the conceptually simplest approach is “crazy” over-sampling: after regular over-sampling, apply a mild third-harmonic distortion, window the out-of-band region with a filter, and fit the gain of the distortion so that the out-of-band signal is minimised. Subtracting that third-harmonic percentage would then cancel the third-harmonic alias. Of course, in the “crazy” over-sampling process (unlike normal over-sampling) the in-band signal should be reflected and extended into the out-of-band spectrum, so that there is something to subtract the third-harmonic distortion from; reflected back into the audio band, the out-of-band distortion would then cancel some of the in-band aliases.

Is it really worth the FLOPS?

The original question was whether aliasing coming from some process/output could be corrected ‘afterwards’ by a subsequent process/module.

As many have already stated above, you would either need to know exactly what information might have been lost and/or added in a previous process (and then add/remove it), or have a clever set of assumptions ready (and apply some educated guessing) to at least ‘improve’ the incoming signal.

Just some straightforward reasoning…

  • How would you know what is ‘right’?
  • Only if you know what’s ‘right’ (and can compare it to something else) can you determine what’s ‘wrong’… and possibly ‘correct’ it (transitioning from ‘wrong’ to ‘right’).
  • If both ‘right’ and ‘wrong’ are known, there is no need for any ‘correction’, because you could just always replace ‘wrong’ with ‘right’. Actually, there would not even be a need for the original source signal (which might be ‘wrong’ or ‘right’) at all, since the next stage (originally foreseen as the ‘correction’ process) would already have all the necessary knowledge to generate/construct the ‘right’ signal.

At any point in time, the amplitude of ANY frequency in the spectrum might be (part of) the ‘original’ (‘right’) signal or (part of) some ‘aliasing’ (‘wrong’) frequency or be the result of (the sum of) both. Same goes for the phases of all frequencies in the spectrum.

Even a brute-force approach to ‘predictability’, like an ‘empirical’ full pitch-sweep spectrum analysis of the output of an oscillator, would not be enough. Such a sweep could more or less create an accurate spectrogram (and/or ‘wavetable’) for analysis/reference.

With such a sweep/spectrum analysis (e.g. upward), you could detect aliasing frequencies. The ‘alias’ frequencies would sweep downward from the Nyquist frequency, whereas the ‘correct’ frequencies would sweep up (up to Nyquist). So… you would ‘know’ what to ‘expect’ and ‘correct’ for.
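As a small, purely illustrative sketch of why the aliases sweep the ‘wrong’ way (hypothetical helper code, not from any particular module): any partial that ends up above Nyquist folds back down, so while the true partial rises with the sweep, its folded image falls.

```cpp
#include <cmath>
#include <cstdio>

// Hypothetical helper: where does a partial at frequencyHz land after
// sampling at sampleRateHz? Anything above Nyquist folds back down.
float aliasedFrequency(float frequencyHz, float sampleRateHz) {
    float f = std::fmod(frequencyHz, sampleRateHz);   // wrap into [0, sampleRate)
    if (f < 0.f)
        f += sampleRateHz;
    // reflect the upper half of the spectrum around Nyquist
    return (f > 0.5f * sampleRateHz) ? sampleRateHz - f : f;
}

int main() {
    // Sweep a fundamental upward and watch its (aliased) 3rd harmonic fall.
    for (float f0 = 10000.f; f0 <= 14000.f; f0 += 2000.f)
        std::printf("f0 = %5.0f Hz -> 3rd harmonic appears at %5.0f Hz\n",
                    f0, aliasedFrequency(3.f * f0, 44100.f));
    return 0;
}
```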

But even that cumbersome ‘learning’ process would only work for a static/predictable spectrum, and for many reasons ‘incoming’ waveshapes, and therefore spectra, might not be so static/predictable.

Anyway…

I guess it’s a ‘mission impossible’…even by means of approximation.

Maybe don’t even try to fix what is broken… or even see it as something that adds to the sound’s specific character…

Like the aliasing that is a distinguishing feature of the much-beloved ‘Supersaw’, as originally found in the Roland JP-8000.

And most accurately reproduced by

Adam Szabo - JP6K

After his extensive study/paper on the SuperSaw:

How To Emulate the SuperSaw

Yes, yes. My “Saws” VCO remains pretty popular, and was made from this paper (the manual has a link to it).

I tried to make it exactly like the paper, except I added an option for 4X and 16X oversampling. 1X gives the classic sound.

I note in the manual that the swarming aliasing is part of the sound of that, and people may prefer it. Anecdotally, I think I see Saws most often with the 4X option switched on.
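In case it helps anyone reading along, here is a rough sketch of what an oversampling option like that typically does internally: run the non-linear stage at a multiple of the sample rate, filter, and decimate back down. This is hypothetical illustration code (not the actual Saws/Shaper implementation), and the one-pole filters are far gentler than anything a real plugin would ship.

```cpp
#include <cmath>

// Naive one-pole lowpass, used here only to keep the sketch short;
// real plugins use much steeper interpolation/decimation filters.
struct OnePoleLowpass {
    float z = 0.f;   // filter state
    float a = 0.f;   // smoothing coefficient
    void setCutoff(float cutoffHz, float sampleRateHz) {
        const float kTwoPi = 6.2831853f;
        a = 1.f - std::exp(-kTwoPi * cutoffHz / sampleRateHz);
    }
    float process(float x) { z += a * (x - z); return z; }
};

// Hypothetical oversampled waveshaper: upsample, distort, filter, decimate.
struct OversampledShaper {
    static constexpr int kFactor = 4;   // e.g. the "4X" option
    OnePoleLowpass up, down;

    void init(float sampleRateHz) {
        // Try to keep energy below the original Nyquist while running
        // at kFactor times the original sample rate.
        up.setCutoff(0.45f * sampleRateHz, kFactor * sampleRateHz);
        down.setCutoff(0.45f * sampleRateHz, kFactor * sampleRateHz);
    }

    float process(float in) {
        float out = 0.f;
        for (int i = 0; i < kFactor; ++i) {
            // Zero-stuff: the first sub-sample carries the input (scaled
            // to preserve level), the remaining sub-samples are zero.
            float x = (i == 0) ? in * kFactor : 0.f;
            x = up.process(x);       // interpolation filter
            x = std::tanh(x);        // the non-linear stage that creates overtones
            out = down.process(x);   // anti-alias filter before decimation
        }
        return out;                  // keep only every kFactor-th sample
    }
};
```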

“Who is to say what is right and wrong” is an ancient argument. In this case, aliasing is wrong. Sure, there may be cases where you want it, but there are many more where you don’t. And does anyone really want aliasing on every single instrument in their piece? Maybe, but probably not.

A pretty profound question indeed…

Aliasing is also considered a ‘characteristic’ characteristic of many FM/PM implementations. My old Yamaha SY77 is definitely no exception. Single notes in higher registers (especially dry, no FX) are best…well…to be avoided. Same goes for many ‘older’ digital synths.

But…when part of chords, detuned signals and/or with some FX added (chorus/reverb/phasers and such) these ‘imperfections’ often seem to quite pleasantly blend in…

Sort-of-the-same goes for the (detuned) Supersaw, where the aliased frequencies are at least partly drowned out (masked) by all the other intermodulating frequencies in the combined spectrum.

Anyway…

There is beauty to be found in the not-so-harmonic components in a spectrum. But also sheer horror. As they say: ‘Beauty is in the eye – uh, ear – of the beholder’.

As with most ‘art’, in the end, it is about intent and perception. And these concepts might clash in very unpleasant ways for one and invoke sheer bliss for another.


Here’s another well known oscillator, where minimizing aliasing might not be the main goal in finding the ‘best’ way to emulate it…

EDIT: I forgot to mention that the waveshape/spectrum is not the same across the frequency range, which is an example of what makes prediction/correction of aliasing more difficult.

Discrete-Time Modelling of the Moog Sawtooth Oscillator Waveform

Well, if you add aliasing, that will make it sound less like an analog VCO, so why on earth would you add aliasing if you were trying to sound like an analog VCO?

The article doesn’t say adding extra aliasing is good. It says that filtering a simple minBLEP VCO with a first-order filter will not give the best emulation of a Moog VCO.

You seem to take away from this article that “the important thing is getting the non-alias part of the spectrum right; aliasing doesn’t matter much”.

My takeaway is “given that no one would ever emulate a Moog VCO with a huge amount of aliasing, like you might get with a naïve time-domain emulation, it turns out that as long as the aliasing is pretty small, the best way we found to emulate it was with PD rather than minBLEP”.

And yes, everyone agrees that you can’t remove aliasing after the fact except in a very few very specific cases.
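For readers who haven’t run into these terms: here is a minimal polyBLEP sawtooth, a cheap cousin of the minBLEP family mentioned above. It is illustration code only (not the method from the article, and not how Saws works); it just shows the general idea of fixing the waveform’s discontinuity at generation time instead of trying to remove aliasing afterwards.

```cpp
// Minimal polyBLEP sawtooth sketch (illustration only).
struct PolyBlepSaw {
    float phase = 0.f;    // current phase in [0, 1)
    float dphase = 0.f;   // phase increment per sample

    void setPitch(float freqHz, float sampleRateHz) {
        dphase = freqHz / sampleRateHz;
    }

    // Two-sample polynomial band-limited step, applied around the wrap point.
    static float polyBlep(float t, float dt) {
        if (t < dt) {                 // just after the phase wrap
            t /= dt;
            return t + t - t * t - 1.f;
        }
        if (t > 1.f - dt) {           // just before the phase wrap
            t = (t - 1.f) / dt;
            return t * t + t + t + 1.f;
        }
        return 0.f;
    }

    float process() {
        float out = 2.f * phase - 1.f;     // naive sawtooth: aliases heavily
        out -= polyBlep(phase, dphase);    // smooth the discontinuity
        phase += dphase;
        if (phase >= 1.f)
            phase -= 1.f;
        return out;
    }
};
```

The naive saw line alone would alias heavily; the two-sample polynomial correction rounds off the step so the worst of the high-frequency energy, and hence the aliasing, never gets generated in the first place.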

Very valid point. I could have had a second (and possibly third) look at my hastily chosen words… and the subsequent EDIT…

Neither the article nor I state that aliasing is in any way good, or that the original analog signal contains aliasing that we should emulate/maintain when moving from analog to digital…

The whole aliasing issue does not have its source in the analog world. It comes in when moving into the discrete/quantized/digital world.

Just meant to say 2 things:

  • getting it right by some criteria might mean concessions on other criteria
  • the shape (and spectrum) of an oscillator might vary over its frequency/pitch range (making after-the-fact solutions difficult)

And, yes…I should have (better) said what I meant.

Agree, yes. Thx.