I’ve been thinking recently about the feasibility of encapsulating a minBLEP anti-aliasing implementation — one that uses lookahead to determine the location and size of discontinuities — in a separate module, instead of baking it into modules alongside other feature sets.
Before I set out to experiment with this, I just wanted to check: is there some fundamental reason why this wouldn’t work? If so, what is it?
Fundamentally, each sample is a step that such a machine could detect, and an oversampled minBLEP can be applied. When this is reduced back to the original sample rate, a little aliasing may be reintroduced, since the smoothing involved in downsampling can lose information.
But in principle yes.
EDIT: For example, generating a 3rd/5th/odd aliased harmonic to subtract, to minimize the out-of-band spectral energy before the minBLEP filter? Some clever algorithm could estimate the polynomial non-linearity of the preceding modules.
I suppose the fundamental question I’m asking is: if I were to plot the data before and after I write it to an output of a module, would there be any difference between the two plots?
Are you thinking about a module that removes aliasing from an audio input?
Yeah, I’ve been thinking of methods of applying minBLEP anti-aliasing to frequency-modulated oscillators, where discontinuities can’t easily be predicted. So I had the thought of using lookahead to detect where the discontinuities are and how big they are. Then I wondered why this functionality would have to be baked into a module, rather than being a discrete module.
If aliasing is already part of an audio stream, I don’t know of a method to remove it, because the frequencies higher than the Nyquist frequency have already been mirrored down below it. How would minBLEP help here?
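The mirroring can be shown in a few lines: sampling a cosine at f and at fs − f yields numerically identical sequences, so no downstream module can tell which one the source was. (A Python sketch, purely illustrative; the 48 kHz rate is an arbitrary choice.)

```python
import math

FS = 48000  # sample rate in Hz, chosen arbitrarily for the demo

def sample_cosine(freq_hz, num_samples):
    """Sample a cosine of the given frequency at rate FS."""
    return [math.cos(2 * math.pi * freq_hz * n / FS) for n in range(num_samples)]

# A 900 Hz cosine and its mirror about Nyquist (48000 - 900 = 47100 Hz)
# produce the same sample sequence -- once sampled, the information
# needed to tell them apart is gone.
low = sample_cosine(900, 64)
high = sample_cosine(FS - 900, 64)
assert max(abs(a - b) for a, b in zip(low, high)) < 1e-9
```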
So the answer to this question is that writing data to an output port of a module causes non-reversible modifications?
Writing values to variables when not using oversampling causes the upper “side-bands” to be truncated and folded into the audio band. This is technically irreversible, though it might be “kind of” fixable by some neat tricks.
IMO aliasing is irreversible.
I would recommend watching some YouTube videos on the Nyquist-frequency (or reading articles if you prefer).
It is going to be impossible to have a general module to antialias any audio signal.
A simple analogy is to think about modulo (the remainder operation). Imagine a system that takes any input value and mods it by 10, then try to make a module that undoes that operation. You can’t; you can only guess at what the input might have been. For example, 17 mod 10 = 7, but so do 37 mod 10 and 107 mod 10. There is no way to know what the real input was.
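The analogy can be made concrete in a couple of lines (Python, purely illustrative):

```python
# Many distinct inputs collapse onto the same "mod 10" output, so the
# operation discards information: given only the 7, there is no way
# to decide which input produced it.
inputs = [17, 37, 107]
outputs = [x % 10 for x in inputs]
assert outputs == [7, 7, 7]  # three different inputs, one identical output
```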
I understand that removing aliasing from an audio signal is impossible. What I was uncertain about is when exactly an audio signal is generated.
The answer is clear now: an audio signal is generated each time data is written to any module output. The alternative would be that an audio signal is only generated upon sending data to a VCV Audio module, and that all other inter-module communication is raw data.
I’m not sure I understand that question, though. What is the difference between an audio signal and raw data? I think what you’re getting at with your answer is that there isn’t one. Both are conceptualized as voltages in VCV Rack (and in Eurorack), and VCV represents those continuous voltages as discrete floating-point values.
I don’t know the difference myself, but logically there has to be one. If I were to use minBLEP to anti-alias a sequence of data, I would do that before writing to an output port. This would result in a reduction of aliasing artifacts in the resulting signal.
If the data going out of a module were the exact same as the data being fed to an output port (basically, the data before and after executing “setVoltage()”), I see no reason as to why I couldn’t make an aliased oscillator in one module, and then encapsulate the anti-aliasing algorithm in the next module.
But maybe there’s something I’m fundamentally misunderstanding here?
You can put milk into the tea. But you can’t get the milk back out.
Ah, OK. Your reasoning is correct: if you can determine the magnitude and the sub-sample time of the discontinuity, it will work. The problem is that you can only determine these things if you know exactly what the generator did. So with a naïve sawtooth, this might actually work. But that’s not very interesting — if you want a sawtooth VCO, there are plenty around that already use minBLEP.
What about a square wave? How will you know at what point in time the discontinuity occurred? And what if the VCO has some filtering on the output — then how can you know?
Basically the question is: “If I know exactly what aliasing is there, can I remove it?” The answer is that in many cases you probably can, but in almost all cases you don’t know exactly what aliasing is in there.
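For what it’s worth, the naïve-sawtooth case mentioned above really is tractable, precisely because the phase increment is known: the sub-sample time of each wrap falls out of the overshoot, which is just what a minBLEP insertion needs. A hypothetical sketch (the [0, 1) phase convention and function name are my own):

```python
def saw_wraps(freq_hz, sample_rate, num_samples):
    """For a naive sawtooth with a known phase increment, return
    (sample_index, frac) for each phase wrap, where `frac` is the
    fraction of a sample elapsed *since* the discontinuity."""
    delta_phase = freq_hz / sample_rate
    phase = 0.0
    wraps = []
    for n in range(num_samples):
        phase += delta_phase
        if phase >= 1.0:
            phase -= 1.0
            # `phase` is now the overshoot past the wrap point; dividing
            # by the increment converts it to a sub-sample offset.
            wraps.append((n, phase / delta_phase))
    return wraps

wraps = saw_wraps(4500.0, 48000.0, 100)  # 4.5 kHz saw at 48 kHz
```

Note this only works because `delta_phase` is known exactly — a downstream module looking at the samples alone has no access to it.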
But couldn’t you use lookahead to analyze the signal ahead of time, find discontinuity points and their magnitude, and then use that data to apply anti-aliasing to a delayed signal?
No. If it’s filtered, that will disguise the magnitude, and you won’t be able to tell which is which. And in any case, the sub-sample time of the discontinuity is even more difficult. There is no way to do this without having a perfect model, in your head/code, of what the source is.
But if you really don’t believe people, why don’t you code it up and see how it works?
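A quick sketch of the filtering objection: run a unit step through a one-pole lowpass whose coefficient the downstream module can’t know, and the naïve lookahead estimate of the discontinuity size — the largest sample-to-sample difference — both underestimates the true step and changes with the filter setting. (Python, illustrative; the one-pole form y[n] = y[n−1] + a·(x[n] − y[n−1]) is an assumption.)

```python
def smeared_step_estimate(a, length=32):
    """Filter a unit step with y[n] = y[n-1] + a*(x[n] - y[n-1]) and
    return the naive 'discontinuity size' guess: max |y[n] - y[n-1]|."""
    y, prev, max_diff = 0.0, 0.0, 0.0
    for _ in range(length):
        x = 1.0  # unit step: input jumps from 0 to 1 at the first sample
        y = y + a * (x - y)
        max_diff = max(max_diff, abs(y - prev))
        prev = y
    return max_diff

# The true discontinuity is 1.0, but the estimate depends entirely on
# the (unknown) filter coefficient -- lookahead can't recover the step.
est_light = smeared_step_estimate(0.9)  # gentle filtering
est_heavy = smeared_step_estimate(0.3)  # heavier filtering
```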
I’m sorry if I’m coming across as crass and argumentative. I’m asking questions in order to figure out where my understanding of how things work is incorrect. I’m not necessarily attempting to convince others, but rather I’m looking to be disproven.
Your explanation is succinct and clearly demonstrates why it won’t work. I appreciate you taking the time.
No problem. You didn’t sound argumentative, just persistent. And anyway, it was a fun breakfast-time thought exercise.