Self-oscillating SVF Questions

I have a use case where I need:

  • to be able to run a number of state variable filters (SVFs) at once (say 6-8ish), so performance is particularly important
  • they should be capable of self-oscillation, specifically in a stable/controlled way (so they can reliably generate an X V pp sine, or sine+harmonics, wave)

However, I’m mainly used to working on oscillators (or CV/gates), so I don’t have much experience with filters, at least self-oscillating ones, and I’d like to make sure the implementation is bombproof yet performant. So below is a series of questions to improve my understanding, particularly given that there is a non-linearity involved (so some of the theoretical results for linear systems no longer hold, I think). As for what I’ve explored so far:

Option A: I’ve implemented the filter as described in Fig. 21 of Digital Sound Generation (x2 oversampled). Empirically this appears to be stable up to fc = 0.4, and seems to work pretty well for my application.

Option B: Alternatively I’ve come across the excellent SVF by @janne808 (link) which comes with Semi-implicit Euler (needs a code modification to actually use), Trapezoidal, and Inverse Trapezoidal methods, and variable oversampling/decimation. This appears to be based on Fig. 6.51 of VA Filter design. Depending on the settings (method and oversampling rate) the CPU usage ranges from 0.5 to 11% on my M1 Mac.

  1. How should I evaluate the filter’s quality? Alternatively, how would you stress-test a filter to evaluate it? Filter sweeps, audio-rate FM, pinging, alias checks? What other tests should I be doing? Clearly from option B there is an accuracy/speed trade-off; what minimum level of quality would the various members of this community expect? I’m personally OK if the filter is capped at fc = 0.4 (17.64 kHz at a 44.1 kHz sample rate), but are you? I can’t really tell the difference between Semi-implicit Euler (0.5% CPU) and Inverse Trapezoidal (11% CPU), or at least say which was “better”, but again maybe I’m not doing the right tests.
  2. During the early phases of development the filter would blow up. Whilst option A appears stable now (as long as fc stays below 0.4), should I still put some sort of limiter on the output, just in case?
  3. If users run ludicrously high sample rates (again, this isn’t something I do, but the Vult thread made me think I’m not thinking carefully enough about this issue), should I do anything differently? E.g. doing the internal calculations in double precision?
  4. One for @janne808 (or anyone else who knows): there is a damping factor beta on the bandpass term, but I can’t see it referenced anywhere. Was it just added empirically? Any reference for why it’s there?
  5. Any other bombproof self-oscillating SVF implementations I can take a look at for inspiration?

Big list of questions, so only have a pop at the bits that interest you! Thanks in advance!


The Cytomic SVFs are quite nice. I have an implementation here, which is slightly different from the way it’s derived in the paper, but gives the same result. My version is not really designed for self-oscillation, but you can sort of do it if you make the Q value really large. If you’re looking to do self-oscillation, it’s probably best to directly control the “damping” factor k rather than Q.

Werner and McClellan’s SVF from this paper is also cool, since they have an extra parameter which controls low-frequency damping. Depending on what the input is to your self-oscillating filters, having the low-frequency damping can help to isolate the resonant frequency. For my implementation I ended up doing my own derivation rather than using the one in the paper. Compared to the Cytomic SVF, I think the performance of processing samples is similar, but computing the filter coefficients is definitely faster with the Cytomic SVF.


Amazing thank you for the response, some interesting reading there!

Yeah, I have been playing with @andy-cytomic’s SVF design (and a few others, including Will Pirkle’s). Indeed, in the Q → inf / k → 0 limits you can get self-oscillation, but it doesn’t appear reliable enough for the application (nor would I expect it to be, I think). Specifically, what I mean is that depending on the magnitude of the noise/signal used to trigger the initial instability, you can get wildly differing amplitudes of self-oscillating tone out (including unbounded, where it just blows up eventually). If I understand correctly, this is why filters with non-linearities in the feedback loop are required if you want to control/tame the self-oscillation. The two options in the OP both allow “negative” resonances (which would otherwise certainly blow up), but control them via non-linearities in the feedback path. It’s this “control” of the self-oscillating signal that has led me to favour them so far.

It’s not clear to me exactly how I would modify, say, the Cytomic SVF to add this nonlinearity, but I will think about it. The recent Wasp emulation must have the non-linearity as well but that will be part of a more complex circuit model no doubt.

I’m definitely in over my head in terms of the theory (and would like to fix that, so am slowly going through VA Filter Design), but thought this would be an interesting discussion for the forum here.

EDIT: this explains it more clearly: [Screenshot 2022-07-26 at 15.32.19]

Yes, the damping/loss term is there to make the low frequencies behave better, similar to an electronic circuit, which isn’t built using superconductors.

1 Like

Surprised no-one has mentioned the basics: most filters that self-oscillate have a small amount of noise added to kick-start them.

1 Like

Agreed, this is required; in all the examples I’ve tried, I’ve been adding noise of the order 1e-6 → 1e-5.

To attempt to answer one of my own questions: the various integration methods have different intrinsic numerical stability limits. The explicit handling of the non-linearity in the Semi-implicit Euler method means that relatively conservative limits are placed on the max f_c, whereas the other methods permit higher f_c. Oversampling by a factor of N also raises the max stable f_c by roughly a factor of N. So for an application with a target max filter cutoff f_c_max, you can trade off integration method against oversampling rate to minimise CPU.
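To make that trade-off concrete, here’s a minimal sketch of a semi-implicit Euler (Chamberlin-style) SVF update with simple N-times oversampling. This is my own toy version, not @janne808’s actual code, and the function name and parameters are just my choices; note the integrator gain shrinks as the oversampling factor grows, which is what buys the extra stable cutoff range:

```python
import math

def svf_step(x, state, fc, fs, damping=1.0, oversample=2):
    """One semi-implicit Euler (Chamberlin-style) SVF step, N-times
    oversampled. 'state' is [lp, bp]; returns (lp, bp, hp)."""
    lp, bp = state
    # integrator gain per sub-step; smaller at higher oversampling,
    # which extends the stable cutoff range
    f = 2.0 * math.sin(math.pi * fc / (fs * oversample))
    for _ in range(oversample):
        lp += f * bp                 # integrate the *old* bandpass
        hp = x - lp - damping * bp   # highpass from the fresh lowpass
        bp += f * hp                 # integrate the *new* highpass
    state[0], state[1] = lp, bp
    return lp, bp, hp
```

(In a real implementation you’d band-limit the input before the sub-steps and decimate properly afterwards; this just shows the integration order that makes the method semi-implicit.)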

Have a look at the talk I gave at ADC, and also the accompanying slides for how to add non-linearities to filters: Technical Papers – Cytomic

For an SVF you just need slightly negative k, a nonlinearity in parallel with the k feedback to boost this signal (adding more damping), and a little noise at the input, or possibly non-zero c1eq and/or c2eq if it’s for a drum synth type thing where you want a click / thump.

PS: these are the equations to solve for the SVF with the damping non-linearity:

v0 == vin - ((kg*v1 + kfeq) + k*v1) - v2
0 == -g*v0 + (v1 - ic1eq)
0 == -g*v1 + (v2 - ic2eq)

kf = f(v1)
kg = f'(v1)
kfeq = kf - kg*v1

Trivially solving the system of equations gives:
v0 = (vin - ic2eq - kfeq - ic1eq*(g + k + kg))/(1 + g*(g + k + kg))
v1 = g*v0 + ic1eq
v2 = g*v1 + ic2eq

and the state update is the regular:
ic1eq += 2*(v1 - ic1eq)
ic2eq += 2*(v2 - ic2eq)

The non-linear functions you want to try start at f(0) = 0 and get bigger as x moves away from zero.

1 Like
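Thanks! To check my understanding, here’s how I’d translate those equations into per-sample code. This is my own sketch, not Andy’s implementation: I’ve assumed a cubic non-linearity f(v1) = a*v1³ (it satisfies f(0) = 0 and grows away from zero), and I linearise it around the previous sample’s v1, so kf, kg, kfeq are recomputed once per sample; a real solver might iterate to convergence instead.

```python
import math

def svf_selfosc_step(vin, state, g, k, a=0.5):
    """One sample of the SVF with a damping non-linearity in parallel
    with the k feedback. state = [ic1eq, ic2eq, v1_prev].
    g = tan(pi * fc / fs); k slightly negative for self-oscillation.
    f(v1) = a*v1**3 is an assumed non-linearity, linearised about the
    previous sample's v1 (a single iteration)."""
    ic1eq, ic2eq, v1p = state
    kf = a * v1p ** 3           # kf   = f(v1)
    kg = 3.0 * a * v1p ** 2     # kg   = f'(v1)
    kfeq = kf - kg * v1p        # kfeq = kf - kg*v1
    d = g + k + kg
    v0 = (vin - ic2eq - kfeq - ic1eq * d) / (1.0 + g * d)
    v1 = g * v0 + ic1eq         # bandpass
    v2 = g * v1 + ic2eq         # lowpass
    # regular trapezoidal state update
    state[0] = ic1eq + 2.0 * (v1 - ic1eq)
    state[1] = ic2eq + 2.0 * (v2 - ic2eq)
    state[2] = v1
    return v0, v1, v2
```

With k slightly negative (e.g. -0.05) and a tiny kick at the input, this should grow into a bounded oscillation whose amplitude is set by where the cubic damping balances the negative k, rather than blowing up as the purely linear filter would.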

The VCV Rack A-124 Wasp filter models 10 major non-linearities in the analog SVF, including 2 OTA macro models, 6 MOSFETs, and the 2 resonance-limiting diodes. There are 25 simultaneous equations being solved, and at each sample multiple iterations are used to converge on the solution. Also modelled are 16 resistors, 2 potentiometers, 5 capacitors, and 6 voltage sources.