DanT.Synth Noodles

Yeah, I maintain a circular buffer of 1024 samples, but 512 should be fine as well. I also maintain five running totals: Σx, Σx², Σy, Σy², and Σxy. For each new sample I subtract the oldest sample's contribution from the accumulators, store the new sample in its place, and then add its contribution. This avoids summing all 1024 values over and over again.
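A minimal sketch of that scheme (the struct and member names are my own, not from the actual module source): a fixed-size circular buffer for each signal, with the five running totals updated incrementally on every push.

```cpp
#include <array>
#include <cstddef>

// Sliding-window statistics over a circular buffer: subtract the oldest
// sample's contribution, overwrite it with the new sample, add that in.
struct RunningSums {
    static const std::size_t N = 1024;      // window length (512 works too)
    std::array<float, N> bufX{}, bufY{};    // circular buffers for both signals
    std::size_t pos = 0;                    // index of the oldest sample
    double sumX = 0, sumX2 = 0, sumY = 0, sumY2 = 0, sumXY = 0;

    void push(float x, float y) {
        // Subtract the oldest sample's contribution...
        float ox = bufX[pos], oy = bufY[pos];
        sumX  -= ox;
        sumY  -= oy;
        sumX2 -= double(ox) * ox;
        sumY2 -= double(oy) * oy;
        sumXY -= double(ox) * oy;
        // ...store the new sample in its place, and add it in.
        bufX[pos] = x;
        bufY[pos] = y;
        sumX  += x;
        sumY  += y;
        sumX2 += double(x) * x;
        sumY2 += double(y) * y;
        sumXY += double(x) * y;
        pos = (pos + 1) % N;                // advance to the next-oldest slot
    }
};
```

Each push is O(1) regardless of window length, which is the whole point.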

There is a small but non-zero possibility of cumulative error building up in the accumulators due to rounding, especially if the signal wavelengths are some multiple of the buffer length. So I also maintain a second set of accumulators without the subtract stage, and every 1024 samples I swap their values in and reset them to zero. But I think you'd have to run the device for a long time before such compensation is really necessary.
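The compensation idea could look something like this (a sketch with one sum for brevity; names are mine): a "shadow" accumulator that only ever adds, so it carries no cancellation error, and is swapped in once per full pass through the buffer.

```cpp
#include <array>
#include <cstddef>

// Drift compensation: alongside the incrementally updated sum, keep an
// add-only shadow sum and swap it in every N samples, then reset it.
struct DriftFreeSum {
    static const std::size_t N = 1024;
    std::array<float, N> buf{};
    std::size_t pos = 0;
    double sum = 0;     // subtract-oldest/add-newest; rounding error can drift
    double shadow = 0;  // add-only sum over the current pass through the buffer

    void push(float x) {
        sum -= buf[pos];
        buf[pos] = x;
        sum += x;
        shadow += x;                 // no subtraction, so no cancellation error
        pos = (pos + 1) % N;
        if (pos == 0) {              // once per full buffer:
            sum = shadow;            // the shadow equals the exact window sum here
            shadow = 0;              // start a fresh pass
        }
    }
};
```

At the swap point the buffer contains exactly the last N samples pushed, so the shadow and the true window sum coincide and the swap is safe.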


I suspect I’m doing the same statistical correlation as you, and it can be formulated just using the running totals:

From Wikipedia: Correlation and dependence
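The Pearson correlation coefficient can indeed be computed directly from those five totals. A sketch (the function name is mine), using the standard single-pass formula r = (n·Σxy − Σx·Σy) / √((n·Σx² − (Σx)²)(n·Σy² − (Σy)²)):

```cpp
#include <cmath>

// Pearson correlation coefficient from the five running totals.
// Returns 0 when either signal is constant (zero variance).
double pearson(double n, double sx, double sx2,
               double sy, double sy2, double sxy) {
    double num = n * sxy - sx * sy;
    double den = std::sqrt((n * sx2 - sx * sx) * (n * sy2 - sy * sy));
    return den > 0.0 ? num / den : 0.0;
}
```

So no second pass over the buffer is ever needed: the correlation falls straight out of the accumulators.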


Ok, I took a very quick look at your source; it seems like we are doing pretty much the same thing, but you have a better way to accumulate the sums.

I think I may have thought the buffer needed to be cleverer than it does. I originally started with a ring buffer, but changed to a plain array when I needed the iterators for the accumulator.

I should just keep it simple, stupid.

Thanks for the help. I think I know how to improve the code now, and it's good to know that, at least in theory, I am doing the right calculation.


If anybody was confused by my term "pipeline accumulator":

The circular buffer is used as a FIFO pipeline. We are not interested in anything inside the pipe, only in what is at the ends.

As stuff goes into the pipe, we add it to the accumulator, and we subtract whatever comes out of the other end. Hence the accumulator always reflects the sum of the pipe's contents.
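The pipe metaphor in its simplest form (a sketch using `std::queue`; names are mine): touch the sum only at the two ends.

```cpp
#include <queue>
#include <cstddef>

// A FIFO "pipe" with a running sum: add at one end, subtract at the other.
struct PipeSum {
    std::queue<float> pipe;
    double sum = 0;
    std::size_t capacity = 1024;

    void push(float x) {
        pipe.push(x);                 // new sample goes in one end...
        sum += x;
        if (pipe.size() > capacity) {
            sum -= pipe.front();      // ...oldest comes out the other
            pipe.pop();
        }
    }
};
```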


Thanks for the clarification. Again, I am a C++ novice, but I think a FIFO buffer would generally be called a queue, whereas "pipeline" could be confused with |, file pipes, or function chaining (similar to fluent-style implementations).

Which actually does help me a lot…

I believe I could implement a variable window size for my correlation module by simply keeping a running sum, and then each step either:

continue to remove samples from the queue and the sum until the window is reduced to the correct size,

or not remove samples from the queue or sum, and let them fill up to the correct size.
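That variable-window idea could be sketched like this (names are mine), using `std::deque` as the queue: each push, the window shrinks toward a smaller target by popping extra samples, or grows toward a larger one simply by not popping yet.

```cpp
#include <deque>
#include <cstddef>

// A resizable sliding window with a running sum. Changing `target` at
// runtime makes the window shrink (pop extras) or grow (fill up) smoothly.
struct VariableWindow {
    std::deque<float> q;
    double sum = 0;
    std::size_t target = 512;   // desired window size, adjustable at runtime

    void push(float x) {
        q.push_back(x);         // new sample always enters the queue
        sum += x;
        // Shrink from the old end until we are at the target size;
        // if the target grew, this loop does nothing and the queue fills up.
        while (q.size() > target) {
            sum -= q.front();
            q.pop_front();
        }
    }
};
```

As noted above, the work per sample stays O(1) once the window is at its target size, so a longer window costs nothing extra.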

I don’t know where you saw the recommendation of 512 samples, but presumably it is linked to the engine sample rate. So a variable-length buffer is useful, and I would implement it as you just suggested. Using the queue means that a longer buffer does not increase the workload.


Still haven’t got back to my experimental new modules, but I did create this generative patch that I could listen to (and am listening to) for hours and hours…

Getting excited for V2, also picked up the VCV Drums that I am really enjoying…