I’m interested in understanding a little more about how the VCV Rack code (not the modules, but the core engine) works. I’ve been browsing the code a bit, but I was wondering if anyone has advice on which files to concentrate on, and/or whether there is a discussion of the VCV architecture somewhere?
The RtAudio callback processes blocks (e.g., 64, 128, …) of samples, but it looks like modules process a single sample with each call to their process function (is that correct?). If so, I would need to buffer the incoming samples internally if I wanted to write a module that used the FFT, for example. But then chaining several such modules would make their latencies add. I guess the same thing would happen in a real Eurorack system…
Is the DSP graph organized so that module outputs are computed in a certain order (I couldn’t ascertain this)? This would seem to be important for block-based processing (but maybe not if modules process one sample at a time)?
Anyway, I’m just trying to get to grips with how things are done VCV-style, so any comments/insight are most welcome.
There is no block concept in the process call for a single module: each module processes a single sample.
So yes, if you need blocks to do some DSP processing (an FFT, for example), you need an intermediate buffer.
And yes, that buffer will introduce a delay.
And yes, if you chain the output of one block-processing module into the input of another block-processing module, you get the sum of the two block sizes as the sample delay.
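For example, the buffering pattern for a block-based effect inside a per-sample process call would look roughly like this. This is a minimal sketch using the Rack v1 Module API as I understand it (no widget or plugin registration shown); BLOCK_SIZE and processBlock() are placeholders for your FFT size and your actual block DSP:

```cpp
#include <rack.hpp>
using namespace rack;

static const int BLOCK_SIZE = 256;  // placeholder: would match your FFT size

struct BlockModule : Module {
	enum InputIds { AUDIO_INPUT, NUM_INPUTS };
	enum OutputIds { AUDIO_OUTPUT, NUM_OUTPUTS };

	float inBuffer[BLOCK_SIZE] = {};
	float outBuffer[BLOCK_SIZE] = {};
	int pos = 0;

	BlockModule() {
		config(0, NUM_INPUTS, NUM_OUTPUTS, 0);
	}

	void process(const ProcessArgs& args) override {
		// Collect one sample per engine step...
		inBuffer[pos] = inputs[AUDIO_INPUT].getVoltage();
		// ...and emit the sample computed from the previous block,
		// i.e. the module has a fixed latency of BLOCK_SIZE samples.
		outputs[AUDIO_OUTPUT].setVoltage(outBuffer[pos]);
		pos++;
		if (pos == BLOCK_SIZE) {
			// A full block is ready: run the block DSP (e.g. FFT ->
			// spectral processing -> inverse FFT) into outBuffer.
			processBlock(inBuffer, outBuffer, BLOCK_SIZE);
			pos = 0;
		}
	}

	// Placeholder for the real block processing.
	void processBlock(const float* in, float* out, int n) {
		for (int i = 0; i < n; i++)
			out[i] = in[i];
	}
};
```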
The graph is free: there is no real hierarchical dependency between modules (unlike a fixed block graph with timestamping, such as the AU graphs on macOS, for example).
The engine threads (there are now several) tell all the modules present in the current rack scene to process.
All the modules run their process function and move the results to their outputs.
Then a secondary loop traverses all the cables and moves the outputs to the inputs.
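Schematically, one engine sample step then looks something like this toy paraphrase (names and structure are illustrative, not the actual Rack engine code):

```cpp
#include <vector>

// Toy model of the engine's per-sample step.
struct Port { float voltage = 0.f; };

struct Module {
	std::vector<Port> inputs, outputs;
	virtual void process(float sampleTime) = 0;
	virtual ~Module() = default;
};

struct Cable {
	Port* output;  // source: some module's output port
	Port* input;   // destination: some module's input port
};

void engineStep(std::vector<Module*>& modules, std::vector<Cable>& cables, float sampleTime) {
	// 1) Every module computes one sample and writes it to its output ports.
	//    (In Rack this loop is spread across the engine worker threads.)
	for (Module* m : modules)
		m->process(sampleTime);

	// 2) A second pass over the cables copies output voltages to input ports
	//    for the next step. Because the copy happens after all modules have
	//    processed, module order doesn't matter, and every cable effectively
	//    carries one sample of delay.
	for (Cable& c : cables)
		c.input->voltage = c.output->voltage;
}
```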
Thanks for the detailed response – that helps a lot and more or less confirms what I understood about the code.
I wonder why the architecture for modules wasn’t made block-based rather than sample-based (other than for ease of implementation), since that would get rid of the additive latency when multiple block-based modules are chained together.
At the start I had the same doubts: how can a “single sample process” architecture deal with the overhead of the call chain?
In my mind block processing was so superior.
So I did my first real test: I ported an open-source reverb called MVERB, and I was astonished: it worked (with the single-sample process…)!
I was in awe!
And @Vortico explained that he was leveraging the enormous amount of fast internal cache that modern processors have, and their ability to keep some immutable code loaded in the pipelines… (or something like that…)
The idea of single-sample processing is an interesting one, and it obviously works (with VCV Rack as a nice existence proof).
I think it simplifies things too, since the specific ordering of modules is not important.
On the other hand, it means that the latency introduced by a chain of block-based modules is additive. Imagine having a chain of 10 modules, each of which performs an FFT using the same block size as the RtAudio callback. Each module would have to internally buffer blocksize samples, so the overall latency of the chain would be 10*blocksize. Of course, with real digital Eurorack modules the situation would be no different.
With block-based module processing, by contrast, the overall latency would be just 1*blocksize, since no internal buffering is needed. But then the ordering of modules in the DSP graph becomes important…
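To make the arithmetic concrete, here is a tiny standalone sketch (the sample rate, block size, and module count are just illustrative numbers):

```cpp
#include <cstdio>

int main() {
	// Illustrative numbers, not anything fixed by Rack.
	const float sampleRate = 48000.f;
	const int blockSize = 256;
	const int numModules = 10;

	// Sample-based engine: every block-processing module buffers its own block.
	float chainedSec = numModules * blockSize / sampleRate;
	// Hypothetical block-based engine: one block of latency for the whole chain.
	float sharedSec = blockSize / sampleRate;

	printf("chained: %.1f ms, shared: %.1f ms\n",
		chainedSec * 1000.f, sharedSec * 1000.f);
	// Prints: chained: 53.3 ms, shared: 5.3 ms
}
```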
I’m not following what you mean there… (But yes, I’m referring to the case of trees or forests, i.e., with no feedback.) The blocksize delay can’t ever be removed, though, since a block-based processor like the FFT needs to buffer a certain number of samples.
Yes, this is what I meant. But a modular system that only allows wiring in the form of trees or forests would be quite… boring, I think. Isn’t this the limitation that most DAWs impose on VSTs etc.?
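For what it’s worth, the ordering constraint we’re discussing can be sketched in a few lines: a block-based engine could process modules in topological order (Kahn’s algorithm below), and such an order exists exactly when the patch has no feedback cables. This is just an illustration of the limitation, not anything Rack actually does:

```cpp
#include <cstdio>
#include <queue>
#include <vector>

// Modules are numbered 0..n-1; edges[a] lists the modules fed by module a.
// Returns a valid block-processing order, or an empty vector if the patch
// contains a feedback cycle.
std::vector<int> processingOrder(int n, const std::vector<std::vector<int>>& edges) {
	std::vector<int> indegree(n, 0), order;
	for (int a = 0; a < n; a++)
		for (int b : edges[a])
			indegree[b]++;
	std::queue<int> ready;
	for (int m = 0; m < n; m++)
		if (indegree[m] == 0)
			ready.push(m);
	while (!ready.empty()) {
		int m = ready.front();
		ready.pop();
		order.push_back(m);  // safe to process module m's whole block now
		for (int b : edges[m])
			if (--indegree[b] == 0)
				ready.push(b);
	}
	if ((int)order.size() < n)
		order.clear();  // feedback cycle: no valid block order exists
	return order;
}

int main() {
	// A chain 0 -> 1 -> 2 sorts fine; add a cable 2 -> 0 and the order vanishes.
	std::vector<std::vector<int>> edges = {{1}, {2}, {}};
	for (int m : processingOrder(3, edges))
		printf("%d ", m);  // prints: 0 1 2
	printf("\n");
}
```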