Is vector-based processing with GPUs viable in VCV?

Hi there,

I have a feeling that many VCV Rack users have both powerful CPUs and GPUs at their disposal, and that the GPUs are rarely stressed.

I’m just starting out with signal processing, and much of it seems to be expressed through linear algebra. One side of my brain tells me “GPUs can process vectors much faster than CPUs, and DSP can largely be expressed through vectors”. The other side goes: “DSP deals in discrete information, integers, which are the lifeline of CPUs, not GPUs.”

Anyone care to enlighten me on this?

Using GPUs for audio processing tends to be a disappointing experience because it’s so expensive (slow, and therefore a source of latency problems) to transfer the audio between the main CPU/memory and the GPU. Also, as the poster below mentioned, VCV Rack already uses the GPU quite a lot because its GUI is based on OpenGL. There’s also a lot of variability between GPU hardware and drivers, which can cause problems.
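To put rough numbers on that: Rack’s engine asks every module for one sample at a time, so the budget per sample is tiny compared to the cost of even an empty trip to the GPU. A back-of-the-envelope sketch in C++ (the ~50 µs round-trip figure is an assumed ballpark for a kernel launch plus small transfers, not a measurement):

```cpp
#include <cstdio>

// Back-of-the-envelope: Rack-style per-sample budget vs. an assumed
// cost for one CPU->GPU->CPU round trip (kernel launch + small copies).
// The 50 us figure is an assumption for illustration, not a measurement.
int main() {
    const double sampleRate      = 44100.0;           // common engine rate
    const double perSampleBudget = 1e6 / sampleRate;  // microseconds
    const double gpuRoundTripUs  = 50.0;              // assumed round-trip cost

    std::printf("per-sample budget:      %.1f us\n", perSampleBudget); // ~22.7 us
    std::printf("assumed GPU round trip: %.1f us\n", gpuRoundTripUs);
    // If a single dispatch costs more than the whole per-sample budget,
    // offloading sample-by-sample work can never keep up.
    return 0;
}
```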


I didn’t really understand much of that, but I can assure you that many people here have humble PCs. Mine is about 6 years old and was mid-tier when I got it. Others use laptops. VCV uses a lot of GPU because it does its UI with OpenGL, not because the audio processing is hard. thanx

I see. I’m a bit blindsided by having the newest from NVIDIA and seeing the GPU usage stay very low; also, in my previous job some datasets were processed by PCs with something like 4 GPUs because CPUs didn’t cut it.

Thanks for the answer!

Yep, makes sense when I think about why VCV Rack was invented to begin with. You’re probably right.

I disagree with this. Compared to other fields, almost nothing in audio can take advantage of linear algebra or massively parallel computation. Do you know of such an algorithm?

Using GPUs for non-graphics computation is feasible if there are no realtime constraints. With audio, the data needs to be computed really quickly: in a typical DAW situation the output audio buffer needs to be ready in about 10 milliseconds, and ideally you don’t want to come anywhere close to that deadline.
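Where that 10 ms figure comes from: the deadline is just the buffer length divided by the sample rate. A quick worked example (the buffer sizes are common choices, used here for illustration, not taken from any particular DAW):

```cpp
#include <cstdio>

// Audio callback deadline = frames per buffer / sample rate.
int main() {
    const double sampleRate = 48000.0;
    const int sizes[] = {64, 256, 512};
    for (int frames : sizes) {
        double deadlineMs = 1000.0 * frames / sampleRate;
        std::printf("%4d frames @ 48 kHz -> %.2f ms deadline\n", frames, deadlineMs);
    }
    // 512 frames gives ~10.7 ms, roughly the "about 10 milliseconds"
    // mentioned above; you want to finish well inside that, every time.
    return 0;
}
```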

I’m brand new to DSP and audio; I meant that I keep noticing literature where topics are framed through linear algebra. That’s what led me to ask whether GPUs could be viable in something like VCV, since they’re good at that type of processing.


What kind of algorithms are you reading about in the literature? Convolutions and ODEs, for example, can be stated as matrix operations, but generally they’re not solved that way, because of latency and/or because better algorithms exist.
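To make the convolution point concrete, here is a small C++ sketch of the same FIR filter written both ways: the direct loop that real-time code actually uses, and the equivalent matrix-vector product with the Toeplitz convolution matrix (the kernel and input values are made up for illustration):

```cpp
#include <cstdio>
#include <vector>

// The same FIR convolution two ways: the direct loop used in real-time
// code, and the matrix-vector product with the Toeplitz convolution
// matrix T, where T[i][j] = h[i - j].
std::vector<double> convDirect(const std::vector<double>& h,
                               const std::vector<double>& x) {
    std::vector<double> y(x.size() + h.size() - 1, 0.0);
    for (size_t n = 0; n < x.size(); ++n)
        for (size_t k = 0; k < h.size(); ++k)
            y[n + k] += x[n] * h[k];
    return y;
}

std::vector<double> convMatrix(const std::vector<double>& h,
                               const std::vector<double>& x) {
    size_t rows = x.size() + h.size() - 1;
    std::vector<std::vector<double>> T(rows, std::vector<double>(x.size(), 0.0));
    for (size_t j = 0; j < x.size(); ++j)      // fill column j with shifted h
        for (size_t k = 0; k < h.size(); ++k)
            T[j + k][j] = h[k];
    std::vector<double> y(rows, 0.0);          // y = T x
    for (size_t i = 0; i < rows; ++i)
        for (size_t j = 0; j < x.size(); ++j)
            y[i] += T[i][j] * x[j];
    return y;
}

int main() {
    std::vector<double> h = {0.25, 0.5, 0.25};  // simple smoothing kernel
    std::vector<double> x = {1, 0, 0, 1, 1};
    auto a = convDirect(h, x);
    auto b = convMatrix(h, x);
    for (size_t i = 0; i < a.size(); ++i)       // the two columns match
        std::printf("%zu: %.3f  %.3f\n", i, a[i], b[i]);
    return 0;
}
```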

Good question, I might even be using the wrong “starting literature”, heh. In “Signal Processing for Communications” (2008), for instance, many (or most?) topics are expressed through things like scalars, (Hilbert) spaces, changes of basis, and complex-valued sequences/functions, i.e. linear algebra. I’ve dabbled with linear algebra before, so I figured I’d continue learning this way.
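The “change of basis” framing can be made very concrete: the DFT is literally a matrix-vector product against a basis of complex exponentials. A minimal C++ sketch of a direct O(N²) DFT (real code would use an FFT; the input samples are arbitrary):

```cpp
#include <complex>
#include <cstdio>
#include <vector>

// Direct O(N^2) DFT: X = W x, where row k of W holds the conjugated
// complex exponential exp(-2*pi*i*k*n/N).
int main() {
    const int N = 4;
    const double pi = 3.14159265358979323846;
    std::vector<std::complex<double>> x = {1.0, 2.0, 0.0, -1.0};
    for (int k = 0; k < N; ++k) {
        std::complex<double> X = 0.0;
        for (int n = 0; n < N; ++n) {
            double angle = -2.0 * pi * k * n / N;
            X += x[n] * std::polar(1.0, angle);  // row k of W dotted with x
        }
        std::printf("X[%d] = %+.3f %+.3fi\n", k, X.real(), X.imag());
    }
    return 0;
}
```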

Machine learning might be the thing that finally brings GPUs into audio applications. They’re not really useful for traditional signal filtering, for the reasons mentioned in this thread, but that doesn’t mean they’re not useful in other areas, which I would say are largely unexplored at this point for audio processing.

what he said.


I have done extensive development using GPU parallel computing, mostly via NVIDIA CUDA and Microsoft DirectCompute, and even mixtures of the two. Most of this work involved graphics, but some of it was general-purpose simulation, such as computational-neuroscience modelling of hippocampus-based vision processing. With clever coding, virtually anything can be done, but that comes with a tremendous amount of development work that ends up being OS-limited unless OpenCL is used.

The results can be absolutely astounding on a high-end GPU (I currently have an NVIDIA GTX 1080, but also have a 3-GPU NVIDIA Tesla supercomputer board sitting unused in the corner, since it was outdated within a year of when I built it in 2009). Probably the most impressive thing I have done is implement procedural textures in CUDA and call that from the Unity game engine, and even call the hippocampus simulation from Unity, using R32G32B32A32Float textures as generic data objects passed back and forth between Unity and my CUDA DLL by pointers. I’ve dabbled with audio processing, but that was much less successful than graphics processing.

My Meander module, currently under development, uses the same “fractal Brownian motion” (fBm) PRNG as my CUDA and DirectCompute parallel procedural textures, but does it all on the CPU. In this case, though, I’m controlling note generation with the results rather than synthesizing or modifying audio.
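For anyone unfamiliar with the technique: fBm sums several octaves of a smooth noise function, doubling the frequency and halving the amplitude each octave. The C++ sketch below is a generic illustration of that idea driving note choice; it is not Meander’s actual code, and the hash-based value noise is my own stand-in:

```cpp
#include <cmath>
#include <cstdint>
#include <cstdio>

// Cheap integer-hash noise in [-1, 1] (classic Perlin-style trick);
// a stand-in smooth-noise source for this sketch.
static double hashNoise(int32_t n) {
    uint32_t m = (uint32_t)n;
    m = (m << 13) ^ m;
    m = m * (m * m * 15731u + 789221u) + 1376312589u;
    return 1.0 - (double)(m & 0x7fffffffu) / 1073741824.0;
}

// Value noise: linear interpolation between hashes at integer lattice points.
static double valueNoise(double x) {
    int32_t i = (int32_t)std::floor(x);
    double f = x - (double)i;
    return hashNoise(i) * (1.0 - f) + hashNoise(i + 1) * f;
}

// fBm: sum octaves, doubling frequency (lacunarity 2) and halving
// amplitude (gain 0.5) each octave.
static double fbm(double x, int octaves) {
    double sum = 0.0, amp = 0.5, freq = 1.0;
    for (int o = 0; o < octaves; ++o) {
        sum += amp * valueNoise(x * freq);
        freq *= 2.0;
        amp *= 0.5;
    }
    return sum;
}

int main() {
    // Use the fBm stream to pick scale degrees, in the spirit of
    // driving note generation rather than audio synthesis.
    for (int step = 0; step < 8; ++step) {
        double v = fbm(step * 0.37, 5);                  // v roughly in [-1, 1]
        int degree = (int)std::lround((v + 1.0) * 3.5);  // map to 0..7
        std::printf("step %d: fbm = %+.3f -> degree %d\n", step, v, degree);
    }
    return 0;
}
```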

GPU parallel computing is great fun for me, but it’s also among the most challenging things I’ve ever coded. The idea of doing something like this within VCV Rack crosses my mind every day, but I don’t think it’s a realistic or necessary direction to go for audio.

I’m pretty sure that everything I have done via GP-GPU programming has been with scalars.


Kinda wish that next to the heart for “like” there was an icon for “impressed” for me to click.

DSP is a big field, and quite a lot of it uses linear algebra, at least at a theoretical level. But the specialized subset that applies to Rack, real-time DSP for playing and simulating instruments, really doesn’t at runtime.

Also, as folks noted, the hard part of GPU programming is getting your problem onto and off of the GPU. The GPU does well when your question and answer are “small” but your intermediate calculation is “big”: multiply this small vector by this series of massive constant matrices and give me back a scalar, and do that a squillion times; you load the matrices once, and since the vector and scalar are small, the I/O is dwarfed by the compute. The kind of signal generation and processing Rack does has the exact opposite shape: most of the time is spent pushing one sample through a big, heterogeneous, dynamic code path which sits in the instruction cache of the CPU (if you are lucky) but is not really amenable to the GPU, even if you could get the samples there quickly enough.
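Here is a toy comparison of those two shapes as compute-per-element-of-I/O ratios. Every number below is a made-up round figure for illustration, not a measurement:

```cpp
#include <cstdio>

// Toy compute-to-I/O comparison for the two workload shapes described
// above. All sizes are illustrative round numbers.
int main() {
    // GPU-friendly shape: small vector in, scalar out, with ten large
    // constant matrices resident on the device.
    double ioElems = 1024.0 + 1.0;                  // vector in + scalar out
    double flops   = 10.0 * 2.0 * 1024.0 * 1024.0;  // ten 1024x1024 mat-vecs
    std::printf("matrix pipeline:   %8.0f flops per element of I/O\n",
                flops / ioElems);

    // Rack-like shape: one sample in, one sample out, modest work per sample.
    double ioPerSample    = 2.0;                    // sample in + sample out
    double flopsPerSample = 200.0;                  // assumed per-module cost
    std::printf("per-sample module: %8.0f flops per element of I/O\n",
                flopsPerSample / ioPerSample);

    // The first shape amortizes the transfer cost over a mountain of
    // compute; the second is dominated by the transfer itself.
    return 0;
}
```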

If you had some super clever calibration of an algorithm (say you used TensorFlow to learn how to simulate something, and then had a set of parameters that formed your DSP runtime), perhaps that model evaluation could happen on the GPU every 1000 samples or some such. But I’m just making things up at that point.

BUT, a very interesting idea is the ROLI language SOUL (https://github.com/soul-lang/SOUL), where they are essentially trying to do something equivalent to GLSL for audio: you write your code in a way that lets it run on different parts of your hardware chain. The analogy is that GLSL means you write code which compiles onto your graphics card for rendering, while SOUL means you write DSP code which compiles into your speakers for effects. I’m definitely paying close attention to that; it’s the most likely thing I’ve seen to give something like Rack an experience that is something-like-a-GPU-for-graphics, just for audio.