I have done extensive development using GPU parallel computing, mostly via NVIDIA CUDA and Microsoft DirectCompute, and even mixtures of the two. Most of this work involved graphics, but some was general-purpose simulation, such as hippocampus-based vision processing for computational neuroscience. With clever coding, virtually anything can be done, but it comes with a tremendous amount of development work that ends up being OS-limited unless OpenCL is used. The results can be absolutely astounding on a high-end GPU (I currently have an NVIDIA GTX 1080, but a 3-GPU NVIDIA Tesla supercomputer has been sitting unused in the corner since it became outdated within a year of when I built it in 2009). Probably the most impressive thing I have done is implement procedural textures in CUDA and call them from the Unity game engine, and even call the hippocampus simulation from Unity using R32G32B32A32Float textures as generic data objects, passed back and forth between Unity and my CUDA DLL by pointers. I’ve dabbled with audio processing, but that was much less successful than graphical processing.
My current Meander module under development uses the same “fractional Brownian motion” (fBm) PRNG as my CUDA and DirectCompute parallel procedural textures, but does it all on the CPU. In this case, though, I’m controlling note generation via the results rather than synthesizing or modifying audio.
GPU parallel computing is very fun for me, but it is also among the most challenging things I’ve ever coded. The idea of doing something like this within VCV Rack crosses my mind every day, but I don’t think it is a realistic or necessary direction for audio.
I’m pretty sure that everything I have done via GP-GPU programming has been with scalars.