For practical purposes, as a rule of thumb, what's a reasonable CPU usage to aim for? I have a general feeling that running at a constant high rate is going to be trouble. I think laptops, rather than servers, are designed with user interaction in mind, i.e. bursts of high resource use followed by periods of calm.
At the minute I'm keeping my patches below 30%. The fan isn't going on and off all the time, which feels good to me. A bit unscientific, I know, but does anyone have practical experience to share?
I only get concerned when the sound starts to degrade; then I either simplify the patch, look for more efficient alternative modules (rare now, as I've more or less arrived at a preferred module list), up the thread count, or (as a last resort) shut down internet access.
I also find I can get more synthesis 'grunt' if I use VCV mostly just to sequence & modulate other softsynths via VCV Host. For instance, my typical live-looping patch has two instances of u-he Hive and one of TAL Sampler driven by Entrian sequencers, with a couple of u-he Colour Copy instances on aux sends; only bass & percussion (Vult) are generated by VCV modules.
Edit: I’m missing Surge synth and look forward to VCV Host supporting VST3.
It really depends on the computer, but for sure lots of laptops will go into thermal slowdown at high loads. So if you’ve found a number that works for you, go with that. And of course remember that there are really inefficient modules out there. Mine are often 8x more efficient than some comparables.
That is an odd thing about VCV / VCV users. "Normal" software does not hesitate to spin up a bunch of threads to deal with load, but for most VCV users increasing the audio thread count is a last resort, rarely tried.
By “Normal” software I mean any DAW, any video editor, plenty of non-media applications.
Yes. I've looked at many, and I can tell you it's almost always something really simple that is easily fixed, but clearly a lot of devs don't bother to measure their stuff or don't seem to care much about this. So it's usually pretty easy to fix these things and make a module reasonably efficient. Then, on top of that, you can go crazy and really go for it to make things even faster. Most decent devs do the first thing: get rid of the obviously slow stuff and take the easy wins on performance. It would be great if everyone took it to this reasonable level. That's usually what I do.
Every now and then I will really go for it, just to see if I can. My latest module, "Basic VCO", is an example of that: I really worked to get it as fast as I could, and… it is really fast. Oh, also my recent "Organ Three" is quite fast (they both use the same VCO under the covers).
Yeah, my modules certainly aren't the most efficient out there… definitely something I want/need to work on. But since you asked, here are some reasons why some of my modules use a lot of resources (I can't speak too much for other developers):
Accurate physics simulation: particularly in the "virtual analog" world, you sometimes need to do some complex operations that you could probably avoid if you chose instead to design your effect entirely in the digital domain. Some examples of this for me are Chow Tape (which contains a simulation of magnetic hysteresis) and ChowDer (which contains 10-12 scattering junctions for simulating series/parallel connections between electrical components).
Oversampling: I like to have internal oversampling in any of my nonlinear modules, since I don’t like the sound of aliasing distortion. Essentially with 2x oversampling a module will eat up ~2x the CPU. Eventually, I’d like to have optional amounts of oversampling, so users can choose their desired balance between sound quality and CPU usage. This may seem redundant since users can use Rack at a higher sample rate, but even when running Rack at a higher sample rate, aliasing can still occur if a nonlinear module gets signal at a high enough frequency from a preceding module. Oversampling can also help with stability in some cases.
A lot of complex operations: for example, my RNN module does ~40 multiplies plus three nonlinear function evaluations every sample. Aside from using cheaper approximations for the nonlinear functions, I'm not sure how much I can do about this, but to a certain extent I'm just happy that I'm able to get a neural network working at all in this context! (I'm also using the Eigen library, which does internal SIMD parallelisation for some of the more expensive operations.)
Anyway, hope this explanation helps! I’m currently trying to work on a few new modules, but after that hopefully I’ll have time to do some serious performance optimizations.
I can accept that too many blank plates are bad (particularly the ones with easter eggs), but what about scopes with pretty shapes? Omri's scope looks like confetti and psychedelic flowers. I need three of them.
edit to add that the Nysthi scope can look like a My Bloody Valentine video from 1991. Sometimes (heh) I use three of those as well. I need more processor power so that I can actually make noises.
I know you are joking, but if you have a tolerable GPU there's no reason an eye-candy panel should bork your CPU. It should all be running on the UI thread, so it should only be fighting with other graphics for screen time. It shouldn't cause pops and clicks, unless you have a really bad GPU that is sending the CPU into thermal limiting a lot.