Max practical CPU usage to aim for?

For practical purposes, as a rule of thumb, what's a reasonable CPU usage to aim for? I have a general feeling that constantly high usage is going to be trouble. I think laptops, rather than servers, are designed with user interaction in mind, i.e. bursts of high resource use and periods of calm.

At the minute I'm keeping my patches below 30%. The fan isn't going on and off all the time, which feels good to me. A bit unscientific, I know, but any practical experience to share?

What makes you think that? If your cooling works properly, constant high CPU usage over longer periods shouldn't be a problem.

My practical experience is that it doesn't matter; I've played Rack, and similarly processor-intensive games, for hours without stopping and never had problems.

…years of experience in IT and the many ways computers can let you down!

As much as the stress, I guess I'm thinking of the overhead the OS needs, which might cause stutters and glitches. Certainly experienced that with DAWs.

Thanks for the reply; it's the actual experiences that count.

I only get concerned when the sound starts to degrade; then I either simplify the patch, look for more efficient alternative modules (rare now, as I've more or less arrived at a preferred module list), up the thread count, or (last resort) shut down internet access.

I also find I can get more synthesis 'grunt' if I use VCV mostly just to sequence & modulate other softsynths via VCV Host. For instance, my typical live-looping patch has two instances of u-he Hive and one of TAL Sampler driven by Entrian sequencers, with a couple of u-he ColourCopy on aux sends; only bass & percussion (Vult) are generated by VCV modules.

Edit: I’m missing Surge synth and look forward to VCV Host supporting VST3.

It really depends on the computer, but for sure lots of laptops will go into thermal slowdown at high loads. So if you’ve found a number that works for you, go with that. And of course remember that there are really inefficient modules out there. Mine are often 8x more efficient than some comparables.

My mobile workstation (Dell Precision 7710) sometimes stays at 100% CPU + high GPU usage for 3 hours flat when working on photogrammetry… I can cook aubergines on it :upside_down_face:

That is an odd thing about VCV / VCV users. “Normal” software does not hesitate to spin up a bunch of threads to deal with loads, but for most VCV users increasing the audio thread count is a last resort rarely tried.

By “Normal” software I mean any DAW, any video editor, plenty of non-media applications.

I've started to notice that already. It's not always obvious why some modules need so many resources while others do a lot with little. I haven't got into looking at the code yet.

Beyond 100% i get a warmer analog sound :smiling_imp:

Yes. I've looked at many, and I can tell you it's almost always something really simple that is easily fixed, but clearly a lot of devs don't bother to measure their stuff or don't seem to care much about this. So it's usually pretty easy to fix these things and make a module reasonable. Then, on top of that, you can go crazy and really go for it to make things even faster. Most decent devs do the first thing: get rid of the obviously slow stuff and take the easy performance wins. It would be great if everyone took it to this reasonable level. That's usually what I do.
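To make that concrete, here's a minimal sketch of the kind of easy win I mean (a generic illustration, not code from any particular module): move expensive math out of the per-sample path so it only runs when a parameter actually changes.

```cpp
#include <cmath>

// Hypothetical one-pole lowpass, as it might sit inside a module's
// per-sample process() call.
struct OnePole {
    float lastCutoff = -1.f; // force a recompute on the first call
    float a = 0.f;           // cached coefficient
    float z = 0.f;           // filter state

    float process(float in, float cutoffHz, float sampleRate) {
        // The easy win: pay for the expensive std::exp() only when the
        // parameter changes, not on all ~44100 calls per second.
        if (cutoffHz != lastCutoff) {
            lastCutoff = cutoffHz;
            a = 1.f - std::exp(-2.f * float(M_PI) * cutoffHz / sampleRate);
        }
        z += a * (in - z);
        return z;
    }
};
```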

Every now and then I will really go for it, just to see if I can. My latest module, “Basic VCO”, is an example of that: I really worked to get it as fast as I could, and… it is really fast. Oh, also my recent “Organ Three” is quite fast (they both use the same VCO under the covers).

I wrote this paper maybe two years ago (?) but it still holds up pretty well. It’s mainly about getting to that step of not doing anything bad: https://github.com/squinkylabs/SquinkyVCV/blob/main/docs/efficient-plugins.md

Yeah, my modules certainly aren't the most efficient out there… definitely something I want/need to work on. But since you asked, here are some reasons why some of my modules use a lot of resources (I can't speak too much for other developers):

  1. Accurate physics simulation: particularly in the “virtual analog” world, you sometimes need to do some complex operations that you could probably avoid if you chose instead to design your effect entirely in the digital domain. Some examples of this for me are Chow Tape (contains a simulation of magnetic hysteresis) and ChowDer (contains 10-12 scattering junctions for simulating series/parallel connections between electrical components).

  2. Oversampling: I like to have internal oversampling in any of my nonlinear modules, since I don't like the sound of aliasing distortion. Essentially, with 2x oversampling a module will eat up ~2x the CPU (see the sketch after this list). Eventually, I'd like to have optional amounts of oversampling, so users can choose their desired balance between sound quality and CPU usage. This may seem redundant since users can run Rack at a higher sample rate, but even then, aliasing can still occur if a nonlinear module gets a signal at a high enough frequency from a preceding module. Oversampling can also help with stability in some cases.

  3. A lot of complex operations: for example, my RNN module does ~40 multiplies, plus three nonlinear functions, at every sample. Aside from using cheaper approximations for the nonlinear functions (like the one in the sketch below), I'm not sure how much I can do about this, but to a certain extent I'm just happy that I'm able to get a neural network working at all in this context! (I'm also using the Eigen library, which does internal SIMD parallelisation for some of the more expensive operations.)
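To make points 2 and 3 concrete, here's a toy sketch (an illustration only, not code from my modules) of a 2x-oversampled saturator using a cheap tanh approximation. A real module would replace the linear interpolation and averaging with proper anti-imaging and anti-aliasing filters:

```cpp
#include <algorithm>

// Cheap tanh approximation (a Pade 3,2 form); clamping the input to +/-3
// makes it saturate exactly at +/-1.
inline float fastTanh(float x) {
    x = std::max(-3.f, std::min(3.f, x));
    return x * (27.f + x * x) / (27.f + 9.f * x * x);
}

// Crude 2x-oversampled saturator. The nonlinearity is evaluated twice per
// input sample, which is where the ~2x CPU cost comes from.
struct Saturator2x {
    float prevIn = 0.f;

    float process(float in) {
        float mid = 0.5f * (prevIn + in); // naive upsampling (interpolation)
        float y0 = fastTanh(mid);
        float y1 = fastTanh(in);
        prevIn = in;
        return 0.5f * (y0 + y1);          // naive decimation (averaging)
    }
};
```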

Anyway, hope this explanation helps! I’m currently trying to work on a few new modules, but after that hopefully I’ll have time to do some serious performance optimizations.

Thanks, Jatin

On my system, Rack doesn't like approaching 60% as shown by my Linux task manager; that's when the audio becomes slow and messed up.

Those seem like reasonable reasons. When a module is advertised as doing some interesting and complex calculation, I'd expect it to be a bit resource-hungry.

Thanks, that's really useful. It all makes sense even though I've not looked at any DSP code yet, from other mathematical modelling I've done. The thread-locking issue I'd have to read up on, though.

I don’t worry about it at all until performance degrades in some way - then I either up the threads or record stuff down and stick it in a sampler/DAW if I’ve already maxed out the threads.

That won’t be too difficult! I just googled “audio thread non-blocking” and the first few hits looked very informative.
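The core pattern looks something like this (a minimal generic sketch, nothing VCV-specific): the UI thread publishes values through an atomic, so the audio thread never has to take a lock.

```cpp
#include <atomic>

// Minimal sketch of the non-blocking idea: the UI thread publishes a value
// and the audio thread reads it, with no mutex that could stall audio
// processing if the UI happened to be holding it.
struct SharedGain {
    std::atomic<float> gain{1.f};

    // Called from the UI thread, e.g. when a knob moves.
    void set(float g) { gain.store(g, std::memory_order_relaxed); }

    // Called once per sample on the audio thread; never blocks.
    float get() const { return gain.load(std::memory_order_relaxed); }
};
```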

You really have to figure out the cutoff point for your own system and get a feel for what's expensive; the numbers from F3 mode and from the system monitors aren't telling the whole story.

Alternate answer: refer to the documentation of my modules, one of the pages details the community consensus about this issue:

I think you win the Internet today.

I can accept that too many blank panels is bad (particularly the ones with Easter eggs), but what about scopes with pretty shapes? Omri's scope looks like confetti and psychedelic flowers. I need three of them.

Edit to add: the Nysthi scope can look like a My Bloody Valentine video from 1991. Sometimes (heh) I use three of those as well. I need more processor power so that I can actually make noises.

I know you are joking, but if you have a tolerable GPU there's no reason an eye-candy panel should bork your CPU. It should all be running on the UI thread, so it should only be fighting with other graphics for screen time; it shouldn't make pops and clicks, unless you have a really bad GPU that is sending the CPU into thermal limiting a lot.
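Roughly, that split looks like this in Rack's plugin API (a sketch from memory; check the Rack SDK headers for the exact signatures):

```cpp
#include <rack.hpp>

// Sketch of the audio/UI separation in VCV Rack's plugin API.
struct ScopeModule : rack::engine::Module {
    void process(const ProcessArgs& args) override {
        // Runs on an audio engine thread, once per sample frame.
    }
};

struct ScopeWidget : rack::app::ModuleWidget {
    void draw(const DrawArgs& args) override {
        // Runs on the UI thread; heavy eye candy here can drop video
        // frames, but it shouldn't glitch the audio.
        ModuleWidget::draw(args);
    }
};
```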
