2 patches lately that sound awful using ASIO

As it should be! Thx.

The menu “Engine → Threads” in Rack.

It’s been re-hashed in countless threads (:slight_smile: ) in this forum. Bottom line:

  • More threads means a bigger CPU budget for patches, i.e. you can run larger patches.
  • Threads have a CPU overhead in and of themselves, so adding threads is not free at all. Only add them when you really need them, one at a time, until it solves your problem.

5.5% / 9 ≈ 0.6% per instance of Basal, given it is receiving 9 poly channels. Not too shabby.

1 Like

Yeah, doesn’t seem excessive to me.

Yes, I know how to find and change the thread count. I just don’t know what a thread actually is, so explaining that more or fewer threads changes the CPU budget doesn’t help me understand what they do.

Processes and Threads - Win32 apps | Microsoft Docs.

Pushing the Limits of Windows: Processes and Threads - Microsoft Tech Community

Thread (computing) - Wikipedia

1 Like

More threads means a bigger CPU budget for patches, i.e. you can run larger patches.

Excellent, thanks. Edit: I had expected threads to be a VCV thing as opposed to a Windows thing - I haven’t seen the issue discussed in relation to any other VST or DAW.

I don’t remember other pro audio software where I can limit the number of threads/cores on Windows.

DAWs usually have an option to enable/disable multicore processing - if disabled, they will use only one thread; if enabled, they typically use (number of cores - 1) threads (AFAIK).

NI Kontakt can limit the number of cores it uses.

Maybe it’s too complicated for most users to configure, and/or the Windows scheduler does a good enough job.

Here’s some reading on the Windows Multimedia Class Scheduler - I don’t know if Rack v2 uses it.

When comparing to other audio apps, remember to compare with other apps that process one sample at a time, rather than a full buffer at a time, as something like Ableton does.

Why?

I’m not entering Ableton and VCV in a contest.

But they often coexist - and it’s very relevant to be able to limit the number of cores for each app, perhaps even lock their threads to specific cores. Swapping threads in and out of cores is expensive - on the order of a million CPU cycles per context switch, I read somewhere. There seems to be a tendency for a DAW to use worker threads as if it owned the CPU resources for itself.

I used to do support at Silicon Graphics - IRIX had something called GRIO (guaranteed-rate I/O) - and CPU sets to partition the system. It was not easy to configure, and only 1 in 1000 customers used it.

(I am not a programmer)

1 Like

Because that’s what you were doing: comparing VCV to some software that’s quite a bit different.

I get that you are a smart guy. Don’t need to convince me.

I appreciate you trying to help me educate myself - teach a man to fish and all that - but this is pretty much over my head :wink: sooo I probably won’t venture into that article but thanks. Have enough on my plate learning the basics here.

So how was cvly able to run this patch, and why would they submit one that appears problematic if it draws so much CPU?

And here I am a bit later and I have a patch running - barely breaking 40% max load - and I decide I’d like to jam along with it in another program. Well, that didn’t work out so well: I tried Kontakt 6 and some of the lighter VSTs but it started crackling up pretty quickly. Tried using VCV in Reaper (and Ableton) alongside another VST and hit the same issue, of course. I have what I thought was a pretty good computer: Intel Xeon E5-2670, 48 GB RAM, NVIDIA Quadro M4000 graphics card (8 GB), Windows 10 Pro 64-bit, 1 TB SSD and dual 2 TB HDDs. What needs to be upgraded so that crackles and glitches aren’t an issue, or am I looking at this wrong?

It’s a bit of a mystery. A decent place to start is the manual - the common tips are all in one section: VCV Manual - Frequently Asked Questions

single thread rating for your cpu: 1707
PassMark - Intel Xeon E5-2670 v3 @ 2.30GHz - Price performance comparison

fastest single thread rating : 4222
PassMark - Intel Core i9-12900KF - Price performance comparison

Yeah…frustrating stuff…

Achieving this kind of modular realtime signal processing on a ‘generic’ (non-optimized/dedicated) platform (a home computer) is quite a challenge. You reach the limits real soon. It was not even remotely possible not so long ago.

In Logistics there is Eli Goldratt’s Theory of Constraints (TOC).

It generally states that in any system/process you will find a bottleneck that controls the overall throughput/quality.

So…improving the performance of the bottleneck (= eliminating the constraint) will increase the overall throughput/quality…until you run into the next limiting factor (= next constraint to eliminate).

Repeat this process of finding and eliminating the bottleneck/constraint until you are satisfied with the overall throughput/quality…

Actually, ASIO is an example of eliminating performance constraints by bypassing a chain of abstraction layers and instead connecting the ‘source’ ‘directly’ and ‘exclusively’ to the ‘output’.

But…

With software solutions, in the end we run into the limits of the hardware that is executing/supporting the software. The remaining option then is: change hardware (components).

In your specific case…the CPU seems to be the main constraint…

Alas, the CPU is often a component you can’t simply exchange (e.g. replace your Intel Xeon E5 with an Intel Core i9).

In laptops there are often other components you just can’t exchange either (due to space, power consumption, heat production and other constraints).

Any problem is, in the end, about perception (what you think is the quality you want/need). So you can solve or eliminate some problems simply by changing your perception - redefining some of your ‘problems’ as ‘no problem’.

That ‘redefining’ bit is mostly about setting priorities. So you achieve your main goal - running more complex patches in VCV - at the cost of some other, less important goals/qualities.

E.g. optimize your Windows and machine config for running VCV, at the cost of other uses. Simple examples are to run VCV at the highest priority (as exclusively as possible), switch off anything you don’t need for VCV (virus scanners, network and such), and lower your bit rate, bit depth, frame rate and such.

Lastly, some other factors you can control…

  • Not all modules are created equally. Some use considerably more resources to achieve the same (or very similar) goals/tasks. Choose your modules wisely.
  • Also…there are many ways to solve the same problem. Generally the simplest solution will be the most efficient and will require the least resources.
  • Dedicated solutions often outperform ‘swiss army knives’.
  • Quality often comes at a cost. But often, approximations might be ‘good enough’, but much more efficient. Better can be the enemy of good.

Final general tip:

Don’t create Rube Goldberg machines (complex machines that perform a simple task).

The Page Turner | Rube Goldberg | Joseph’s Machines

6 Likes

Lol. Good Saturday morning to you kwurqx and thanks for the smile. That was a fun video actually, with the pièce de résistance being the tune that pops up at the end. Wonderful!

Yes, all this is very true. The biggest variable I’ve found is modules that do the same thing as another one but use a ton more CPU. Back in the day I used to take popular modules and “fix” them so they used less CPU. My “Functional VCO-1” was Fundamental VCO-1 (0.6 version) made 4X more efficient. EV-3 was three Even VCOs in one module that together use less CPU than a single one. Mixer-8 started as the AS mixer made 10X more efficient.

It’s not THAT hard to fix some of these problems. And I’m not talking about the Vult modules - afaik those take more CPU because they are doing a lot of stuff that other modules don’t, and sound uniquely good because of it.

I wrote this paper years ago with some super basic tips for making VCV modules that don’t waste CPU. Unfortunately (in my mind) no-one “reviews” VCV modules, so a user has to really have a passion for efficiency to know how to measure modules themselves.

2 Likes

Same goes for the (VST/AU) plugin world (both free and paid).

On the plus side, open ecosystems (source code, standards/frameworks, community involvement, marketplace and such) and low or no prices result in low thresholds and enable a lot of creativity. On the downside, there is lots of overlap in functionality and a huge spectrum of quality differences.

It would be a momentous task to set up some sort of review/redaction/intake process. Maybe providing some automated review/benchmark tools could offer a partial solution. Since all modules ultimately rely on a common framework, this might be possible.

Anyway…

I guess in the end we will all benefit from the net result of a low-threshold/community approach. It just makes picking the right tool for a given job at any point in time a bit more complicated…

1 Like