As it should be! Thx.
The menu "Engine → Threads" in Rack.
It's been rehashed in countless threads in this forum. Bottom line:
- More threads means a bigger CPU budget for patches, i.e. you can run larger patches.
- Threads have a CPU overhead in and of themselves, so adding threads is not free at all. Only add them when you really need them, one at a time, until it solves your problem.
5.5% / 9 = 0.6% per instance of Basal, given it is receiving 9 poly channels. Not too shabby
Yeah, doesn't seem excessive to me.
Yes, I know how to find and change the threads; I just don't know what a thread refers to, so it doesn't help me understand what more or fewer threads does.
Processes and Threads - Win32 apps | Microsoft Docs.
Pushing the Limits of Windows: Processes and Threads - Microsoft Tech Community
Thread (computing) - Wikipedia
More threads means a bigger CPU budget for patches, i.e. you can run larger patches.
Excellent, thanks. Edit: I had expected threads to be a VCV thing as opposed to a Windows thing; I haven't seen the issue discussed in relation to any other VST or DAW.
I don't remember other pro audio software where I can limit the number of threads/cores on Windows.
DAWs usually have an option to enable/disable multicore: if disabled they will use only one thread; if enabled, they typically use (number of cores - 1) (AFAIK).
NI Kontakt can limit the number of cores it uses.
Maybe it's too complicated for most users to configure, and/or the Windows scheduler does a good enough job.
Here's some reading on the Windows Multimedia Class Scheduler; I don't know if Rack v2 uses it.
When comparing to other audio apps, remember to compare with apps that process one sample at a time, rather than a whole buffer at a time like Ableton does.
Why?
I'm not entering Ableton and VCV in a contest.
But they often coexist, and it's very relevant to be able to limit the number of cores for each app, perhaps even pin their threads to specific cores; swapping threads in and out of cores is expensive (around a million CPU cycles, I read somewhere). There seems to be a tendency for DAWs to use worker threads as if they own the CPU resources for themselves.
I used to do support at Silicon Graphics. IRIX had something called "GRIO" (guaranteed-rate I/O) and CPU sets to partition the system. It was not easy to configure, and only 1 in 1000 customers used it.
(I am not a programmer)
Because that's what you were doing: comparing VCV to software that's quite a bit different.
I get that you are a smart guy. Don't need to convince me.
I appreciate you trying to help me educate myself (teach a man to fish and all that), but this is pretty much over my head, so I probably won't venture into that article. Thanks, though; I have enough on my plate learning the basics here.
So how was cvly able to run this patch and why would they submit one that would appear to be problematic if it draws so much CPU?
And here I am a bit later: I have a patch running, barely breaking 40% max load, and I decide I'd like to jam along with it in another program. Well, that didn't work out so well. I tried Kontakt 6 and some of the lighter VSTs, but it started crackling up pretty quickly. I tried using VCV in Reaper (and Ableton) while using another VST, and same issue, of course. I have what I thought was a pretty good computer: Intel Xeon E5-2670 with 48 GB RAM (64-bit x64), NVIDIA Quadro M4000 graphics card (8 GB), Windows 10 Pro, 1 TB SSD and dual 2 TB HDDs. What needs to be upgraded so that crackles and glitches aren't an issue, or am I looking at this wrong?
It's a bit of a mystery. A decent place to start is the manual; the common tips are all in one section: VCV Manual - Frequently Asked Questions
Single-thread rating for your CPU: 1707
PassMark - Intel Xeon E5-2670 v3 @ 2.30GHz - Price performance comparison
Fastest single-thread rating: 4222
PassMark - Intel Core i9-12900KF - Price performance comparison
Yeah… frustrating stuff…
Achieving this kind of modular realtime signal processing on a "generic" (non-optimized, non-dedicated) platform, i.e. a home computer, is quite a challenge; you reach the limits real soon. It was not even remotely possible not so long ago.
In logistics there is Eli Goldratt's Theory of Constraints (TOC).
It states that in any system/process you will find a bottleneck that controls the overall throughput/quality.
So… improving the performance of the bottleneck (i.e. eliminating the constraint) will increase the overall throughput/quality… until you run into the next limiting factor (the next constraint to eliminate).
Repeat this process of finding and eliminating the bottleneck/constraint until you are satisfied with the overall throughput/quality…
Actually, ASIO is an example of eliminating performance constraints by bypassing a chain of abstraction layers and instead connecting the "source" directly and exclusively to the "output".
But…
With software solutions, in the end we run into the limits of the hardware that is executing/supporting the software. The remaining option then is to change hardware (components).
In your specific case… the CPU seems to be the main constraint…
Alas, the CPU is generally a component you can't simply exchange (e.g. replace your Intel Xeon E5 with an Intel Core i9).
In laptops there are often other components you just can't exchange (due to space, power consumption, heat production and other constraints).
Since any problem is in the end about perception (what you think is the quality you want/need), you can simply change your perception to solve or eliminate some problems, i.e. redefine some of your "problems" as "no problem".
That "redefining" bit is mostly about setting priorities: you achieve your main goal (run more complex patches in VCV) at the cost of some other, less important goals/qualities.
E.g. optimize your Windows and machine config for running VCV, at the cost of other uses. Simple examples: run VCV at the highest priority (as exclusively as possible), switch off anything you don't need for VCV (virus scanners, networking and such), and lower your sample rate, bit depth, frame rate and such.
Lastly, some other factors you can control…
- Not all modules are created equally. Some use considerably more resources to achieve the same (or very similar) goals/tasks. Choose your modules wisely.
- Also… there are many ways to solve the same problem. Generally the simplest solution will be the most efficient and will require the least resources.
- Dedicated solutions often outperform "Swiss Army knives".
- Quality often comes at a cost. But approximations might be "good enough", and much more efficient. Better can be the enemy of good.
Final general tip:
Donât create Rube Goldberg machines (complex machines that perform a simple task).
The Page Turner | Rube Goldberg | Joseph's Machines
Lol, good Saturday morning to you kwurqx, and thanks for the smile. That was a fun video actually, with the pièce de résistance being the tune that pops up at the end. Wonderful!
Yes, all this is very true. The biggest variable I've found is modules that do the same thing as another one but use a ton more CPU. Back in the day I used to take popular modules and "fix" them so they used less CPU. My "Functional VCO-1" was Fundamental VCO-1 (0.6 version) made 4X more efficient. EV-3 was three Even VCOs in one module that uses less CPU than a single one. Mixer-8 started as the AS mixer made 10X more efficient.
It's not THAT hard to fix some of these problems. And I'm not talking about the Vult modules; AFAIK those take more CPU because they are doing a lot of stuff that other modules don't, and sound uniquely good because of it.
I wrote this paper years ago with some super basic tips for making VCV modules that don't waste CPU. Unfortunately (in my mind) no one "reviews" VCV modules, so a user really has to have a passion for efficiency to know how to measure modules themselves.
Same goes for the (VST/AU) plugin world (both free and paid).
On the plus side, open ecosystems (source code, standards/frameworks, community involvement, marketplaces and such) and low or no prices lower the threshold and enable a lot of creativity. On the downside: lots of overlap in functionality and a huge spectrum of quality differences.
It would be a momentous task to set up some sort of review/curation/intake process. Maybe providing some automated review/benchmark tools could be a partial solution. Since all modules ultimately rely on a common framework, this might be possible.
Anyway…
I guess in the end we will all benefit from the net result of a low-threshold/community approach. It just makes picking the right tool for a given job at any point in time a bit more complicated…