Crypto GPU?

I’m curious whether running something in the vein of a headless cryptocurrency-mining GPU (okay, maybe not used the way the miners go about it) might work for a (Windows or Linux) desktop install of Rack?

I ask because, with the crypto market gone to shit, these things should be cheap (they said under $70 USD)…

Rack isn’t particularly demanding, so it would probably work on any card you can find that runs, I think, GL 3.2? (Although the splash for that video says “no output??”… you would obviously need one with an output to your monitor…)

Most of the GPU use is because (unless you’re using @JimT’s experimental branch) Rack renders more frames per second than are actually needed most of the time.

… Unless you mean using the GPU to run the modules themselves. Which would be weird (and none of them support that.)

Umm… No? (re: most things you’ve said there)

As long as you’re able to run multiple GPUs simultaneously (I’ve done it on both Win and Lin), or figure out a pass-through to the mobo output like they did, you don’t -need- an output on the second graphics card (hence why I asked about running one headless).

Rack “not particularly demanding”? I don’t know what rig you’re running or what rock you’ve been living under (maybe you’ve not been on the official FB group?), but when a good number of users struggle with stuttering audio, it’s demanding enough.

@vortico has also said elsewhere on this forum that he only waits for VSync rather than targeting a specific FPS, so unless I misunderstood, your (@skrylar) claim is spurious.

On top of that, I could swear it’s been mentioned before (and could be backed up by Resource Monitor or Task Manager) that Rack invokes the GPU to help run what is essentially analog emulation.

In the end, I’m just curious whether Rack could benefit from offloading some of the calcs to a relatively budget-friendly powerful GPU, in a manner similar to what they did there…

So in the end you just chained it to another device that does have the output. Pedantry.

Literally compatible statements.

A gaming PC on Windows 10 with a dedicated GPU: no stutter. A Lenovo 2-in-1 running Linux, using only the Intel integrated graphics (the discrete card isn’t powered or configured): not a stutter in sight.

And how many of those users are on Macs, a platform whose vendor has declared itself openly hostile to OpenGL use?

When I throttled the framerate by manually patching the code, CPU use dropped (because on that machine rendering shares the CPU). I also ran everything under sysprof: ~14% of the system’s run time was spent in the OpenGL driver with the integrated Intel renderer. I heavily suspect a combination of a Metal renderer (if only to make Apple happy) and an FPS limit would make the complaints go away. (Which would require either maintaining two renderers, or building on top of BGFX to make use of their dual GL/Metal offering.)
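For the curious, the throttle itself is conceptually tiny. A minimal sketch of the idea in plain C++, with the real rendering stubbed out (this is not Rack’s code or the actual patch I made):

```cpp
// Minimal frame-rate cap sketch (not Rack's actual code).
// drawFrame() is a stand-in for whatever renders one UI frame.
#include <chrono>
#include <thread>
#include <cstdio>

static void drawFrame() {
    // Placeholder for the real rendering work.
    std::printf("frame\n");
}

int main() {
    using clock = std::chrono::steady_clock;
    const auto framePeriod = std::chrono::microseconds(1000000 / 30);  // cap at ~30 FPS

    for (int i = 0; i < 90; i++) {  // a few frames, just for demonstration
        auto start = clock::now();
        drawFrame();
        auto elapsed = clock::now() - start;
        // Sleep off the remainder of the frame budget instead of
        // immediately rendering another frame nobody asked for.
        if (elapsed < framePeriod)
            std::this_thread::sleep_for(framePeriod - elapsed);
    }
    return 0;
}
```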

But no, I don’t use Facebook.

Great find @Patman!
Might be worth looking into (only seems to work in this case if you have onboard graphics as well though)

@Skrylar, almost missed your last comment … hmm, you do know what you are talking about … :wink:

Limiting the FPS definitely has a pretty massive effect on MacBook Pros with dedicated GPUs, so it is not simply about integrated GPUs.

Andrew thinks that batching the OpenGL calls will fix the problem (rather than limiting the framerate, which he says is treating the symptom, not fixing the problem).
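To illustrate what “batching” means here, a sketch of the idea only (this is not Andrew’s plan or Rack/nanovg code; `Vertex` and `submitToGPU()` are made-up stand-ins): instead of paying driver overhead once per widget, the frame accumulates all of its geometry and hands it to the GPU in one call.

```cpp
// Sketch of draw-call batching (illustrative only; not Rack's renderer).
#include <vector>
#include <cstdio>

struct Vertex { float x, y; float r, g, b; };  // position + color

// Stand-in for the single expensive API call (e.g. one glDrawArrays).
static void submitToGPU(const std::vector<Vertex>& batch) {
    std::printf("one draw call with %zu vertices\n", batch.size());
}

int main() {
    std::vector<Vertex> batch;

    // Unbatched, each widget would cost its own driver call.
    // Batched, every widget just appends its triangles to one buffer...
    for (int widget = 0; widget < 200; widget++) {
        float x = widget * 10.f;
        batch.push_back({x,       0.f,  0.f, 0.f, 1.f});
        batch.push_back({x + 5.f, 10.f, 0.f, 0.f, 1.f});
        batch.push_back({x - 5.f, 10.f, 0.f, 0.f, 1.f});
    }

    // ...and the whole frame goes to the driver in a single call,
    // so the per-call overhead is paid once instead of 200 times.
    submitToGPU(batch);
    return 0;
}
```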

I don’t doubt that you’re right that Apple’s deprecation of OpenGL, and their failure to maintain their implementation since Metal arrived, is part of the story here, though, and something that will probably become more problematic in the future.

Strongly agree with the “treating the symptom rather than the problem” point. At worst, high graphics use should eat more CPU, but it shouldn’t affect sound production the way it does.

This is false, unless you’re talking about a plugin which uses CUDA/OpenCL. To my knowledge that has not been done yet.

Rack/nanovg doesn’t use complex shaders and extensions, so only a small subset of OpenGL is used. This subset of the OpenGL or Metal API can thus be thought of as a lightweight wrapper to simple GPU features, so performance is virtually equivalent as long as Apple continues to include libGL in MacOS. The performance difference occurs when you compare very modern extensions, like advanced texture mapping onto 3D objects and obscure GLSL / Metal Shading Language features.

In other words, Rack is telling the GPU to “draw a blue triangle”, while Red Dead Redemption 2 is telling the GPU to “apply this specular map, bump map, and color map to this vertex buffer using a custom UV map written in a shader language that was JIT compiled to GPU assembly and then write the depth buffer so fog can be applied.” The latter procedure can take millions of lines of driver code to pull off, so there’s lots of room for optimization, whereas there’s not much either API can do to remove the overhead for drawing a blue triangle, so OpenGL and Metal have virtually the same result for that case. So please do not be scared that OpenGL is used. The difference you have been told about is due to effective Apple marketing tactics, as well as benchmarks of completely unrelated applications to Rack.
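To make that concrete, here is roughly what the “blue triangle” end of the spectrum looks like through nanovg (a sketch only; window and GL context setup are omitted, and this is not code taken from Rack itself):

```cpp
// Roughly what "draw a blue triangle" means through nanovg.
// Context creation (GLFW window, nvgCreateGL2/GL3, etc.) is omitted here.
#include <nanovg.h>

void drawBlueTriangle(NVGcontext* vg) {
    nvgBeginPath(vg);
    nvgMoveTo(vg, 100.f, 40.f);
    nvgLineTo(vg, 160.f, 140.f);
    nvgLineTo(vg, 40.f, 140.f);
    nvgClosePath(vg);
    nvgFillColor(vg, nvgRGB(0, 0, 255));
    nvgFill(vg);
}

// Called once per frame, between nvgBeginFrame()/nvgEndFrame():
//   nvgBeginFrame(vg, winWidth, winHeight, pixelRatio);
//   drawBlueTriangle(vg);
//   nvgEndFrame(vg);
```

There is simply not much driver machinery involved in a path fill like this, which is why the OpenGL-vs-Metal question barely matters for Rack’s workload.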

Thanks for clarifying the OpenGL/Apple situation, Andrew. We should be OK for a good while yet, then!

I must have misunderstood.

And on the topic of whether Apple will remove libGL from MacOS, or drop it for particular graphics drivers: this move would instantly disable probably 10-50% of video games on Mac, so it would be a very aggressive move, even by Apple’s standards.

Let’s just hope they don’t appoint somebody from Steinberg to the graphics department then :wink:

In theory Metal could have lower per-draw-call overhead, because Apple has a tight grasp over both the chips running it and the API itself. Some sources in console gaming have told me in the past that the per-call overhead is significantly lower on, e.g., a PS3 than it is on a PC, so they tend not to think about it as hard, and backports to PC then suffer because there are suddenly more draw calls, each costing ten times as much as they did.

It’s also possible that it’s true but only applies to their mobile chips for iOS. I don’t have modern Macs to benchmark with, but I have to think there is some reason they would break compatibility in an era when their developers can’t afford pointless changes anymore.

I wonder if there would be any benefit to using something like hawktracer to instrument how much time nanoVG spends re-flattening curves that haven’t changed, and how big the difference in draw-call cost is between Intel and Nvidia.
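Even a scoped timer dropped around the suspect paths would already give a rough answer. A sketch of that kind of measurement using std::chrono rather than hawktracer; `flattenCurves()` is a hypothetical stand-in for the nanoVG tessellation work being measured:

```cpp
// Crude instrumentation sketch (std::chrono, not hawktracer).
// flattenCurves() stands in for the real nanoVG work we want to measure.
#include <chrono>
#include <cstdio>
#include <map>
#include <string>

static std::map<std::string, double> gTotals;  // accumulated milliseconds per label

struct ScopedTimer {
    std::string label;
    std::chrono::steady_clock::time_point start = std::chrono::steady_clock::now();
    explicit ScopedTimer(std::string l) : label(std::move(l)) {}
    ~ScopedTimer() {
        auto end = std::chrono::steady_clock::now();
        gTotals[label] += std::chrono::duration<double, std::milli>(end - start).count();
    }
};

static void flattenCurves() {
    // Pretend work; in practice the timer would wrap the real tessellation path.
    volatile double x = 0;
    for (int i = 0; i < 100000; i++) x += i * 0.5;
}

int main() {
    for (int frame = 0; frame < 60; frame++) {
        ScopedTimer t("flattenCurves");
        flattenCurves();
    }
    for (const auto& kv : gTotals)
        std::printf("%s: %.2f ms total\n", kv.first.c_str(), kv.second);
    return 0;
}
```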

If the rumour mill is to be believed, Apple will begin replacing Intel processors with ARM in 2020.