Thoughts on using AVX extensions

Hi All and @Vortico,

Just a quick question: I see float_4 (SSE extensions) is the standard in VCV, and from reading previous topics I understand this is to target processor extensions that virtually everyone has.

I also understand that VCV probably won't support the wider extensions natively (in the SDK) any time soon.

However I was asking myself a question yesterday:

If I write a plugin that detects at run time whether the processor has AVX or AVX2, and uses those extensions for processing instead, would that create any problems? I haven't tried it yet, and I still need to verify feasibility, but I wanted to ask first in case there's some well-known "big no no", or in case someone has already tried this and saw no advantage. In that case I wouldn't go down that road.
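For what it's worth, one way to do that detection (a sketch assuming GCC or Clang on x86; `pickSimdPath` is my own name, not a Rack API) is the compiler's CPU-feature builtins, so a single binary can choose an AVX path on capable machines and fall back to SSE everywhere else:

```cpp
// Hypothetical sketch (not Rack API): run-time CPU feature detection with
// the GCC/Clang builtins. The query reads CPUID once at run time.
const char* pickSimdPath() {
    __builtin_cpu_init();  // populate CPU model data (required on GCC)
    if (__builtin_cpu_supports("avx2"))
        return "avx2";  // dispatch to hypothetical AVX2 kernels
    if (__builtin_cpu_supports("avx"))
        return "avx";   // dispatch to hypothetical AVX kernels
    return "sse";       // float_4 baseline, available everywhere
}
```

The plugin would call this once at startup and store a function pointer (or branch) per processing kernel.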

Thanks, Marcelo.

I think @Vortico has said that if you do your own detection you are free to use AVX.

Cool … so I guess I will do some experiments then.

Here’s a post from Andrew that may be relevant to the discussion:

If you want to write your own manual AVX code, you can use GCC function dispatching with

__attribute__((target ("default")))
void foo() {...}

__attribute__((target ("avx")))
void foo() {...}

You don’t need target_clones unless you want to automatically generate function targets from a single function. If you want to manually use float_8, you’d write multiple function (or method) targets by hand. Of course, float_8 doesn’t exist yet, so I’d first have to port all the functionality from float_4 to AVX.
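To make that pattern concrete, here's a complete hypothetical example (`scaleBlock` is an invented function, not from Rack). With GCC function multiversioning, the compiler builds both bodies and installs a resolver that picks the best one for the running CPU at load time:

```cpp
#include <cstddef>

// Baseline version: compiled for the default (SSE-level) target.
__attribute__((target("default")))
void scaleBlock(float* out, const float* in, float gain, std::size_t n) {
    for (std::size_t i = 0; i < n; ++i)
        out[i] = in[i] * gain;
}

// AVX version: same source, compiled with AVX enabled so the
// auto-vectorizer can use 256-bit registers.
__attribute__((target("avx")))
void scaleBlock(float* out, const float* in, float gain, std::size_t n) {
    for (std::size_t i = 0; i < n; ++i)
        out[i] = in[i] * gain;
}
```

Callers just call `scaleBlock(...)`; the dispatch is invisible at the call site. Note this multiversioning syntax is a GCC (and recent Clang) C++ feature on x86, not standard C++.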


I was just reading about this coincidentally.

For now I am just experimenting with it, to see if I can bring any additional gains to the Model series.

Do you have any plans to implement this in, say, V2?

Thanks, Marcelo.


If someone wants to port sse_mathfun.h to use a template type T (assumed to be a simd::vector type) instead of __m128, I could add float_8 a few weeks after Rack v2 is released. Someone said they’d do that, but I don’t remember what happened to that conversation.
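For anyone curious what that port would look like, here's a minimal sketch (not Rack's actual code, and `polyExp` is an invented helper): the idea is to write each math routine only in terms of `T`'s operators instead of hard-coding `__m128` intrinsics, so the same source instantiates for float_4 today and a future float_8:

```cpp
// Hypothetical sketch of the templated style: a 4th-order Taylor
// approximation of e^x written only with T's +, * operators and a
// T(float) constructor. T can be plain float (testable on its own),
// float_4 (SSE), or a future float_8 (AVX).
template <typename T>
T polyExp(T x) {
    T x2 = x * x;
    return T(1.f) + x + x2 * T(0.5f)
         + x2 * x * T(1.f / 6.f)
         + x2 * x2 * T(1.f / 24.f);
}
```

The real port is more work than this, since sse_mathfun-style code also uses bit manipulation and comparisons, but the operator-only discipline is the core of it.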

Damn, I wish I could help with that, but I am still trying to wrap my head around it. It took me a couple of months to understand simd/float_4, lol.