How polyphonic cables will work in Rack v1

Many Rack users have asked about the specifics of polyphony in Rack v1. Here’s my description of the exact behavior of poly cables and poly-supporting modules. Plugin developers should read this thread (or the Voltage Standards section of the manual after Rack v1 is released) before adding polyphony support to their modules.

Core MIDI-CV will have a “channels” mode in its context menu that ranges from “mono” to 16. If a number N between 2 and 16 is selected, the CV and gate outputs will always carry N channels regardless of how many notes are currently held on your keyboard. If none are held, you’ll have N gate channels at 0V. If 3 are held, 3 of the N channels will be at 10V. You can choose from a few polyphony modes to determine which 3 of the N channels are assigned the notes (Reset, Rotate, Reuse, Reassign, etc.).
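
For readers who think in code, here is a rough illustration of that gate behavior (not the actual Core MIDI-CV source; `heldChannels` is a hypothetical set of channel indices assigned to held notes by whichever polyphony mode is selected):

```cpp
#include <set>

// Write the gate output: always N channels, 10V on channels holding a note,
// 0V on the rest.
void writeGates(float *gateVoltages, int numChannels, const std::set<int> &heldChannels) {
    for (int c = 0; c < numChannels; c++)
        gateVoltages[c] = heldChannels.count(c) ? 10.f : 0.f;
}
```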

Fundamental VCO will accept poly cables in its 1V/oct input. When you patch a cable from MIDI-CV’s CV output into this input, the VCO will run N oscillator engines (which will likely consume significantly less CPU than N copies of Fundamental VCO), and its SIN, TRI, SAW, and SQR outputs will turn into N-channel outputs. Plugging any cable into a polyphonic output will turn it into an N-channel cable.
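
A minimal sketch of that pattern, assuming a per-port API with getChannels()/setChannels()/getVoltage()/setVoltage() methods (these names match Rack v1’s eventual C++ API, but treat this as a sketch, not Fundamental VCO’s actual code; `MyVCO`, its port ids, and the naive sine math are illustrative):

```cpp
#include <rack.hpp>
#include <cmath>
#include <algorithm>
using namespace rack;

struct MyVCO : Module {
    enum InputIds { PITCH_INPUT, NUM_INPUTS };
    enum OutputIds { SIN_OUTPUT, NUM_OUTPUTS };
    float phases[16] = {};  // one oscillator engine per possible channel

    MyVCO() { config(0, NUM_INPUTS, NUM_OUTPUTS, 0); }

    void process(const ProcessArgs &args) override {
        // Run as many engines as the 1V/oct cable carries (at least 1).
        int channels = std::max(inputs[PITCH_INPUT].getChannels(), 1);
        outputs[SIN_OUTPUT].setChannels(channels);
        for (int c = 0; c < channels; c++) {
            float pitch = inputs[PITCH_INPUT].getVoltage(c);   // 1V/oct
            float freq = dsp::FREQ_C4 * std::pow(2.f, pitch);  // Hz
            phases[c] += freq * args.sampleTime;
            phases[c] -= std::floor(phases[c]);
            outputs[SIN_OUTPUT].setVoltage(5.f * std::sin(2.f * M_PI * phases[c]), c);
        }
    }
};
```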

Fundamental VCF will accept polyphonic cables too, in its audio input and FREQ input (and RES and DRIVE). If an N-channel cable is patched into the audio input, the VCF will run N filter engines. If an N-channel cable is then patched into the FREQ input, each filter engine will adjust its cutoff frequency based on the associated channel of the FREQ input. This allows the VCF to be controlled by a polyphonic ADSR envelope generator driven by the polyphonic gates from Core MIDI-CV, to produce the famous behavior of analog polysynth keyboards. If, however, a monophonic cable is patched into the FREQ input, the same cutoff will be used across all N filter engines. Finally, if an M-channel cable with fewer channels than the audio input (i.e. M < N) is patched into FREQ, the first M filter engines will use their associated channel, and all higher-numbered engines will use channel 1 of the FREQ input. This is how all modules with multiple poly inputs should behave when a mix of cables with different channel counts is used.
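
A sketch of that channel-matching rule, continuing the same hypothetical Rack-style port API as the VCO sketch above (`getChannelOrFirst` is an illustrative helper, not an announced API):

```cpp
// An engine asks a modulation input (e.g. FREQ) for "its" channel and falls
// back to the input's first channel when the cable carries fewer channels
// than there are engines (M < N).
float getChannelOrFirst(Input &input, int engineIndex) {
    int m = input.getChannels();
    if (engineIndex < m)
        return input.getVoltage(engineIndex);
    return input.getVoltage(0);  // higher-numbered engines reuse channel 1
}
```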

Polyphonic cables will be rendered differently from normal (monophonic) cables. Currently we’ve decided to draw them thicker, perhaps with a number on them to display the number of channels they carry. The plug LED will be blue instead of green/red, representing the total electrical power carried by the cable. (The exact formula will be the root-sum-square \sqrt{\sum_n V_n^2}, if you’re interested.) A word of enlightenment: it is actually true that all Rack v1 cables will be polyphonic. What I’m calling monophonic/normal cables are actually polyphonic cables with 1 channel. They are implemented as one and the same.
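
As a tiny illustration of that LED formula (the function name is mine, not Rack’s):

```cpp
#include <cmath>

// Plug LED brightness for a poly cable: the root of the summed squared
// channel voltages, sqrt(sum_n V_n^2).
float cableLedBrightness(const float *voltages, int channels) {
    float sum = 0.f;
    for (int c = 0; c < channels; c++)
        sum += voltages[c] * voltages[c];
    return std::sqrt(sum);
}
```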

FAQ

What if you plug a poly cable, coming from a poly output, into a non-poly (mono) input? The mono input should simply use the first channel of the cable.

Can poly cables carry a collection of signals that are unrelated to polyphonic voices? Yes. Even though they are called “polyphonic cables”, they can be used for surround-sound signals, for multiplexing modules similar to the Doepfer A-180-9, or for allowing communication between modules and their expanders (a famous hardware example is Make Noise Brains and Pressure Points).

Can you have a 0-channel cable? Yes, there are a few exceptional cases where this is desired, e.g. when the number of channels is dynamic. However, the first channel must still be set to some sane default, such as 0V, to ensure compatibility with mono inputs that the output might be connected to.

I wonder if it would be possible to automatically make every module polyphonic. For example, by connecting a polyphonic cable to a filter, Rack could instantiate multiple instances of the module (hidden or with no GUI) that share a single panel.

It’s possible, but I don’t want to do that. Performance will of course be 1-4x worse, and it will decrease plugin developers’ motivation to “polyphonize” their modules. So at the end of the day we’re left with nothing but a huge collection of really slow polyphonic modules.

I did mention this in the other thread earlier but it seems more apt to reiterate it here.

U-He have managed to provide both sonic quality and (relatively) CPU-friendly performance with their virtual analog synths by providing an option to take advantage of multiple cores, spreading the polyphonic voices across them.

Obviously, as a long-term Max, Reaktor and Softube Modular user, I understand the difficulties of using a similar approach in an open modular environment. However, it is worth considering the manner in which Urs Heckmann has architected Diva, Bazille and Repro 5. The reason he’s able to provide multicore performance is that he treats each individual note of polyphony as a separate signal path. And all of this is achieved without compromise to the sonic characteristics he looks to emulate.

Another VA that has always amazed me is OP-X Pro (the Oberheim A/B emulation), as this was originally created with SynthEdit and features individual signal paths for each oscillator, with individual filters and EGs for each voice. Whilst this doesn’t match the sonic palette of U-He synths, the OP-X developers manage to capture much of the character of Obie synths with their individual voice panning (which is far more useful than most chorus effects). And all of this is achieved with minimal processor overhead (when compared to U-He polys).

I’d imagine that in VCV, polyphony will be delivered both by individual EG/filter/VCA paths and, probably more commonly, by batching poly voices together for a single EG/filter/VCA to process (post-mixer), as this will be more performant.

As an artist I would (quite selfishly!) prefer a limited number of VCV modules that make the most of multi-core processing capabilities rather than a “one size fits all” solution that most developers can adopt with greater ease with regard to polyphonic capabilities.

Building a polyphonic analog is a challenge even in the realms of hardware (just ask Olivier of Mutable about his Shruti efforts before he made the move over to Eurorack modules). The reason I own four Shrutis is that I occasionally daisy-chain them for pseudo-polyphonic duties - another case of different rather than better or worse!

Working on the assumption that it’s architecturally possible to utilise multiple cores for each individual note of polyphony in an open modular system, I believe that third-party developers will end up creating better polyphony solutions than if they’re provided with a simpler, more easily adopted solution with compromised sonic qualities.

If it was possible to create a VA with SynthEdit that utilised an individual signal path for each note of polyphony, surely it must be possible with a modern modular system.

If I’m barking up the wrong tree here please ignore my over-excited pleas…

Hi Andrew! Thanks for this explanation, a few questions if you have time to answer. Thanks in advance :slight_smile:

  • How will we know that an input or an output can handle polyphony?
  • will there be a module to combine voices into a polyphonic signal, so sources other than MIDI (e.g. a sequencer module) can be used, like 16-in/1-out?
  • will there be a module to explode a polyphonic signal, 1-in/16-out, to allow per-voice stereo panning à la OB-X?
  • will there be any way to tune each voice separately within the same module, to mimic the analog instability of components (fine-tune VCO, slight cutoff differences)?
  • If I use a monophonic cable into a polyphonic cable, please tell me we won’t have all the virtual polyphonic engines running for nothing in the background…?
  • does it mean we can one day hope to chain those Mutable Stages sequencers?

Sorry, that’s a lot of questions :slight_smile:

It’s more efficient to process multiple voices per core rather than multiple modules per core, due to SIMD and cache locality, end of story.
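
To make that concrete, here is a hedged sketch of the “multiple voices per core” idea, assuming a 4-wide float vector type like `rack::simd::float_4` from the Rack codebase; the one-pole smoother is a stand-in for real per-voice filter math:

```cpp
#include <simd/vector.hpp>
using rack::simd::float_4;

// Four voices share one instruction stream: their states sit side by side in
// one vector register (cache locality) and advance in one fused step (SIMD).
void processFourVoices(float_4 &state, float_4 in, float_4 coeff) {
    state = state + coeff * (in - state);
}
```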

U-he is able to do it the opposite way because they’re processing in blocks.

Good questions.

How will we know that an input or an output can handle polyphony?

You should assume a module doesn’t unless the module has a “Poly” tag in the Module Browser. If it does, most inputs/outputs will be poly within reason. Obviously, the RUN trigger input of a sequencer will be mono, because it would be ridiculous otherwise, and so on.

will there be a module to combine voices into a polyphonic signal, so sources other than MIDI (e.g. a sequencer module) can be used, like 16-in/1-out?

Yes, this will probably be called VCV Poly Muxer, as proposed in “VCV Poly: ideas for a new standard polyphonic utilities plugin”.

will there be a module to explode a polyphonic signal, 1-in/16-out, to allow per-voice stereo panning à la OB-X?

VCV Poly Demuxer

will there be any way to tune each voice separately within the same module, to mimic the analog instability of components (fine-tune VCO, slight cutoff differences)?

Fundamental VCO will do that in its default ANLG mode. I see no need for Fundamental VCF to do that with cutoff frequency since that effect is 100x more subtle than VCO pitch drift. Other than this built-in analog-modeling pitch drift, the only way you can create pitch differences between channels in VCOs is to offset the polyphonic 1V/oct signal yourself.

If I use a monophonic cable into a polyphonic cable, please tell me we won’t have all the virtual polyphonic engines running for nothing in the background…?

Do you mean “monophonic cable into a polyphonic input”? There is no such thing as a poly input. The number of channels of an input is always equal to the number of channels of the output on the other end of the cable.

does it mean we can one day hope to chain those Mutable Stages sequencers?

I don’t see how poly cables have an effect on that feature.

Thank you for your super quick answer. Yes, sorry, I meant:

If I use a monophonic cable into a polyphonic module, please tell me we won’t have all the virtual polyphonic engines running in the background…?

and for the Mutable Stages link, I was thinking of this feature:

So I have no idea how it works, but I guess we would need multiple pieces of information to go back and forth between the two Stages instances. Maybe a poly cable could handle it?

Ok, that kind of makes sense, but what does that mean with regard to multicore performance in general with VCV? Will it, like Reaktor and Softube Modular, always pretty much be a single-core beast? (I’m aware that there are individual third-party modules with multicore capabilities, but this isn’t much use unless it’s part of a holistic solution.)

That’s not a loaded question, as the plugin version of VCV will enable multiple instances of VCV to be loaded. And in all honesty, I’m not personally wedded to polyphonic voicing when using modular techniques per se, as I tend to use modular components to create chord voicings out of separate timbres, much like you’d do with a string and horn arrangement. And that’s completely doable with monophonic cables. I ask purely because I’m always interested in getting the most performance out of my hardware.

Ableton is very multicore-friendly if you build your channels and sub-groups strategically, hence my using a 40-core system with reasonable but compromised single-core performance. I had hoped to use my Mac more for VCV as it has better single-core performance, but the GPU drivers (as ever with Apple) create more problems than the CPU performance solves. I use Vienna Ensemble Pro to stream MIDI/audio over the studio network between the Mac and PC workstations, and utilizing the Mac for VCV would have been ideal. I’m thinking of getting an i9 box strictly for plugins that thrive under single-core scenarios (coupled with a reasonable GPU, probably a mid-range nVidia). I’m just attempting to work out whether VCV is something I’ll be best utilizing via that i9, or whether there are any advantages to keeping it on my main workstation. Much as I can put everything everywhere in the virtual plugin domain, I tend to build workstations based on the programs best suited to each platform, e.g. my Mac is mainly used for sequencing hardware and plugins via Numerology, which is an OS X-only solution.

Apologies for waffling on; it’s just that my interest in multicore isn’t driven by the typical cries of “why are you only using 5% of my hardware’s capabilities!”. I have a modicum of understanding of the engineering challenge (else I wouldn’t have put up with the woeful performance of Max over the years!) and I was genuinely interested to know if you’d found a way of answering that engineering challenge.

I must admit I disagree with this assertion. The reason that Oberheim synths have their very distinctive sonic signature has less to do with tuning drift and more to do with each filter path’s different response to the EGs (and other modulation sources) driving them. Plus, as I mentioned previously, the separate placement of each of these signal paths within the stereo field results in a rich voice interaction that delivers a wonderfully natural chorus to the voicing. It’s more than this alone, in that state-variable filter designs like those found in the SEM (most Oberheim filters derive from that same design) benefit greatly from individual tuning of each filter.

I’m not much of a fan of Arturia emulations, but the Matrix and SEM-based emulations are head and shoulders above the rest because of that ability to tweak the response of the individual filters affecting each polyphonic voice.

Okay, I understand. A poly module is one that supports a variable number of channels, not a fixed number like 16. So if you use monophonic cables, that number will be 1, so a poly module will use 1 engine.

At least that’s how it should be done. A plugin developer could write a module where all 16 engines run all the time, but they would likely be too disappointed by their module’s performance to release it this way. It’s usually trivial to dynamically process a variable number of engines, so they would likely change it and then release.

It’s not a question of what kind of cables are needed but whether it’s a good idea to put two new ports on the panel. And no, it’s not a good idea to abuse Mutable panels like that.

No, a multithreaded engine will be written for Rack v1.

That’s because it processes in blocks/buffers (usually of size 64-1024). If Rack used blocks, you couldn’t create feedback patches, which appear in probably 90% of people’s patches.

So you’re saying I should add frequency drift to Fundamental VCF because it will make a significant number of people rate Fundamental’s sound some percent higher? What sort of cutoff frequency variation (in % or Hz) are you claiming is “very distinctive”?

Please don’t add pitch drift, or if you do, make it an option! Pitch drift could easily create issues down the line as it gets amplified and garbled by other things in the chain.

Can you elaborate on “issues down the line” in detail?

You could not be more right about that! Maybe Rack 3.0 will let us access the back panel of the modules and do some virtual soldering work :smiley:

Apologies, I’m at the airport, so I can only answer this briefly by directing you to a great Sound on Sound piece from their Synth Secrets series, which goes into detail about the engineering behind the original Oberheim 4-Voice. This was Tom Oberheim’s original design, effectively four SEMs packaged together as a single unit. It was only when digital control components became feasible that this original design became more stable, but much of what Oberheims are known for with regard to sonic signature is based on this unique architecture.

https://www.soundonsound.com/techniques/polyphony-digital-synths

I can provide other reference material tomorrow (I’m based in the UK, so please excuse any lags in communication).

I’m not suggesting that the Fundamental VCF follow this architecture; the part of your statement I disagreed with was that individual VCF tuning is 100x more subtle than oscillator drift. But I do hope that the polyphonic architecture within VCV allows those third-party developers who wish to explore the polyphony approach of Oberheims and Prophets to do so in an optimal manner.

Not everything in the world of hardware analogue synthesis is pseudoscience that only exists to inflate the prices of vintage gear. :wink:

I mentioned Ableton not as a comparison to VCV, but to explain why I use a 40-core Xeon for my main workstation rather than something with faster single-core performance.

My reasoning is that if you have a variance/std dev/range/whatever of a small amount of frequency, say ±1 Hz, on an oscillator, some percentage of people will feel a certain difference when you play some chords. My intuition is that the same percentage of people would feel the same difference in sound only with perhaps as much as a ±100 Hz range of filter cutoff frequencies. Idk, worth some A/B tests.

Yes, a developer could make a module which applies a CV-controllable random drift to all channels of a polyphonic cable. That’s the power of polyphony in Rack. The possibilities are as endless as the possibilities of monophonic modules.
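
For instance, a hypothetical “poly drift” module along those lines might look like the sketch below (assuming the Rack-style per-port API used in the sketches above; the module name, ports, and random-walk filter are illustrative, not an existing or planned module):

```cpp
#include <rack.hpp>
#include <algorithm>
using namespace rack;

struct PolyDrift : Module {
    enum ParamIds { DEPTH_PARAM, NUM_PARAMS };
    enum InputIds { POLY_INPUT, DEPTH_CV_INPUT, NUM_INPUTS };
    enum OutputIds { POLY_OUTPUT, NUM_OUTPUTS };
    float drift[16] = {};  // per-channel wandering offset, in volts

    PolyDrift() {
        config(NUM_PARAMS, NUM_INPUTS, NUM_OUTPUTS, 0);
        configParam(DEPTH_PARAM, 0.f, 1.f, 0.1f, "Drift depth");
    }

    void process(const ProcessArgs &args) override {
        int channels = std::max(inputs[POLY_INPUT].getChannels(), 1);
        outputs[POLY_OUTPUT].setChannels(channels);

        float depth = params[DEPTH_PARAM].getValue();
        if (inputs[DEPTH_CV_INPUT].isConnected())
            depth *= math::clamp(inputs[DEPTH_CV_INPUT].getVoltage() / 10.f, 0.f, 1.f);

        for (int c = 0; c < channels; c++) {
            // Random-walk each channel's offset, gently pulled back toward 0 V.
            drift[c] += (random::uniform() - 0.5f) * 1e-4f;
            drift[c] *= 1.f - 1e-4f;
            outputs[POLY_OUTPUT].setVoltage(inputs[POLY_INPUT].getVoltage(c) + depth * drift[c], c);
        }
    }
};
```

Patched between MIDI-CV’s CV output and a poly VCO’s 1V/oct input, each voice would then wander slightly and independently, in the spirit of the per-channel offsetting described earlier in the thread.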

Ref the SOS article I linked to: it’s worth reading the whole thing (and it’s relevant to the points we’re discussing), but the TL;DR version is the grey-boxed section at the foot of the page titled “Random Voice Assignment”.

From my perspective as an artist/user, that’s what makes VCV such a compelling proposition. I think it’s right that the mix of Fundamental and third-party modules should strike a balance between processing cost and “artistically focused design choices” that deliver a richer variety of sonic signatures. This is something that’s successfully delivered with current monophonic signal paths, so it’s great to know that it will be a continued focus with polyphonic signal paths. The only caveat is that the VCV core should, wherever possible, provide an architecture that helps facilitate rich polyphonic signal paths with maximum efficiency. It’s all too easy to go down the SEM 4-Voice route of effectively building a monophonic signal path per voice, but you’ll soon screw the pooch in terms of processing requirements (and that’s even when utilizing the fastest single-core performance currently available).