haha - automatic delay compensation. Most DAWs have it, and it’s super difficult to get right. Have fun working on that as a free module!
yeah, when you put it like that…
The difference I guess being that it attempts to compensate only for cable delays propagating through the module graph (not for, say, FFT performed by the module), but, yeah, a non-trivial problem to say the least!
EDIT: thought about this a bit (uh oh):
- Consider a scene in Rack as a digraph with modules as nodes, cables as edges, and inter-module delay as a constant, positive edge weight (1 sample per cable). (Let’s assume we’re not using Little Utils Teleport or equivalent.)
- This structure is highly amenable to well-understood graph algorithms. For example, the shortest weighted path between any two modules, if a path exists, can easily and efficiently be found with Dijkstra or a number of other simple graph algorithms, even when there are cycles (since the weights are always positive) and it won’t need recomputation unless there’s a cable operation (add, remove, repath).
One minimal, but useful, function, which I’d suggest as a proof-of-concept, is this:
- Allow the user to define a source module (A) and a final module (B).
- For each port on B, identify if there is a path from A to B that ends at the given port and determine its shortest weighted length. (In the graph formalism, we basically loop through the ports, temporarily delete all cables other than the one for the port, and then run Dijkstra or alternative).
- If there are two or more such ports, set a port delay for each connected port on B so that the sum of its path length and its port delay equals the longest path length. So if there’s a direct connection from A to port B1, a two-hop connection from A to B2, and a five-hop connection from A to B3, set the additional delays to 4 on B1, 3 on B2, and 0 on B3.
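The steps above can be sketched in a few lines. Since every cable costs exactly one sample, BFS gives the same shortest paths as Dijkstra here; the adjacency-list representation and the node names are made up for the sketch, not anything from the Rack API:

```python
from collections import deque

def shortest_hops(adj, src, dst):
    """BFS shortest path length from src to dst in hops.
    With unit edge weights this is equivalent to Dijkstra.
    Returns None if no path exists."""
    seen = {src: 0}
    queue = deque([src])
    while queue:
        node = queue.popleft()
        if node == dst:
            return seen[node]
        for nxt in adj.get(node, ()):
            if nxt not in seen:
                seen[nxt] = seen[node] + 1
                queue.append(nxt)
    return None

def port_delays(port_paths):
    """Given {port: hop count from A} for the reachable ports on B,
    return the extra per-port delay so every path totals the same
    number of samples as the longest one."""
    longest = max(port_paths.values())
    return {port: longest - hops for port, hops in port_paths.items()}
```

For the example in the text, `port_delays({"B1": 1, "B2": 2, "B3": 5})` yields `{"B1": 4, "B2": 3, "B3": 0}`: every path from A now arrives after 5 samples.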
To what end? Well, if A is an oscillator and B is a mixer, this would allow on-demand, automatic syncing up of parallel audio paths from a common source without comb filtering in cases where the parallel paths go through different numbers of modules. (Or imagine that A is a mid-side splitter and B is a mid-side combiner; I guess that’s just a specialized form of mixing, but now you don’t have to count modules in order to ensure that your different mid/side processing chains don’t get out of sync). Note that we’re not trying to eliminate or even minimize sync differences along the paths from A to B; we only care about it at the end.
The port delay adjustment could be re-run manually, or possibly re-run automatically whenever a cable changes (since by definition the outcome can’t change unless a cable changes).
I think that would be a useful advanced feature. The adjustment algorithm could be added via module, but despite Rack’s flexibility I’m pretty sure that a module couldn’t add the per-port delays (maybe there’s some terrifying hack, but a custom build would seem to be the way to go here).
I don’t know how many more use cases there are here, or whether more advanced approaches would need to be used to deal with them. For me, the (probably impossible) hope has always been to reduce cable delay to zero EXCEPT when a single-sample delay is necessary to avoid feedback. But why? No one cares about start-to-end latency of a few samples; it’s [probably] only an issue when signals get out of sync and are then re-combined, and what I really like about @DaveVenom’s suggestion of a variable port delay is that you can compensate for that at the point of recombination without it changing the visible structure of your patch (and, if the tool-assisted graph theory trickery pans out, maybe even without manually counting and updating).
so yeah, that’s exactly what automatic delay compensation in a DAW does. It’s a good feature. The work is as you describe. Yes, it’s “basic” graph theory. Go for it! (btw, I think you need to break loops somewhat arbitrarily?)
Anyway, backing up, I think delay built into the port would be a cool feature. Thanks @DaveVenom .
One critical aspect that breaks graph/node processing latency compensation is feedback loops. Do you have any ideas on how that could be made to work?
For example: module A → B → C → A, which is not uncommon in modular setups. How do you decide which path takes priority? In feedback loops there is no possible way to properly compensate for the latency introduced by the loop, unless the API itself has some smart way to run some modules twice or something like that. I have not seen a single graph/node based API that has such functionality, if you know of any please do tell.
Some graph processing code even completely forbids feedback loops, as is the case with the JUCE AudioProcessorGraph class.
perhaps you didn’t notice I pointed out the loop issue in the previous comment.
ah yeah, you did not mention it by name so I missed it.
what “name” are you referring to?
feedback loop. that is the term I see always used for such things.
related discussion: AudioProcessorGraph with Feedback loops - General JUCE discussion - JUCE
ah, ok. Of course in graph theory they are not called that: Loop (graph theory) - Wikipedia
Pedantically, in graph theory terms, I think we’re talking cycles, not just loops; a loop typically means an edge from a node to itself (like a self-patched module), but I think the issues are the same for @falkTX’s original A->B->C->A suggestion, which is a directed cycle (although in audio we’d typically call them both feedback loops, so…)
Anyway, given that cycles are the whole reason for the cable sample delay, we’d better be prepared to handle them!
My initial thought is this: we actually want what the basic algorithms will provide. Dijkstra’s algorithm doesn’t get caught in cycles–it finds the shortest weighted path, which by definition (for a graph with positive edge weights) can’t involve cycles. I think this is the behaviour we would typically desire for delay compensation.
Imagine this (I don’t have Rack on this machine or I’d work up a patch, so please forgive the text-heaviness):
S is our source oscillator. O is our output mixer. We want to time-align signals from S to O by different paths.
S goes straight to mixer channel 1 of O (dry path), but it also goes into a fancy feedbacking resonator that has three modules: A->B->C. B goes to channel 2 of O (wet path), but it also goes to C (probably an attenuator), which returns to A, forming the feedback loop/cycle.
I think that what we want is to align the straight path, S->O, with the direct path S->A->B->O. Pretend we generate a single-sample impulse from S. It reaches O with delay 1 via S->O; it reaches O with delay 3 via S->A->B->O; then via the cycle, it also reaches O with delay 6 via S->A->B->C->A->B->O, 9 via S->A->B->C->A->B->C->A->B->O, and so forth. The cycled signals simply can’t be time-aligned with each other (true in a physical system, too, even though the signals are propagating at much higher effective speeds!)
So we time-align using the first sample that escapes the cycle (having never gone through it), which, again, perfectly matches the shortest-path definition in a graph and is what Dijkstra et al. will give us. If the result of that algorithm doesn’t sound right, the next step would be to adjust the port delay manually.
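To make the worked example concrete, here is a minimal sketch of that graph and a plain Dijkstra over it. The node names (`"O.ch1"`, `"O.ch2"` for the two mixer ports) are invented for illustration; nothing here comes from the Rack API:

```python
import heapq

# One cable = one sample. The two mixer ports are modelled as
# separate terminal nodes so each arrival path is measured on its own.
EDGES = {
    "S": ["O.ch1", "A"],  # dry path to the mixer, plus send into the resonator
    "A": ["B"],
    "B": ["O.ch2", "C"],  # wet tap to the mixer, plus feedback send
    "C": ["A"],           # feedback return closes the cycle A->B->C->A
}

def dijkstra(edges, src):
    """Standard Dijkstra with unit weights. Because weights are
    positive, taking the cycle C->A can never shorten a path,
    so the cycle is simply never part of a shortest path."""
    dist = {src: 0}
    heap = [(0, src)]
    while heap:
        d, node = heapq.heappop(heap)
        if d > dist.get(node, float("inf")):
            continue
        for nxt in edges.get(node, ()):
            if d + 1 < dist.get(nxt, float("inf")):
                dist[nxt] = d + 1
                heapq.heappush(heap, (d + 1, nxt))
    return dist

dist = dijkstra(EDGES, "S")
# dist["O.ch1"] == 1 (S->O) and dist["O.ch2"] == 3 (S->A->B->O),
# so the dry input needs 3 - 1 = 2 extra samples of port delay.
```

The later arrivals through the cycle (delay 6, 9, …) never show up in `dist`, which is exactly the "first sample that escapes the cycle" behaviour described above.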
Good news - I got a response with a decision about the enhancement request.
Bad news - the request was turned down.
I am definitely disappointed, but at least the request did not disappear into the void - I appreciated getting an answer one way or the other.
Considering that in a typical patch maybe 1% or 2% of all input ports will use the delay feature, it looks like an overhead. When I need a delay, I use a dedicated module.
I am working on it with the Local Authorities. Call your Congressman and Senator.
Thanks for requesting and reporting back! I’m not surprised that @Vortico didn’t bite (in part due to @Ahornberg’s point, I’m guessing, and probably in part due to not wanting to make the GUI less explicit about what’s happening).
I’ll put the auto-syncing sample aligner I describe above onto my VCV ideas list for when I start coding in Rack again (November?) as there might be a (slightly clunky) way to squeeze it into a module after all.
digging up this old thing to leave this here:
I would highly appreciate a way to see how much delay a chain is subjected to. From what I gathered, each cable adds 1 sample of delay. I usually run into issues when combining gate and v/oct cables, as I like to process them a lot individually; the way I understand it, each module adds another 1-sample delay in any chain. So for complex patches, what I need to do is visually follow the wires on every module, count them up, calculate any differences with any parallel chain, and place a sample delay module in the faster chain to sync the signals up. What would make this process tons easier is if I could just hit F3 for CPU load mode, hover over any given module’s gate/CV pairs, and get feedback on how much latency is currently involved at any given input. This should be rather trivial to implement. I just hover over the inputs, see the difference right away, get my latency delay module and fix it, hover over again, see they have the same sample delay figures now, and I’m fine. I don’t even need automatic delay compensation - maybe in 2028, but not today.
Maybe a sync checker module would be an easy interim solution: a simple module with, say, 8 inputs and a little display next to them. You feed them all the triggers or gates or even CV signals, and it calculates and compares sample delays between those inputs, flagging any noticeable difference. So I could play a keyboard into it, get some sync triggers from the MIDI module as well, and see if everything really lines up. So, essentially, a signal sync checker module idea - anyone up for coding one?
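The measurement core of such a module is small: find the first rising edge on each input and report offsets relative to the earliest one. A minimal sketch (function names and the 1 V threshold are illustrative assumptions, not from any existing module):

```python
def first_edge(samples, threshold=1.0):
    """Index of the first rising edge (crossing the threshold
    from below), or None if the signal never fires."""
    prev = 0.0
    for i, v in enumerate(samples):
        if prev < threshold <= v:
            return i
        prev = v
    return None

def sync_report(inputs):
    """Map each input name to its edge offset (in samples) relative
    to the earliest-firing input; all-zero offsets mean in sync."""
    edges = {name: first_edge(sig) for name, sig in inputs.items()}
    fired = {n: e for n, e in edges.items() if e is not None}
    if not fired:
        return {}
    earliest = min(fired.values())
    return {n: e - earliest for n, e in fired.items()}
```

If a gate fires at sample 5 on one input and sample 7 on another, the report shows offsets 0 and 2, telling you the first chain needs 2 extra samples of delay to line up.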
Heh, now that this got dug up, I’ll just mention (and +1) the notion that optimally there would be positive and negative delay adjustable by the user. In other words, just set how many samples later or earlier you wish a particular signal to be. Setting something earlier naturally involves delaying everything else behind the scenes by the required amount - but the net effect is that the one signal you are adjusting simply appears to come in n samples earlier.
This has been routine operation in software like Ableton Live, for example. You can simply set a positive or negative shift in samples, for any channel inside the project.
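The bookkeeping behind "negative" delay is a one-liner worth spelling out: you can never actually advance a signal, so an earlier shift on one channel becomes extra delay on all the others, plus a bit of overall latency. A sketch under those assumptions (the function name is made up):

```python
def normalize_shifts(shifts):
    """Turn user-requested shifts in samples (negative = earlier,
    positive = later) into the non-negative delays actually applied
    per channel, plus the net latency added to the whole mix."""
    earliest = min(0, min(shifts.values()))
    delays = {ch: s - earliest for ch, s in shifts.items()}
    return delays, -earliest
```

Requesting `{"kick": 0, "bass": -3, "pad": 2}` yields delays `{"kick": 3, "bass": 0, "pad": 5}` with 3 samples of added latency: relative to the rest, the bass now arrives 3 samples "early", exactly as asked.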
This is no longer true.
So this is no longer an option.
Anyway.
In modular, chains of various lengths can converge into a single port. Either directly (multiple cables into a single input port, summing the input) or via some mixing/summing process. Either forward or backward (feedback). And there is also polyphony to consider.
Having said that, it might be quite a challenge to keep all that in sync. But…why and when would we actually need to sync?
Basically, each sample delay is a phase shift. Not without consequences. Fatal in some use cases, but more or less irrelevant in others (most?).
Let’s call these the fatal use cases:
- any use case with a converging point where timing/syncing (especially of binary edges/transitions) is important, like converging/mixing/summing of triggers/gates where the mix/sum at a specific point in time is critical for whatever process follows. E.g. some kind of logic might fail due to timing/sync/phase differences.
- any tuned feedback loop, since each sample delay introduced into the loop will effectively lengthen the delay line/loop and thus lower its frequency (since the overall processing/sample rate is constant).
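The second case is plain arithmetic: a loop whose round trip is N samples resonates at sample_rate / N, so one extra cable detunes it. A quick illustration (Karplus-Strong-style, numbers chosen for the example):

```python
def loop_frequency(delay_samples, sample_rate=48000):
    """Fundamental of a tuned feedback loop whose total round trip
    is delay_samples at the given sample rate."""
    return sample_rate / delay_samples

f_before = loop_frequency(109)      # ~440.4 Hz, near concert A
f_after = loop_frequency(109 + 1)   # one extra cable: ~436.4 Hz, audibly flat
```

A single hidden sample costs roughly 4 Hz here, and proportionally more the shorter (higher-pitched) the loop is.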
Sample delays introduced in signal paths of various lengths that converge further downstream will always introduce relative phase shifts (and thus comb filter effects). But generally this is a non-issue in most ‘vanilla’ use cases, like when mixing/merging various audio signals, originating from signal paths of different lengths.
In a practical sense, I guess the whole sample delay thing is just something we need to circumnavigate wherever it is relevant in a specific use case. I don’t see a practical way in which all the permutations of a complex network could be synced automatically (or even manually). Networks might include dynamic routing/switching, so the lengths of signal paths might vary per sample-rate cycle.
VCV Rack operates sequentially in discrete steps at the sample rate. And there is limited parallel processing. Alas, we are not in a quantum computing world where many things can happen in parallel and in (practically) no time at all.
A similar/derived thing is the fact that in VCV Rack we have separate input and output ports (with effectively 1 samplerate cycle of processing in between).
> This is no longer true. So this is no longer an option. Anyway.
Further down this thread the thought was entertained that the sample delay would work on a cable level, not input level. I.e. you could adjust the delay of a specific signal in samples (or, timing of a specific signal in samples, either earlier or later, as I mused above) per a particular cable.
I saw that too. Sample delay options per cable channel (polyphony) might solve some use cases - probably more than at the input port level.
But…as said, signal path lengths (and the number of cables involved) in a dynamically switched network might change at runtime (potentially every samplerate cycle). So, networks might be fully synced at some point, and (partly) out of sync at others. Especially challenging in feedback scenarios.
Also, might be challenging to keep track of what happens in a network/patch, when any channel of any cable in the network/patch might hide/have one or more samples delay.
Static or dynamic sample or precise time delays can of course already be introduced:
- add an extra cable + forward copying module (e.g. a multiple)
- add a sample delay (multiple module options)
- add a precision delay (not many module options)
Sckitam WaveguideDelay offers both sample delay and precision/time delay (using a form of interpolation). The delay-time precision is fine and fast enough to even implement audio-rate phase modulation on any incoming signal.
> But…as said, signal path lengths (and the number of cables involved) in a dynamically switched network might change at runtime (potentially every samplerate cycle). So, networks might be fully synced at some point, and (partly) out of sync at others. Especially challenging in feedback scenarios.
Indeed. However, in scenarios like this, taking into account the “chaotic” nature of large feedback constructs, being able to set signal delays/timings would (in my experience) usually be a “flying by the seat of your pants” kind of deal, in any case. That is, you have an interesting construct going, and want to optimize, experiment, hone the way the signals interact by adjusting the timings and hearing the result, with all the resultant phase weirdness and such ripple effects contributing to the whole - the sound you are hearing.
In other words, there is nothing inherently preventing these kinds of controls in just the notion that, given a complex structure and signal flow, the effects can be unpredictable, and so on.
That is to say, in the end, it’s all about the fun of making sound!
During one of my lectures at a particular university, I built a chaotic feedback network inside Ableton Live, as it allows feedback routing, and it had quite a few channels feeding into each other, multiple sources into single destinations, stuff like that. Live does let you adjust the timing as described above, and yet, in cases like that, you know it’s more like “okay, let’s hear how this ends up”; that doesn’t mean that in a lot of simpler cases, control like that can’t also yield very exact, useful and predictable results.
So, the fact that controls like this wouldn’t be able to give predictable or even sensible results in all possible topologies isn’t a reason not to have them at all. In my opinion, and all that.
(Realistically, I don’t expect sample timing controls to be implemented into VCV cables.)