Module similar to Valhalla Shimmer?

Yeah, well, it’s an FFT → iFFT approach, not simply sample/hold/delay/stutter based (though you can use that approach to emulate time stretching somewhat).
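For anyone curious what that FFT → iFFT approach looks like in practice, here’s a rough Python/NumPy sketch of the core Paulstretch idea (my own simplified illustration, not Nasca’s actual code): step through the input in overlapping windows more slowly than you write them out, keep each frame’s magnitude spectrum, randomize the phases so the stretched frames blend smoothly, and overlap-add the results:

```python
import numpy as np

def paulstretch_like(x, stretch=8.0, window_size=4096):
    """Very simplified Paulstretch-style stretch: overlapping windowed
    FFT frames, magnitudes kept, phases randomized, iFFT, overlap-add."""
    win = np.hanning(window_size)
    out_hop = window_size // 2          # output hop (50% overlap)
    in_hop = out_hop / stretch          # input advances slower -> stretch
    n_frames = int((len(x) - window_size) / in_hop)
    out = np.zeros(n_frames * out_hop + window_size)
    rng = np.random.default_rng(0)
    for i in range(n_frames):
        start = int(i * in_hop)
        frame = x[start:start + window_size] * win
        spec = np.fft.rfft(frame)
        mags = np.abs(spec)
        # random phases: this is what smears transients into that
        # characteristic frozen, ambient Paulstretch texture
        phases = rng.uniform(0, 2 * np.pi, len(spec))
        frame2 = np.fft.irfft(mags * np.exp(1j * phases))
        o = i * out_hop
        out[o:o + window_size] += frame2 * win
    return out
```

The real algorithm adds details (much bigger windows, a special window shape, onset handling), but the magnitude-keep/phase-randomize loop is the heart of it.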

Paul Nasca created several other great tools/synths.

Paul Nasca has also developed the PADSynth algorithm (e.g. as implemented in his classic ZynAddSubFX soft synth).

PADSynth is another great iFFT (Harmonic Series) based tool to create, well, Pads. In VCV you’ll find great implementations by DocB.
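As a rough illustration (not the exact ZynAddSubFX implementation), the PADSynth idea can be sketched in a few lines of NumPy: build a magnitude spectrum out of Gaussian-blurred peaks at each harmonic, give every bin a random phase, and iFFT the whole thing into one long, seamlessly loopable pad wavetable. Function name and parameter choices here are my own:

```python
import numpy as np

def padsynth(sr=44100, n=2**18, f0=220.0, n_harm=8, bw_cents=40.0):
    """Simplified PADSynth: Gaussian-blurred harmonic peaks in a
    magnitude spectrum, random phases, one big iFFT -> loopable pad."""
    spec = np.zeros(n // 2 + 1)
    freqs = np.arange(n // 2 + 1) * sr / n
    for h in range(1, n_harm + 1):
        fh = f0 * h
        # bandwidth (in Hz) grows with the harmonic number,
        # which gives the upper partials that lush, detuned spread
        bw = (2 ** (bw_cents / 1200) - 1) * fh
        amp = 1.0 / h
        spec += amp * np.exp(-((freqs - fh) ** 2) / (2 * (bw / 2) ** 2))
    rng = np.random.default_rng(1)
    phases = rng.uniform(0, 2 * np.pi, len(spec))
    wave = np.fft.irfft(spec * np.exp(1j * phases), n)
    return wave / np.max(np.abs(wave))  # normalize
```

Because the spectrum is defined directly in the frequency domain, the resulting buffer loops with no seam: every partial sits exactly on an FFT bin.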


For the layman (me): how would Paulstretch even work in realtime?

Check out Norns Glaciers - not realtime, but runs on samples/four buffers. Live recording was introduced in an update.

There’s PaulXstretch too - a non-realtime stretching VST. https://sonosaurus.com/paulxstretch/


Well… to also get a bit into the sample-based options for Time Stretching, and Time Stretching in general…

Sound could be broadly defined as some Frequency Spectrum, static or developing over time. Any spectrum at any specific moment in time could be a combination of multiple sounds (implicitly also notes/harmonies).

A static spectrum (a single ‘note’ held for some amount of time) is pretty easy from a Time Stretching perspective. It could simply be solved by looping the sound.

The problems arise when the spectrum does change over time. The Time Stretching concept is modulating (lengthening) the Time dimension without modulating/affecting the Frequency Spectrum relative to the time.

Audio sampling is just a way of recording an audio signal and therefore implicitly its spectrum: creating a long row of samples, each sample representing a fixed timeframe (of 1/samplerate).

So, the (development of the) spectrum, relative to the time, should not change. Only the Time dimension should change. Simply lowering the playback rate of the sample playback is in fact equivalent to lowering the frequency, implicitly also lowering all frequencies in the spectrum. So that’s not an option.
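A quick NumPy sketch makes that concrete (function name and the linear interpolation are my own illustration): naive variable-rate playback at half speed doubles the duration, but it also halves every frequency in the spectrum, which is exactly the coupling Time Stretching wants to break:

```python
import numpy as np

def play_at_rate(x, rate):
    """Naive variable-rate playback: read the sample buffer with a
    fractional step (linear interpolation between neighbours).
    rate < 1 lengthens the sound, but also lowers every frequency
    by the same factor -- duration and pitch stay coupled."""
    idx = np.arange(0, len(x) - 1, rate)
    i = idx.astype(int)
    frac = idx - i
    return x[i] * (1 - frac) + x[i + 1] * frac
```

Feed it a 440 Hz sine at rate 0.5 and you get a sound twice as long, but at 220 Hz: the classic tape-slowdown effect.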

What we would like is for each sample or bit of spectrum to last longer than the original timeframe by some factor.

A sample is meant to represent a fixed timeframe of 1/samplerate. So, stretching the whole chain would leave you with gaps between samples.

Ideally you would fill the gaps using some fancy smooth spectral transition, e.g. interpolation using FFT/iFFT.

But you could also fill these gaps towards the next sample with simple copies of the current sample/timeframe: just the one, or maybe also some subsequent samples in some timeframe. Then just keep playing the same sample or timeframe until a new sample/timeframe is reached in relative time.

This is way simpler to implement than an FFT/iFFT based approach, e.g. by using audio-rate sample & hold and delay/echo (with or without per-sample buffer reset), or using the read head/buffer of a sampler. This is also the basic principle behind a granular synthesis solution.

You just walk over the timeline at some rate relative to the original rate, looping the sample or tiny timeframe that exists at that point in time. Maybe smearing things out more evenly using a reverb (basically a complex delay).

Here’s @Omri Cohen demonstrating/explaining some basic sampler/looper based Time Stretching using PantherCap (a commercial plugin that also has Time Stretching built in).

Time-Stretching with PantherCap - YouTube


You can also create timestretch effects with the free Prince of Perception module. Use a short delay time (<200 msec), switch the freeze on and slowly modulate the scan control to move through the buffer. I made a dodgy tutorial on this, I don’t have Omri’s silky voiceover skills but you get the idea!
