Hello!
I am developing a “feedback loop utility” module. It consists of a single stereo input and 3 channels, where each channel has a stereo output (“departure”), a stereo input (“arrival”), a delay knob, and a gain knob. At all times, the departure output carries the original input signal plus the arrival signal delayed by the given amount (0 s to 3 s) and multiplied by the given gain (0% to 100%). The idea is that the user can create their own effects chain between the departure and the arrival and have the signal passing through it feed back (controllably, via the gain knob), making for some quite interesting effects. With nothing in between, though, just two cables (one per side of the stereo pair) going from the departure to the arrival, it should behave like a simple delay effect.
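In signal terms, each channel computes departure(t) = input(t) + gain · arrival(t − delay). With the departure patched straight back into the arrival, this reduces to the classic feedback delay recurrence y(t) = x(t) + gain · y(t − delay): each echo comes back gain times quieter than the previous one, so any gain below 100% makes the echoes decay geometrically.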
To implement the tapped delay line functionality I am using a std::vector<std::pair<float, float>> memory (as in, a buffer of stereo samples) of size 3 (the maximum delay time in seconds) * samplerate. Each channel has a vector like this and a size_t write, which points to the current sample in the buffer being written to (a simplified Channel struct is sketched in the snippet below). I believe my process code below amounts to a ring buffer, though I could be wrong:
#define DELAY_MEMORY_SIZE 3 // maximum delay time, in seconds
/// ...
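/// for context, the per-channel state described above, slightly simplified:
struct Channel {
    std::vector<std::pair<float, float>> memory; // DELAY_MEMORY_SIZE * sampleRate stereo frames
    size_t write = 0; // index of the frame currently being written
};
Channel channels[3];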
/// inside of process:
// DELAY_TIME is in seconds (0 to DELAY_MEMORY_SIZE); it is converted to samples where it's used
#define DELAY_TIME clamp(params[DELAY1_PARAM + i].getValue() + inputs[DELAY1_MOD_INPUT + i].getVoltage(), 0.f, (float)DELAY_MEMORY_SIZE)
#define GAIN clamp(params[GAIN1_PARAM + i].getValue() + inputs[GAIN1_MOD_INPUT + i].getVoltage(), 0.f, 1.f)
for (int i = 0; i < 3; i++) {
    Channel& chan = channels[i];
    size_t size = chan.memory.size();
    // record this channel's arrival (the feedback return) into the ring buffer
    chan.memory[chan.write] = { inputs[ARRIVAL1_L_INPUT + i*2].getVoltage(), inputs[ARRIVAL1_R_INPUT + i*2].getVoltage() };
    // how far back in the ring buffer we have to go to reach the appropriate delayed sample,
    // capped so we never reach further back than the buffer holds
    size_t setback = std::min((size_t)roundf(args.sampleRate * DELAY_TIME), size - 1);
    // chan.write and setback are unsigned, so compute (write - setback) mod size by adding
    // size before subtracting: the plain difference underflows whenever write < setback,
    // and the "(x % n + n) % n" idiom only rescues signed negatives, not unsigned wraparound
    size_t delay_location = (chan.write + size - setback) % size;
    std::pair<float, float> delay = chan.memory[delay_location];
    // departure = dry input + delayed, attenuated arrival
    outputs[DEPARTURE1_L_OUTPUT + i*2].setVoltage(inputs[INPUT_L_INPUT].getVoltage() + delay.first*GAIN);
    outputs[DEPARTURE1_R_OUTPUT + i*2].setVoltage(inputs[INPUT_R_INPUT].getVoltage() + delay.second*GAIN);
    chan.write = (chan.write + 1) % size;
}
#undef GAIN
#undef DELAY_TIME
/// ...
This mostly works now, but an earlier version that computed the index as ((chan.write - setback) % size + size) % size (the usual signed-modulo idiom) introduced weird “skipping” artifacts every so often: sometimes an old echo would get amplified, sometimes a relatively new echo would die out immediately. Here is a simple example, recorded with a sine oscillator as the input; the expected behavior is a neatly echoing signal, where every echo takes equally long to die out and none gets cut off in the middle. I checked practically every variable whose value could give a clue as to what was wrong, and nothing seemed suspicious.
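To make the failure concrete, assume size = 144000 (3 s at 48 kHz), write = 100 and setback = 48000. The delayed sample lives at (100 - 48000 + 144000) % 144000 = 96100. But because both variables are size_t, chan.write - setback underflows to 2^64 - 47900, and since 2^64 % 144000 = 111616, the signed-style expression lands on (111616 - 47900) % 144000 = 63716 instead. The "(x % n + n) % n" idiom only rescues signed negatives; with unsigned wraparound, every time the write pointer wraps past the setback the tap jumps to an unrelated spot in the buffer until the pointer catches up, producing exactly these skipping artifacts.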
Moreover, this solution is quite memory-inefficient in general: 15 MB taken up by a single module? We should do better! So I’m curious about suggestions for completely rethinking the delay line.
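One direction I can think of, assuming the 15 MB comes from sizing every buffer for a worst-case engine rate: reallocate the buffers for the actual rate whenever it changes. A minimal, untested sketch, assuming Rack v2’s Module::onSampleRateChange event:
// a minimal sketch, assuming Rack v2's onSampleRateChange event; buffers are
// resized so memory stays proportional to the actual engine sample rate
void onSampleRateChange(const SampleRateChangeEvent& e) override {
    for (Channel& chan : channels) {
        chan.memory.assign((size_t)(DELAY_MEMORY_SIZE * e.sampleRate), {0.f, 0.f});
        chan.write = 0;
    }
}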
Just to reiterate, I’m not working on a full-fledged delay effect; I just want to be able to mix a delayed version of a signal back into itself.
I can provide more snippets from my code and answer any questions.