I’ve been developing a module to generate rhythms. It produces 8 channels of triggers and I’m interested to hear opinions on how accurate users might expect their trigger outputs to be.
Currently the sequencer uses ~2.6% CPU when the triggers are accurate to 1 sample. At 1/2 ms accuracy it uses ~0.1% CPU, and at 1/8 ms accuracy ~0.5% CPU. I will likely make this configurable, but I'm trying to decide on a sensible default.
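Not from the thread, but to make the tradeoff concrete: if the module only evaluates its clock every N samples, a trigger can land up to N−1 samples late. A minimal sketch of that worst-case error, assuming a 44.1 kHz sample rate (the function name and rate are my own, for illustration):

```python
SAMPLE_RATE = 44100  # Hz, assumed from the thread's "44.1k SR"

def worst_case_error_ms(check_every_n_samples):
    """Worst-case trigger lateness, in ms, if the clock is only
    evaluated once every N samples instead of every sample."""
    return (check_every_n_samples - 1) / SAMPLE_RATE * 1000.0

# Checking every sample gives 0 ms error; checking every 4 samples
# bounds the error at 3 sample periods (~0.07 ms).
```

Fewer clock evaluations per block is where the CPU savings come from; the cost is this bounded timing jitter.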
If it were me (and it’s not), I would just divide by 4 for 0.1 ms. Plenty accurate, plenty low on the CPU. I wouldn’t add a config, but if you feel like it, why not?
Well, I’ve read the results of experiments, and done some myself. But if you’ve looked into it and found otherwise, cool, no need to clap. But if you haven’t done any experiments or read any papers about it… then it’s not a fair clap fight, is it?
Oh, sorry, I thought the “Ok?” following the three clap emojis was sarcasm. OP asked, "I can clock at 22.6 microseconds (one sample at 44.1k SR), but I can save a lot of CPU if I clock lower; what are others’ opinions?" I replied that I thought you could go all the way up to 1000 microseconds and it would still sound very tight. I took your reply to be disagreement with my original reply.
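For anyone following the units here, a quick arithmetic check (my own, not from the thread) of the numbers being debated:

```python
# One sample period at 44.1 kHz, and how many samples fit in 1 ms.
SAMPLE_RATE = 44100  # Hz

period_us = 1_000_000 / SAMPLE_RATE  # one sample ≈ 22.68 microseconds
samples_per_ms = SAMPLE_RATE / 1000  # 1 ms spans ~44 samples
```

So the thread's two extremes, sample-accurate clocking (~22.7 µs) and 1000 µs clocking, differ by a factor of about 44 in how often the clock must be checked.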
Latif, it only matters because these things cost CPU, so the relevant question is: when is the latency low enough? “Enough” being the important word. There’s no point burning CPU to achieve precision we can’t hear.