=== vcp challenge #77: imagine ===

you are challenged to make a very cool patch in vcv rack to showcase creative use of modules, so other users may get inspired. you can also win eternal fame!

for this edition, the challenge is to make a patch with pachde imagine. (and definitely click through to the user manual!)

good luck!

== rules ==

  • make a patch in vcv rack and upload your patch to patchstorage.com, and/or attach it in your reply to this thread
  • tag your upload with vcp-77
  • make a video showcasing your patch and upload it to youtube (if possible), or record some audio and upload it to e.g. soundcloud
  • add a link to your video or audio as a comment to this post
  • give feedback on other participants’ patches

deadline: end of the day, wherever you are, october 31st


I can’t wait to see what you all do!

I’d be grateful if you tag your patchstorage upload with pachde-one in addition to vcp-77.

Thank you, ablaut, for proposing my module.


I did one. The video is here. There are details and a patch linked in the video description.


I have a question: is that module able to use animated PNGs?

Don’t roast me please… I think it would be just mind blowing… :star_struck:

You can probably open an animated PNG, but it would use only the first frame. I’m not sure exactly how you would implement sampling across frames – I guess just flip to the next frame and continue at the next sample point along the trajectory. The only gain I see is a more entertaining visual (the Computerscare plugin has a module that plays animated PNGs; in fact, I looked at that code to understand how to work with images in Rack). As far as the usefulness of the generated signals goes, I think it would only make them more chaotic.
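A rough sketch of that flip-and-continue idea in Python (entirely hypothetical; this is not how Imagine is implemented, just an illustration of the sampling scheme):

```python
def sample_animated(frames, trajectory):
    """Advance to the next frame at each sample point along the
    read-head trajectory, wrapping back around at the end.
    `frames` is a list of 2D pixel arrays, `trajectory` a list of
    (x, y) sample points. Names and types are purely illustrative."""
    for i, (x, y) in enumerate(trajectory):
        frame = frames[i % len(frames)]   # flip to the next frame
        yield frame[y][x]                 # continue sampling there

# Two one-pixel frames, three sample points: samples alternate frames.
print(list(sample_animated([[[10]], [[20]]], [(0, 0)] * 3)))  # [10, 20, 10]
```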

Please try it out and see what you can find. Interesting photos may not make the most interesting music. Try blurred or posterized pictures or cartoons. Try some of those phone pics of the inside of your pocket. If you have some graphics skills (or know someone), or even not, create images just for driving Imagine.


#1. I’ve had a go with Imagine before, but it’s a weird module, because… I’m tempted to associate scanning through data with something that would make for cool, instant drones but it does not do that, at all :slight_smile: The output is so nervous and chaotic and I guess I’m not good at nervous music, huh. BUT at the same time, it makes me want to turn the very image into sound and not use the module as a source of randomness. So, I tried to tame it. Into some mellow Krell type-of-thing.

I heavily cut down Imagine’s triggers, which sample-and-hold into Rampage for random envelopes. The EOCs of those envelopes sample the R and B values from the image, used as pitch for Energy, and every linebreak from the scanner sends a random offset to Energy’s frequency (via µMAP to access the knob, so it stays in harmonic ratios). Also, every linebreak records that voice, played back at 0.5 speed. Values from G get smoothed out and go into XFX. Some other bits and bobs, delay and reverb to taste ;)


Here is my jazzy entry that uses Imagine to drive almost all the elements of the patch.

One drum voice gets gates directly from Imagine processing the chroma values. Gates for three additional drum voices are derived from different permutations of RGB pairs passing through a comparator that fires a gate when one signal is higher than the other. The chaotic gates are then conformed to a musical grid via a clock with sample and hold. The only voice that does not use Imagine is the hi-hats: the clock passes through one Bernoulli gate to determine whether the hi-hat will strike, and another Bernoulli gate to decide between open or closed.
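For anyone who hasn’t patched a comparator before, the gate derivation and the clock/sample-and-hold step can be sketched like this (hypothetical helper code, not the actual VCV modules):

```python
def comparator_gate(a, b, gate_v=10.0):
    """Fire a 10 V gate when signal a is higher than signal b."""
    return gate_v if a > b else 0.0

def grid_conform(gates, clock):
    """Sample and hold the chaotic gates on clock ticks, so hits
    land on the musical grid instead of arriving at random times."""
    out, held = [], 0.0
    for g, tick in zip(gates, clock):
        if tick:
            held = g      # sample on the clock edge...
        out.append(held)  # ...and hold between ticks
    return out

# Permutations of RGB pairs feeding the comparator:
r, g, b = 4.2, 6.8, 3.1
gates = [comparator_gate(*pair) for pair in [(r, g), (g, b), (b, r)]]
print(gates)  # [0.0, 10.0, 0.0]
```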

The melodic line uses the slewed green signal to drive the new Intervallic Pair Quantizer from CuteFox Modules. The resultant V/Oct sequence is doubled with one voice offset 2 octaves and a fifth. Another Bernoulli gate determines whether the gate is sent to the bass or treble channel of the polyphonic FM-OP. The FM depth is subtly modulated by the slewed minimum value of the red and blue signals. And that pretty much sums up the entire patch.

I did not do anything special to rhythmically align the melody with the drums. To my ears it just seems to work.


I have some seamless animated PNGs of Mandelbrot and Julia sets, and I thought it could be worth a try to see what those would sound like. I already know that turning a Mandelbrot set into sound without any embellishment is absolutely boring, but adding some variation (always related to some fractal rule) can turn it into something interesting and far less boring.

BTW: is there a limit to the number of imagine modules we can use?

  • Ablaut’s challenge statement specified no limit on the number of modules (Imagine or not).
  • In the pachde One plugin, there is no coded limitation on the number of Imagine modules you can have in a patch.

I like this patch very much, thank you for the inspiration! Great idea to clean up the jittery signals coming from Imagine with an LPF.

Have no clue what the CuteFox actually does and how it is being used, even after reading the manual. Can it be explained in simple terms?

As I understand it, the original intent was for the pair of intervals to establish all possible pitch values. Starting from 0V you go up the 1st interval, then the 2nd, then the 1st again, etc… Going down from 0V I think you first go down by the 2nd interval, then the 1st, then the 2nd, etc., though I am not positive. This would establish all possible pitches, and then for any given input, the quantizer would pick the “legal” pitch closest to the input. So the quantizer would be absolute - it would always pick the same pitch for any given input.

But the developer struggled with implementation, and ended up doing something different. The set of available notes is established based on the most recent quantized value. In other words, the last quantized pitch becomes the new 0 point. So every time you quantize a note, it is relative to the previous quantized value, and a given input may produce different notes depending on what was played previously.

For example, suppose the quantizer is set to a perfect fifth for interval 1 and a perfect fourth for interval 2. Starting from 0V you feed it value y that quantizes to the fifth plus the fourth.

But suppose instead you first feed an intermediate value of x that quantizes to the fifth, and then you feed value y. The quantizer now starts from the fifth, and then quantizes up another fifth (the 1st interval). So in this case value y yields up two fifths from 0 instead of a fifth + fourth.

The end result is some interesting key changes as the inputs jump around.
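A minimal sketch of that relative behaviour in Python (my own reconstruction from the description above, not the CuteFox code; intervals in 1 V/Oct volts, so a fifth is 7/12 V and a fourth 5/12 V):

```python
def quantize_relative(v, last, i1=7/12, i2=5/12, steps=24):
    """Relative two-interval quantizer. The set of legal pitches is
    rebuilt around `last` (the previous quantized value): stack
    interval 1 then 2 going up, interval 2 then 1 going down, then
    snap the input `v` to the closest legal pitch."""
    notes, up, down = [last], last, last
    for k in range(steps):
        up += i1 if k % 2 == 0 else i2    # up: 1st, 2nd, 1st, ...
        down -= i2 if k % 2 == 0 else i1  # down: 2nd, 1st, 2nd, ...
        notes += [up, down]
    return min(notes, key=lambda n: abs(n - v))

# Fed 1 V straight from 0 V: snaps to fifth + fourth (an octave).
quantize_relative(1.0, 0.0)        # -> 1.0
# Fed an intermediate value first, the reference point moves to the
# fifth, and the same 1 V now stacks another fifth: two fifths from 0.
q = quantize_relative(0.6, 0.0)    # -> 7/12, the fifth
quantize_relative(1.0, q)          # -> 14/12, two fifths
```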

I am actually working on a similar idea of quantizing based on intervals, but using absolute quantization instead of relative. In addition, my quantizer will support up to 10 independent step intervals, and the minimum interval step size can be any equal division of any pseudo-octave. I think it will be extremely flexible, capable of producing both beautiful and horrible results. Anyone interested in microtuning and/or non-octave scales should find it interesting.


Thank you so much! I spun out this topic into another thread to not water down the VCP-77 challenge.

if you put the link on its own line, with an empty line before and after it, the video will be embedded in your comment

well, you tamed it quite well. the result is pleasing, and something i could see myself producing.

one does of course not have to use all the output of the module, and judicious use of sample-and-hold and similar techniques will lead to more harmonious results.

i haven’t experimented with the module myself yet, but i’m wondering how far we can slow down the speed, and how things like slew can mold the output.


interesting stuff! i discovered a new module.

Thanks. I knew that, but it’s been a while since I posted on here, so I was lucky to remember the format for linking at all.

the actual first experiment with Imagine:

(also, if anyone has any ideas how to make a drone-y, soft kind of pipe organ happen, I’ll be happy to hear them:)


Tip for slowing the speed: Right click the speed button and manually enter very small values.


Here is another quick patch exploring the use of Imagine as a VCO.

I drive the play head’s Y coordinate with a relatively fast LFO to establish pitch, and the X coordinate with a slow LFO to vary the timbre. I like how using two different signals (red and green in my case) gives a stereo image.

My patch has two voices, but Imagine is not polyphonic, so it requires two instances of Imagine.

The Proteus sequencers send their V/Oct pitch information to the Y LFO to establish the melodies. A bit of LFO is mixed in with the high voice pitch information to add a tiny bit of vibrato.

I think Imagine is functioning very similarly to a wavetable VCO when used like this.
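The wavetable analogy can be sketched in a few lines (a toy model under my own assumptions, nothing from the actual plugin: a sine “Y LFO” at audio rate sweeps one column of pixels, the X position picks the column/timbre, and pixel brightness becomes the sample value):

```python
import math

def image_vco(image, freq, timbre, sr=48000, seconds=0.01):
    """Scan one image column with a fast sine LFO on Y; pixel
    values (0..255) map to audio samples in [-1, 1]."""
    height = len(image)
    x = int(timbre * (len(image[0]) - 1))              # slow X = timbre
    out = []
    for n in range(int(sr * seconds)):
        phase = math.sin(2 * math.pi * freq * n / sr)  # fast Y LFO = pitch
        y = min(int((phase * 0.5 + 0.5) * height), height - 1)
        out.append(image[y][x] / 127.5 - 1.0)          # pixel -> [-1, 1]
    return out

# A two-row image (black row over white row) behaves like a square wave:
wave = image_vco([[0, 0], [255, 255]], freq=440, timbre=0.0)
```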


I’m out of my league on this but couldn’t help taking up the challenge. I’m much better with circuit design or coding but the Imagine module looks so interesting!

Using the Zoxnoxious analog synth, it seemed only fair to use pics of the boards and the analog IC chips. Conceptually the synth is playing itself: the 3340 pic is hooked up to the 3340 VCOs, and the 3372 VCF/VCA pic drives the 3372 chip. I weaseled out on driving the synth voice with an IC pic and just used a circuit board pic, as that gave a lot less randomness. It’s pretty raw, no effects, just recorded straight from the synth. Ok, it’s really raw. yup.

I never realized how my poor soldering job on those chips would literally make it into the audio path.