Python-scripted generative plugin? Looking for feedback, collaborators


I’d like to propose an idea and collect some feedback (has someone tried this? is it insane?) – and maybe find collaborators. It may sound a bit off, but please hear me out.

I’d love to make a plugin that has a Python interpreter compiled into it – why and what for?

Rack is mostly about generative music for me, but with the existing plugins I find it pretty hard to experiment quickly with certain kinds of ideas, even simple ones. In a way you have to let the plugins’ capabilities guide you, rather than use the plugins to implement an idea you have. This is a lot of fun, of course, but sometimes not enough. Using tons of basic plugins as a kind of “programming language” typically ends in an insane cable mess, and it does not qualify as quick experimentation. Also, more “musical” stuff (scales, modulation, chords) is challenging with modules.

The way out, of course, is programming. But writing and compiling a dedicated C++ plugin for each “What would it sound like if…?” isn’t really a nice workflow.

Hence the idea of having a simple but expressive language like Python compiled into a plugin.

Some ideas:

  • Obviously, the plugin would not do any signal processing. It would operate in the temporal resolution of a typical sequencer and create a bunch of V/Oct, CV and gate signals triggered by a clock pulse.

  • It could have a bunch of generic knobs or sliders (with CV inputs) whose values can be read from the Python script, much like generic hackable Eurorack hardware modules that run different algorithms (O&C, Mordax Data, …). It should have a small “display” for showing some text. More graphical output is not necessary.

  • It could also serve as a rapid prototyping tool for plugin devs to some limited degree, mainly for prototyping sequencer type ideas.
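To make the idea a bit more concrete, here is a rough sketch of what such a user script could look like. Everything here is invented for illustration (the `knob()` callable, `make_sequencer`, the scale table) — not an existing API:

```python
import random

# Hypothetical sketch of a user script for the proposed module: the module
# would call step() on every clock pulse and expose knob values through a
# knob() callable it provides. All names here are invented for illustration.
MINOR_PENT = [0, 3, 5, 7, 10]  # semitone offsets of a minor pentatonic scale

def make_sequencer(knob):
    def step(tick):
        length = 1 + int(knob(0) * 7)        # knob 0 sets pattern length (1-8)
        degree = tick % length
        semitones = MINOR_PENT[degree % len(MINOR_PENT)]
        gate = random.random() < knob(1)     # knob 1 sets gate probability
        return semitones / 12.0, gate        # (V/Oct, gate) for the outputs
    return step
```

The point is that the script only runs once per clock pulse, so even interpreted Python is fast enough.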

Why Python?

  • I’ve done generative music with Python before, using MIDI, and I know it works. I could use my MIDI stuff of course, but a plugin would be way cooler and smoother.

  • Building an API for musical concepts (scales, modes, chords, transposition, sequence mutation) is quite straightforward with Python. I’m happy to contribute what I’ve built in that space.

  • Python is (somewhat) modern, expressive and very accessible, in contrast to some other languages used in generative music. Maybe it can get some people into coding and music who would not touch C++. There are a lot of great resources for learning Python out there.

  • Python is known to play nicely with C++. Embedding Python in a C/C++ project is something that’s documented and has been done before.

  • Python has some language features on board which are really cool for generative music, such as generators, list manipulation APIs, and randomness utilities. You can create powerful APIs and do things like:

    import random  # Dorian/Phrygian below stand for a hypothetical scale API

    sequence = [1, 3, 5, 6, 7]
    scale = Dorian('A')
    other_scale = Phrygian('D#')
    transposed_sequence = [other_scale[d] for d in sequence]
    notes_random_oct = [scale[i] + random.choice([-12, 0, 12]) for i in sequence]
  • Python has a ton of great data processing, numerics and, not least, machine learning libraries. If anyone is into experimenting with this stuff for creating music, such a plugin would be a great entry point. (I actually experimented a bit with linear dimension reduction on classical piano scores.)

Why haven’t I just done it?

While I work with Python on a daily basis, I have not really touched C++ for at least 5 years. I’ve played with the plugin API and find it easy enough, but I find the whole part of compiling Python into the plugin really challenging, as I’ve never been good with compilers anyway. Also, my C++ is probably not up to modern standards anymore.

So, if the idea is not completely insane, I’m looking for collaborators, especially those who are good with the compiler (I’m on Mac, btw). I’m happy to do the Python side and the core plugin functionality, but I need help with the deep embedding stuff. I think I have a fairly clear idea about the overall architecture, and I’m determined to keep it simple in the beginning.

(Sorry for a long post.)



Hi! This is a good topic and I support any opportunity to get more freedom for creativity, but I think sending MIDI/OSC signals from a script to VCV isn’t a problem, so in the end you don’t need a dedicated plugin for it to work.

In this case you can do everything you want in literally any program or programming language, as long as it supports the ability to send something out.

I’m kind of in the same boat as you, in that I know Python but not C++. There is a Formula module that could probably be repurposed if you want to use Python specifically. I think that rather than trying to fit an IDE inside a module, you would do better to just have the module call a command line instance of the Python interpreter behind the scenes. The user could use the menu in Rack to select the particular script, then any additional flags or whatever could be added to the text box. Thus, on trigger, it would run ‘python [selected script] [flags]’.

Source code for the Formula module:

@dataphreak2 There’s a misunderstanding here: I have no intention to embed an IDE in a module! I want exactly what you describe: a module that loads and executes a Python script that you edit wherever you like (but not in VCV Rack). However, this still requires compiling the Python interpreter, which executes the script, into a Rack module. MIDI is OK as a workaround, but it would be cool to have CV inputs and a bunch of knobs to control the script’s parameters. Maybe this can be hacked, but a dedicated Python module would be much smoother.

You shouldn’t have to compile the interpreter. It’s possible to call the system shell from the C++ binary. You’d just need to create a string and send it as a system call. Now, that may be prevented by VCV, but I don’t think so. The C++ function in question is system("command"). For example, system("python -m pip install audio"). This requires that python be on the PATH.

I’m thinking more towards a simple solution which you can try to create right now. I fully understand that the solution I propose is not entirely proper, and it’s limited, but the basic concept can look like this:
You can use the OSC module by TrowaSoft and send/receive everything you need in your script: receive OSC to control the script’s parameters and send something back.
As far as I know, there is no MIDI-OUT module in VCV yet, so OSC is probably the only way.
(Besides, you can always install Max/MSP or VVVV, heh)

Thanks for those ideas! I have to admit that I didn’t know OSC before, which is why I didn’t understand your message. There even seems to be a Python OSC client / server. I’ll definitely have a look.

@dataphreak2 How would I get values from the script in to VCV? According to documentation, system only seems to return the status code. Alternatively, popen seems to let me read the output, but that would mean parsing strings.
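For what it’s worth, the script side of that popen approach could stay very simple — e.g. print one value per line and let the module parse stdout. A sketch with invented conventions (one line per output channel, fixed-point formatting):

```python
import random
import sys

# Hypothetical protocol for the popen approach: the script prints one CV
# value per output channel; the C++ module would read stdout line by line
# and parse each line as a float.
def emit_cv(channels=8):
    for _ in range(channels):
        sys.stdout.write(f"{random.uniform(-5.0, 5.0):.4f}\n")
```

The obvious downsides are the string parsing and the process-spawn cost on every trigger, which is why an embedded interpreter still looks attractive.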

I still think an integrated interpreter would be elegant, but I’ll definitely experiment with your ideas.


Shantarli’s is definitely the best solution. Very simple. No reinventing the wheel. Python loads the OSC library, spins up a daemon, receives on a channel, does the math, spits it back out. What’s more, now your Python script is platform independent. It can work with just about any DAW.


@dataphreak2 Just did an absolutely minimal test with the suggested TrowaSoft OSC module and the Python library I linked, which is quite powerful. Both send and receive seem to work really well, so this is the way to go!

Would love to handle string messages for displaying status, but I’d have to fork the TrowaSoft module for that.


Glad you’ve sorted this out. Happy patching! :slight_smile:

Is your idea a generic pre-packaged module with a fixed number of inputs/outputs/parameters (say 8/8/8) that loads a Python file? If you drop support for multiple Python files and external Python libraries, you could save the Python code to internal module data and distribute .vcv and .vcvm (module preset) files that run your code. Users would simply download your “script-running module” and load a Python script or module preset, and the module would run its custom DSP.

IMO Python is a weird choice because it’s not designed to be an embedded scripting language and will be >20MB unless you arbitrarily remove parts of the standard library. But why not give users a choice of lots of audio scripting languages like Faust, Supercollider, cmusic, ChucK, and generic languages like Lua/LuaJIT, mruby, Javascript (through duktape or MuJS), Wren, etc.


@Vortico Honored to be answered by The Master himself! :slight_smile:

Structurally (regarding inputs, outputs, including the script) this is indeed what I’d like to do.

However, I’m not at all interested in DSP. I think the signal level is well covered by existing modules.

I’m interested in things which I find hard to do with existing modules (maybe I’ve been missing some?), and for which Python is exactly the right level of abstraction IMO:

  • Generative music at “note” resolution (i.e. generating pitch, gate, velocity, per-note CV, no audio rate modulation). Exploring forms of controlled randomness, sequence mutation etc., which are not easy to do with existing modules. (Examples: Mirror a sequence not in time but with respect to pitch. Create a sequence of sequences where each repetition is some form of transformation of the original. Use more or less exotic probability distributions and random processes for creating music.)
  • Larger scale musical ideas, especially around harmony. Name me a module which makes it easy to transpose a motif consistently across modes and scales. Quantizers won’t do that because they round pitch rather than working with scale degrees.
  • Play around with machine learning, statistics and music. That’s a bit exotic, of course, but it’s my professional background and I’m curious.
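As a toy illustration of the scale-degree point: the `Scale` class and interval tables below are invented for this sketch, not an existing module or API, but they show how working in degree space makes consistent transposition (and pitch mirroring) trivial:

```python
# Minimal sketch of degree-based (not quantizer-based) transposition.
DORIAN   = [0, 2, 3, 5, 7, 9, 10]   # semitone steps of the Dorian mode
PHRYGIAN = [0, 1, 3, 5, 7, 8, 10]   # semitone steps of the Phrygian mode

class Scale:
    def __init__(self, intervals, root):
        self.intervals, self.root = intervals, root
    def __getitem__(self, degree):
        # degrees beyond the scale length wrap into the next octave
        octave, step = divmod(degree, len(self.intervals))
        return self.root + 12 * octave + self.intervals[step]

motif = [0, 2, 4, 1]                      # scale degrees, not semitones
a_dorian = Scale(DORIAN, 9)               # root A (pitch class 9)
d_sharp_phrygian = Scale(PHRYGIAN, 3)     # root D# (pitch class 3)

in_dorian = [a_dorian[d] for d in motif]            # [9, 12, 16, 11]
in_phrygian = [d_sharp_phrygian[d] for d in motif]  # [3, 6, 10, 4]
mirrored = [2 * motif[0] - d for d in motif]        # pitch mirror: [0, -2, -4, -1]
```

Because the motif is stored as scale degrees, swapping the scale object transposes it consistently across modes — exactly what a pitch quantizer can’t do, since it only rounds voltages.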

For those things Python is ace due to its accessibility, level of abstraction and APIs. I’m not too much into the specialty audio languages you name (and I hate JavaScript). Python is incredibly mature due to its wide adoption and user base, and there is almost nothing for which you won’t find resources. Unfortunately, it will be necessary to include the standard library, as it contains a lot of the goodies. And then we do have the problems you point out.

Hope that clarifies some things. So far, I’m following the “keep Python external, communicate with OSC” path, which is not perfect but still promising.

Not sure what you mean by this. Everything you describe is a subset of DSP.

What I mean is audio-rate signal processing, which is not what I’m interested in and would not choose Python for, vs. the “score” or “sequencer” level, where you have concepts like notes, scales, chords, BPM, etc. Of course, academically speaking, you can see it all as one, and the one can blend into the other, but from a programming point of view I find some languages easier to work with for one than for the other, due to different levels of abstraction and processing speed.

Not quite what you want but close: I’ve had good results using Rack alongside Sonic Pi, sending MIDI signals from Sonic Pi to Rack. It’s Ruby, not Python, but the idea is similar to what you describe. It gives a very quick iterative process between editing the code and running it, since Sonic Pi is designed for live coding.

Cool idea! That live coding aspect is really interesting. I’m wondering whether I could emulate that with Python via some clever module reloading. What I’m missing with the MIDI approach is the back channel (CV from VCV to my code). No MIDI send (yet).
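The reloading could probably be done with plain stdlib machinery — a rough sketch of the idea (nothing VCV-specific, and `maybe_reload` is an invented helper):

```python
import importlib
import pathlib

# Sketch of the live-coding reload idea: re-import the sequencer script
# whenever its file changes on disk, so edits take effect on the next
# clock tick, Sonic-Pi-style.
def maybe_reload(module, last_mtime):
    """Reload `module` if its source file is newer than `last_mtime`."""
    mtime = pathlib.Path(module.__file__).stat().st_mtime
    if mtime > last_mtime:
        importlib.reload(module)
    return max(mtime, last_mtime)
```

The host loop would call this once per clock tick (or on a timer) before invoking the script’s step function.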

But that OSC solution suggested works really well now! 8 channels in and out is sufficient for my current stuff.

As a first exercise, I used Python to write a “pseudo-polyphonic guitar strum sequencer” that spits out 4-note guitar voicings as short sequential bursts to feed into Rings / Resonator. That sounds really amazing.
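For the curious, the core of such a strum can be just a few lines of Python — a hypothetical sketch with MIDI note numbers and a made-up `strum` helper, not the actual script:

```python
# Sketch: spread the notes of a chord voicing over short, sequential
# bursts instead of emitting them simultaneously (a "strum").
def strum(voicing, pattern, spread_ms=15):
    """Yield (delay_ms, note) pairs; `pattern` holds indices into the voicing."""
    for i, idx in enumerate(pattern):
        yield i * spread_ms, voicing[idx]

# An Am7 voicing as MIDI notes, picked low-to-high and partway back down
events = list(strum([57, 64, 67, 72], [0, 1, 2, 3, 2, 1]))
```

Each (delay, note) pair then goes out over OSC with the given offset, which is what makes a single monophonic voice sound quasi-polyphonic.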

What great news, @dc.schneid! Have you tested the v1 build of Rack? Andrew added a CV-MIDI module recently. Honestly I don’t know whether third-party modules will work in v1 or not, but maybe it can be useful in your experiments.

Haven’t tested v1. With two kids and a job in tech I have little time on my hands, unfortunately.

BTW, as a first mini-project I’ve used TrowaSoft OSC and Python to write a little “pseudo-polyphonic guitar strum sequencer” which sends 4-note guitar voicings and picking patterns as fast bursts to Rings / Resonator. Absolutely gorgeous. I can feed it voicings and picking patterns separately and mess around with both independently. :slight_smile:

Here’s the first bit of music produced with a Python sequencer script, connected via OSC.


Hi david.

I’m definitely interested in generative music through embedding a high level language in a module.

I’m not quite sure I’d pick Python. Have you seen Protoplug? Someone basically embedded Lua (which is small and designed to be an embedded language, but fairly Python-like in its capabilities) in a VST.

I’ve been able to do some good stuff with it, and it’s certainly suitable for algorithmic composition.

Might be possible to extract the engine of Protoplug and adapt it into a VCV module.