Python scripted generative Plugin? Looking for feedback, collaborators

Interesting idea, and thanks for pointing out Lua. Seems like it really shines when it comes to ease of embedding, which is known to be a nightmare with Python.

I never paid much attention to Lua as a language, and looking at examples, a lot of the stuff feels quite weird. This is by no means objective, of course, as I work with Python almost every day and don’t recognise its quirks any more. That said, I’ve done generative music in Python, and it was an extremely smooth experience, especially thanks to things like generators, the random package and the vast landscape of libraries. I’d love to convince a few people to give it a try, as most seem to consider it exotic.
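For anyone curious what that smoothness looks like, here is a minimal sketch of a generative pattern built from a generator plus the random module (the scale and the random walk are my own toy choices):

```python
import random
from itertools import islice

# Notes of a C major scale as MIDI note numbers (a toy setup of my own).
C_MAJOR = [60, 62, 64, 65, 67, 69, 71, 72]

def random_walk(scale, start=0, max_step=2):
    """Infinite generator: steps randomly through the scale forever."""
    i = start
    while True:
        yield scale[i]
        i = min(len(scale) - 1, max(0, i + random.randint(-max_step, max_step)))

melody = list(islice(random_walk(C_MAJOR), 16))  # take the first 16 notes
```

Because the generator is lazy and infinite, you can keep pulling notes from it for as long as the patch runs.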

Given the feedback and alternatives in this thread so far, I probably won’t follow up on the original idea. If I do start a module of my own, it would probably be a fork of TrowaSoft’s OSC module, which so far provides a reasonably smooth bridge to Python.

didn’t see this before, but I released a module about 4 months ago that uses duktape to embed javascript into a prototyping tool, with inputs, outputs, etc:

it’s not fast, and not something I’d recommend other than to play with, but I figured I’d at least mention it.

I just found out yesterday there was a forum (and more importantly this dev subforum) and I’m super happy this is the first thing I see!

I’m Python… capable (I’m a grunt in the film business by trade and code for pleasure and pain, but Python is my main language and I’ve been using it for about 12 years now).

This is one of the first things I wanted to try, so I can’t wait to mess with it.

I first got the idea in a more concrete sense when I saw this:
https://monome.org/docs/modular/teletype/

so hello forum, glad this is a thing that exists

@paulagostinelli Unfortunately you are probably the first one on the forum (besides me) who finds the idea exciting. In light of the feedback, I most likely won’t build that module. But you’ll find some nice workarounds for connecting Python to Rack in this thread. Using Python to send MIDI is an alternative too, if you don’t need a back-channel (from Rack to Python).

I think it would be more useful if the scripting side can process audio data as well. I did a quick hack to use Lua for example to create a mixer and an echo:

You can even create your own input/output knobs and labels from within the Lua script. It still needs knobs, but then you could prototype full modules. Lua is fast enough to run in the audio thread; the echo example needs less time than a Fundamental VCO module.

Python is fast enough as well, if you process the data in blocks and use fast libraries like NumPy. This adds a little latency, but that shouldn’t be a problem for most modules. Python can even create audio in realtime. See this example:
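The linked example isn’t reproduced here, but as a rough stand-in (all names and parameter values below are my own, not from that example), per-sample FM synthesis in pure Python can look like this:

```python
import math

SR = 44100  # sample rate

def make_fm(carrier=220.0, modulator=110.0, index=2.0):
    """Return a per-sample 'sample' function: classic two-operator FM
    (phase modulation), one call per audio frame."""
    phase = mod_phase = 0.0
    def sample():
        nonlocal phase, mod_phase
        out = math.sin(phase + index * math.sin(mod_phase))
        phase += 2 * math.pi * carrier / SR
        mod_phase += 2 * math.pi * modulator / SR
        return out
    return sample

gen = make_fm()
frame = [gen() for _ in range(SR // 100)]  # 10 ms of audio, sample by sample
```

Calling a Python function once per sample is exactly the pattern discussed below, so it gives a feel for where the CPU time goes.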

It needs about 12% CPU on my PC to create an FM-modulated signal, without much optimisation, calling the “sample” function once per sample. I guess with something like PyPy it would need much less time, and it might be possible to implement the “process” call even for more complex modules entirely in Python.
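The block-based NumPy approach mentioned above could be sketched like this (block size and the soft-clip effect are my own choices, not tied to any Rack API):

```python
import numpy as np

BLOCK = 256  # samples per block; bigger blocks cut Python overhead but add latency

def process_block(x, gain=2.0):
    """Gain plus tanh soft clipping, vectorized over a whole block,
    so the per-sample arithmetic happens inside NumPy rather than Python."""
    return np.tanh(gain * x)

# Simulate a stream arriving block by block:
t = np.arange(4 * BLOCK) / 44100.0
signal = np.sin(2 * np.pi * 440.0 * t).astype(np.float32)
out = np.concatenate([process_block(signal[i:i + BLOCK])
                      for i in range(0, len(signal), BLOCK)])
```

The Python interpreter is entered once per block instead of once per sample, which is where the speed-up comes from.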

I’ve been working on a product for Reason (started with v9) but never released it. It uses Reason’s Remote codec and Reason’s feature to export all remotable items (knobs, matrix displays … and whatever UI you can interact with) into a CSV-like file. I’ve been parsing this and creating a DSL (domain-specific language) to interact with Reason’s Remote codec, controlling the devices over MIDI and receiving MIDI input. You’ll find some videos over here:

I’ve been able to create UIs in Reason and wire in the script. The scripting is external, as a Jupyter Notebook, and sends/receives MIDI messages. I would totally be willing to work on something like this and release it for Rack. Does Rack or the extension offer some model to describe what the buttons are called? What I’d like to see in Rack is a way to uniquely address a module by an ID or path (similar to OSC, or exactly with OSC). Something like an API I can call. Which could be MIDI, and then on top something that can list the properties of a module and all the modules loaded.

I’ve also been thinking about releasing my work as open source (I already got the agreement from Propellerhead … sorry, now Reason Studios … that releasing my Python-based Remote codec as open source is fine), but never did. Mainly because I haven’t been clear whether I want to monetise this, and because I wasn’t satisfied with the code quality.

@Vortico is it possible to have an RPC or API that can list devices? Or a websocket, or some other low-latency publish/subscribe for when modules get created/deleted? Or an endpoint returning a JSON description of what the module looks like?

My idea is something like this:

Then I could access the two macro oscillators like /myprojectname/macro-oscillator-2/1/frequency

My Python script (after some setup and initialization) would look like this:

m1 = vcvrack.projects.myprojectname.macro_oscillator_2[1]
m2 = vcvrack.projects.myprojectname.macro_oscillator_2[2]
m1.frequency = kHz(32.103)

# or, when using MIDI:
m1.frequency = 103  # whatever value is appropriate

I’ll dive into the code when I have some time, or wait to see whether one of the devs here on the board can give me the proper answer. I’d love to contribute something back to VCV.

To be clear: I can totally use the existing MIDI approach, but it would be a lot of setup work upfront. Maybe there is a good approach, or an already existing one, that declares in YAML, JSON, XML or whatever parseable structure what a module looks like.

Sending MIDI is easy using the integrated MIDI modules. For my test I’ve been using MIDI-MAP.

from random import randrange

from rtmidi import MidiOut

def openport(port_class, classname="Script", port=1):
    """Open the named port if Rack already exposes it, otherwise create a virtual port."""
    midi = port_class()
    available_ports = midi.get_ports()
    port_name = f'{classname} {port_class.__name__} {port}'
    if port_name in available_ports:
        midi.open_port(available_ports.index(port_name))  # open the matching port, not port 0
    else:
        midi.open_virtual_port(port_name)
    return midi

rack = openport(MidiOut)
cc_channel_1 = 0xB0  # status byte: Control Change on MIDI channel 1

# send a random 7-bit value on CC 0, e.g. mapped to a knob via MIDI-MAP
rack.send_message([cc_channel_1, 0, randrange(128)])

With some meta-programming and a model of the modules, it could read as suggested:

macro_oscillator.frequency.rand

But this is just a simple example. I also had a look into the core code. Python 3.8 assignment expressions could be used to create routing from in-ports to out-ports. Just another way of scripting setups :wink:
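As a hedged sketch of that meta-programming idea (the module name, CC map and the `rand` property are all invented here, and messages are collected in a list instead of being sent to a real MIDI port):

```python
import random

class Param:
    def __init__(self, send, cc):
        self._send, self._cc = send, cc
    def set(self, value):
        self._send([0xB0, self._cc, value])  # Control Change on MIDI channel 1
    @property
    def rand(self):
        self.set(random.randrange(128))      # random 7-bit CC value

class Module:
    """Attribute access is resolved against a CC map, so parameters
    read like plain Python attributes."""
    def __init__(self, send, cc_map):
        self._send, self._cc_map = send, cc_map
    def __getattr__(self, name):
        return Param(self._send, self._cc_map[name])

sent = []  # stand-in for rack.send_message
macro_oscillator = Module(sent.append, {"frequency": 0, "timbre": 1})
macro_oscillator.frequency.rand  # emits [0xB0, 0, <random 0..127>]
```

Swapping `sent.append` for a real `send_message` would turn the model into an actual controller.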

I am very interested in this as well!

I created a Python composition environment called pyComposition. It is a real-time Python environment for parametric music composition, modeled on Common Music. It supports real-time MIDI output (rtmidi) and Csound.

The main idea in developing this was to explore algorithmic composition and develop a library for behavioural and process music. It works great sending MIDI to vcvrack. It would not take much to add Open Sound Control support to this. I would also like to integrate real-time DSP audio packages like pyo to create audio in a Python environment.
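For the OSC route, a library like python-osc would be the practical choice; just to show how simple the wire format is, here is a hand-rolled single-float message using only the stdlib, addressed with the path scheme suggested earlier in the thread:

```python
import struct

def osc_message(address, value):
    """Encode an OSC message carrying one float: null-terminated address,
    type-tag string ',f', then a big-endian 32-bit float, with the two
    strings padded to 4-byte boundaries."""
    def pad(b):
        return b + b"\x00" * (4 - len(b) % 4)
    return pad(address.encode()) + pad(b",f") + struct.pack(">f", value)

msg = osc_message("/myprojectname/macro-oscillator-2/1/frequency", 440.0)
```

The resulting bytes could be handed to any UDP socket pointed at an OSC receiver module in the rack.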

It would be very cool to be able to control routing and patching through some python interface. Live coding and control of vcvrack is an interesting idea as well.

indeed! I need to take a look at this.

Beware, I’m probably completely missing the point here and cutting lots of corners, it’s just a thought.

Stoermelder’s 8Face can kind of take control over any module. What if you added sockets to such a module, for example NNG https://github.com/nanomsg/nng? Then every (scripting) language that is supported by NNG, or has bindings for it, could be used to control any module.

A new (experimental) module could consist of just a faceplate, inputs, outputs and parameters. Put the scripting module next to it and control it all with Python et al.

You wouldn’t need to embed anything. Python et al. could run in another process or even on another box.
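As a stdlib stand-in for what NNG would offer (NNG adds pub/sub and req/rep patterns plus bindings for many languages), a plain UDP socket already shows the decoupling; the message format below is invented:

```python
import socket

# The "module" side: listen for control messages on a UDP port.
server = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
server.bind(("127.0.0.1", 0))      # port 0: let the OS pick a free port
server.settimeout(5.0)
addr = server.getsockname()

# The "script" side: could just as well run on another box.
client = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
client.sendto(b"set /8face/slot 3", addr)   # invented message format

data, _ = server.recvfrom(1024)
client.close()
server.close()
```

Replace the raw socket with an NNG pair or pub/sub socket and any supported language gets the same access for free.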

That would do the trick. I suggested something similar with an API or TouchOSC host, but anything low-level like NNG would be perfect. The rest could be built on top. The only thing necessary would be some naming/addressing schema, or showing the internal module ID in order to know which module to address.

What you are talking about is a way to script vcvrack by wrapping the C++ module creation/management API with a high-level language (e.g. Python). This is theoretically doable, but no doubt quite a lot of work.

The best use-case would be to run vcvrack in a GUI-less “headless” mode like libpd, which is Pure Data’s under-appreciated killer app. I wrapped libpd once in Cython in an experimental burst and was reasonably happy with the result. But because you are dealing with realtime audio programming, it can be quite tricky to navigate the differences in the threading models.

Another difficulty of embedding Python is, as Andrew said, that it’s a relatively large library (so maybe you shouldn’t embed it, and instead make the scripting a Python module). Another point against embedding is that it also requires special care to navigate the Apple/Windows packaging requirements, which are a soul-destroying pain. I learned this the hard way in my ongoing attempt to embed a Python interpreter in Max/MSP via a Max external.

So yes, it’s doable. See Cython and pybind11 if you wish to dig deeper.

Ultimately, if there’s a native OSC interface that can allow for rack setup, module creation and connections, then that would be best.

Are there really that much more possibilities in using Python over the languages that are offered by VCV Prototype?

Not really. But I wasn’t actually suggesting using it to build or prototype modules. My inclination was to use it to script vcvrack itself, but then again, it’s not a walk in the park to embed, for all the reasons mentioned in my earlier post.

Perhaps a minimal language designed for embedding, like Lua, would be more appropriate in the end.

The rack holds modules. Modules exchange data over cables. If you want to bring in some programmable logic, you have to use one or more modules, or send data to I/O-modules using MIDI and/or OSC and/or audio/CV signals.

One way is to use a scripting language inside a module, another way is to run the scripts outside the rack and generate MIDI and/or OSC and/or audio/CV signals.

The more I think of that, the more I would suggest using SuperCollider https://supercollider.github.io
