This Song Is For My Sisters Who Need to Eat Lunch to Remain Alive
A jam I recorded before lunch, and uploaded during.
I’m kinda re-learning the hard way that just because generative music can evolve forever, it doesn’t have to: you make better techno when you repeat things a lot.
Given that I often perform using a graphics tablet, I wonder which makes for better videos: a zoomed-out rack I don’t have to scroll, or one sufficiently zoomed in that you can follow what I’m doing? Patching virtual cables around makes for good spectacle, but I’m not expecting the audience to keep a mental model of what’s happening, so maybe I should rearrange things to fit a zoomed-out view. I don’t need to be able to read the labels to remember what the knobs and jacks do.
Anyway, this series of songs using a slowly evolving pretend hardware rack is fun. I’m a mediocre keyboardist and only use keys to sequence songs, so modular is about the only way I can be a live performer. With the emphasis on randomization, I have to play along with the random voltages chosen for me, and I steer the song more than I perform it.
As always, patching the same slowly evolving personal setup.
Mind Plasticity is a Psyop. You Can Rotate a Dodecahedron in Your Mind at 80.
Using a step sequencer as an arpeggiator to make a kinda house, sorta beat. I think I’m starting to fall into the trap of always patching my setup the same way, instead of using the super creative techniques you’re forced into when you know you’re constrained by the absurd cost of the hardware.
I think I should discard this setup soon, try doing things that don’t embrace hardware constraints, and fold the techniques I learned and the modules I bought back into sequenced songs without aleatoric elements.
I’ve ditched the fixed rack, at least for now. I’m trying to fold back randomization techniques into more controlled and listenable songs.
Let’s Agitate the Funk in the Laboratory Tonite
This is really just a sketch for a more polished song I have in mind.
Vult’s Dopamine (from the paid Mysteries pack) is the backbone of the aleatoric sequencing here. The Foundry module was used to sequence the loop, but it’s no longer playing notes in what I recorded.
I made a demo song for my newest module, Arcane. The module isn’t in the library yet (just submitted it yesterday), and I’ll post this in the release announcement, but I wanted to share it here too — couldn’t bear to sit on it for a few more days.
Oh, and here’s the patch, if you built/downloaded the beta: Arcane Demo.vcv (208.4 KB)
I’m still trying to come up with my perfect fixed system, so I can really learn how to play it, and map it to external controls without fumbling around trying to remember which button does what. I think it’s close to perfect for me now.
Here’s a quick live jam, entitled (Yoshi voice) Yoshi.
Took a few attempts to get an acceptable performance, and even then I forgot to do a few things I had planned. Some blunders also ruined takes, e.g., holding the mute button too long on my keyboard and soloing instead.
Besides what you see in the vid, a few utilities and I/O are stashed away from the scrollable area, locked with UnDuLaR.
That was an interesting experience. I’m proud that I mostly managed to keep it from sounding like horrible noises.
Even if it was just a handful of people, knowing there’s an audience immediately changes how I interact with the instrument: there’s no more muting channels to isolate problems. Not knowing my setup all that well, I was often doing things without being quite sure they had any effect at all, which made for a very conservative performance. Still, it gave me a lot of thoughts on how to improve at this. I’ll be doing more streams once in a while.
Yeah, I was still uploading it to YouTube! I added it to the post.
It seems there’s supposed to be a tool to link Twitch and YouTube accounts, but I didn’t see it on mine… maybe it’s a service you have to be grandfathered into, with Amazon and Google and the other feudal masters always having little proxy wars at our expense. It takes forever to upload it yourself.
One thing I couldn’t show in the video above but might be of interest is how I integrated Reason 11 into the performance. See also my dedicated thread about this:
First, the Kong drum machine. I’m hosting it in Nektarine, since it’s the most reliable of the hosts for instruments. Very straightforward setup: I send it notes, and it outputs to two stereo pairs (kick/snare, and hi-hat/percs).
The other one is more complex. I use Elements, since I need to process audio from VCV (Nektarine can’t do that), and inside that Elements I have two Reason instances (the Reason VST has only two pairs of audio inputs, so I need two instances).
The second instance contains the “master” section, and by master I mean I just squash this mess with a brutal limiter. It also contains a Neptune, to auto-tune incoming microphone audio, with the industry-standard Cher effect to sound like a vocoder. It’s then recorded and live-looped in VCV’s Luppolo3 for texture.
Look at this! I just found your stuff while looking for fixed rack examples and philosophies. Very interesting. So are you still using a fixed rack? Maybe you expanded the one you started with?
Sure do: it’s the same evolving setup I’ve used every so often for half a year. It has gone through many changes since the start. I swap things in and out all the time, so it barely resembles how it started out. It’s the one I use in my occasional streams (see a few posts above).
Mainstays are everything Vult, GTG mixers, BPM LFO, all those 3hp Bogaudio utilities, Erica Black Wavetable VCO, Plaits, Impromptu Clocked, and of course, the Turing Machine.