My patchy dirges

This Song Is For My Sisters Who Need to Eat Lunch to Remain Alive

A jam I recorded before lunch, and uploaded during.

I’m kinda re-learning the hard way that just because generative music can evolve forever, it doesn’t have to; you make better techno when you repeat things a lot.

Given that I often perform using a graphics tablet, I wonder which makes for better videos - a zoomed-out rack I don’t have to scroll, or one zoomed in enough that you can follow what I’m doing? Patching virtual cables around makes for good spectacle, but I’m not expecting the audience to keep a mental model of what’s happening, so maybe I should rearrange things to fit a zoomed-out view. I don’t need to be able to read the labels to remember what the knobs and jacks do.

Anyway, this series of songs using a slowly evolving pretend hardware rack is fun. I’m a mediocre keyboardist and only use keys to sequence songs, so modular is about the only way I can be a live performer. With the emphasis on randomization, I have to play along the random voltages chosen for me, and I steer the song more than I perform it.

It’s really interesting to watch your mouse pointer search for different sounds :slight_smile:

Trying out the “everything on a single screen” approach, making the setup smaller every time, haha. There’s still one row you can’t see but it’s mostly I/O and visualizers.

Here’s a terribad Christmas Song :^) :

The current fixed rack, if you’re curious - you’ll need the paid Vult modules to make it work, and a module in development was replaced with a blank plate - 2019-12-24B2.vcv (2.5 MB)

Starting to get a really stable selection of devices I understand; it’s beginning to feel like a personal instrument.

Still trying to get the most out of my pretend hardware system:

!!EMERGENCY SUPERSAWS!!

When using VCV it seems I often end up with sounds that are very harsh on headphones, but fine on monitors. I went for an aggressive sound, but on headphones that bass is way too much.

As always, patching the same slowly evolving personal setup.

Mind Plasticity is a Psyop. You Can Rotate a Dodecahedron in Your Mind at 80.

Using a step sequencer as an arpeggiator to make a kinda house sorta beat. I think I’m starting to fall into the trap of always patching my setup the same way, instead of using the super creative techniques you’re forced into when you know you’re constrained by the absurd cost of real hardware.

I think I should discard this setup soon, try out doing things that do not embrace hardware constraints, and fold back techniques I learned & modules I bought into sequenced songs without aleatoric elements.

I’ve ditched the fixed rack, at least for now. I’m trying to fold back randomization techniques into more controlled and listenable songs.

Let’s Agitate the Funk in the Laboratory Tonite

This is really just a sketch for a more polished song I have in mind.

Vult’s Dopamine (From the paid Mysteries pack) is the backbone of the aleatoric sequencing here. The Foundry module was used to sequence the loop, but it’s no longer used to play notes in what I recorded.

I don’t want to make a thread for my modules before I have more than a tiny split module, but I wanted to share an incoming feature: sort by voltage.

I have no clue whether it’s any use! I have no use case in mind whatsoever!!!

But notice how it’s altering the timbre of the patch when I turn splitting on and off? SCIENCE. I have no idea how it works, by the way.

Next step in the split and merge series: splits and mergers that can daisy chain the sort order of the first module in the chain - one possible use would be to sort V/OCT and Gates together.
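If anyone’s curious what daisy-chaining a sort order would mean under the hood, here’s a rough standalone C++ sketch of the core idea (plain vectors instead of the actual Rack API, and the function name is made up): sort the leader channels, then reorder the follower channels by the exact same permutation.

```cpp
#include <algorithm>
#include <numeric>
#include <utility>
#include <vector>

// Hypothetical sketch of a "daisy-chained sort": sort one set of
// polyphonic channel voltages (the leader, e.g. V/OCT) ascending,
// and reorder a second set (the follower, e.g. Gates) by the same
// permutation, so each pitch keeps its matching gate after sorting.
// A real Rack module would read/write these per-sample; plain
// vectors keep the sketch standalone.
void sortLinked(std::vector<float>& leader, std::vector<float>& follower) {
    // Build the index permutation that sorts the leader: 0, 1, 2, ...
    std::vector<size_t> order(leader.size());
    std::iota(order.begin(), order.end(), 0);
    std::stable_sort(order.begin(), order.end(),
        [&](size_t a, size_t b) { return leader[a] < leader[b]; });

    // Apply that one permutation to both channel sets.
    std::vector<float> sortedLeader(leader.size());
    std::vector<float> sortedFollower(follower.size());
    for (size_t i = 0; i < order.size(); i++) {
        sortedLeader[i] = leader[order[i]];
        sortedFollower[i] = follower[order[i]];
    }
    leader = std::move(sortedLeader);
    follower = std::move(sortedFollower);
}
```

Nothing Rack-specific in there; the whole feature is really just an argsort on the first module with the resulting indices shared down the chain.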

I’ll make a real announcement thread once it’s more than a lone learning module, promise.

Loving the sort buttons, they seem to add a sweet amount of space and/or variation to poly lines :+1::+1::+1:

The module isn’t in the library yet (just submitted it yesterday), but I made a demo song for my newest module, Arcane. I’ll post it in the release announcement, but I wanted to share it here too, couldn’t bear to sit on it for a few more days.

Oh, and here’s the patch, if you built/downloaded the beta: Arcane Demo.vcv (208.4 KB)

I’m still trying to come up with my perfect fixed system, so I can really learn how to play it, and map it to external controls without fumbling around to remember what button does what. I think it’s close to perfect for me now.

Here’s a quick live jam, entitled
(Yoshi voice) Yoshi.

Took a few attempts to get an acceptable performance, and even then I forgot to do a few things I had planned. Some blunders also ruined takes, e.g., tapping the mute button too long on my keyboard and solo-ing instead.

Besides what you see in the vid, a few utilities and I/O are stashed away from the scrollable area, locked with UnDuLaR.

Might start doing semi-regular streams sometime, like all the cool kids. Would be that kinda slow burn weirdo rando techno with a few horrific unplanned noise interludes.

More from the fixed rack. Added a new row to it since last time, to make room for my friend Mog’s sequencer, which drives this entire jam entitled

Just :honeybee::cool::dog:

Trying out that live streaming thing right now! I have never done this before and have little clue what I’m doing; trying to learn how to perform my new and improved fixed rack.

And it’s over! Thanks for coming.

That was an interesting experience; I’m proud I mostly managed to keep it from sounding like horrible noises.

Even if it was just a handful of people, knowing there’s an audience immediately changes how I interact with the instrument; there’s no more muting channels to isolate problems. Not knowing my setup so well, I was often doing things without being quite sure they had any effect at all, which made for a very conservative performance. Still, it gave me a lot of thoughts on how to improve at this. I’ll be doing more streams once in a while.

Here’s the replay:

twitch: https://www.twitch.tv/videos/586045054
youtube: https://www.youtube.com/watch?v=3v5JTpkD6pU&feature=youtu.be

It was fun. Also note that replays only last about 20 days so that link will eventually point nowhere.

Yeah, I was still uploading it to youtube! I added it to the post.

It seems there’s supposed to be a tool to link Twitch and YouTube accounts, but I didn’t see it on mine… maybe it’s a service you have to be grandfathered into, with Amazon and Google and other feudal masters always having little proxy wars at our expense. Takes forever to upload it yourself.

In the Twitch video manager there’s a submenu that lets you edit, download, etc. - the export option is what you want.

Another little live stream starting now! As always, minimal preparation; we’ll see how it turns out.

Edit: The stream is over! Here’s the replay on Youtube:

It went into pretty harsh, glitchy, dissonant territory this time. A few computer performance issues on this one, so I stopped the stream after an hour. Sorry for the few video freezes.

Here’s also a short excerpt on Soundcloud if you prefer:

One thing I couldn’t show in the video above but might be of interest is how I integrated Reason 11 into the performance. See also my dedicated thread about this:

First, the Kong drum machine. I’m using Nektarine to host it, since it’s the most reliable of the hosts for instruments. Very straightforward setup: I send it notes, and it outputs to two stereo pairs. (Kick/Snare, and Hi-hat/Percs)

The other one is more complex. I use Elements, since I need to process audio from VCV (Nektarine can’t do that), and inside that Elements instance I have two Reason instances (the Reason VST has only two pairs of audio inputs, so I need two instances).

The first instance processes the blue and orange buses from the GTG mixers. The blue bus is always set to pre-fader and the orange one to post-fader.

The blue bus mangles incoming audio, picking a different effect each 1/4th note.

The orange bus is the standard chorus > delay > reverb.

The second instance contains the “master” section, and by master I mean I just squash this mess with a brutal limiter. It also contains a Neptune, to auto-tune incoming microphone audio, with the industry-standard Cher Effect to sound like a vocoder. It’s then recorded and live looped in VCV’s Luppolo3 for texture.

Look at this! Just found your stuff while looking for fixed rack examples and philosophies. Very interesting. So are you still using a fixed rack? Maybe you expanded the one you started with? Cheers

Sure do - same evolving setup for half a year now, which I use every so often. It’s gone through many changes since the start; I swap things in and out all the time, so it barely resembles how it started out. It’s the one I use in my occasional streams (see a few posts above).

Mainstays are everything Vult, GTG mixers, BPM LFO, all those 3hp Bogaudio utilities, Erica Black Wavetable VCO, Plaits, Impromptu Clocked, and of course, the Turing Machine.
