Thanks for sharing your music and the details about using Reason with VCV Rack. The screenshots are awesome, and I agree that the synths sound great. It will be fun to hear other things you do!
Thanks for the nice comments!
Today I spent some time making a voice-controlled song (plus getting TouchOSC rigged up to VCV). Here’s a live jam of it:
Donated my Upload to the Algorithm so my Soul will be Reborn as a Powerful Vocaloid
(please get used to the long and stupid song titles, it is a thing i do)
This one only uses VCV. The patch uses the microphone, detects the current note, coerces it to the C minor scale, makes a chord from it, vocodes the note rhythmically, and plays the chord.
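For anyone curious how that pitch-coercion step could work, here’s a minimal Python sketch of the idea (my own illustration, not the actual patch logic — the function names and tie-breaking behavior are assumptions):

```python
# Sketch of the pitch-coercion step: snap a detected MIDI note to the
# C minor scale, then stack a diatonic triad on it. Illustrative only.

C_MINOR = [0, 2, 3, 5, 7, 8, 10]  # semitone offsets of C natural minor

def quantize_to_c_minor(midi_note):
    """Snap a MIDI note to the nearest pitch in C minor (ties round down)."""
    octave, pitch_class = divmod(midi_note, 12)
    nearest = min(C_MINOR, key=lambda p: abs(p - pitch_class))
    return octave * 12 + nearest

def triad_from(midi_note):
    """Build a diatonic triad by stacking scale thirds above the root."""
    root = quantize_to_c_minor(midi_note)
    octave, pc = divmod(root, 12)
    degree = C_MINOR.index(pc)
    return [octave * 12 + C_MINOR[(degree + step) % 7] + 12 * ((degree + step) // 7)
            for step in (0, 2, 4)]

print(triad_from(60))  # C minor triad: [60, 63, 67]
```

In the patch itself the same idea happens in CV, of course, but the shape of the computation is the same: quantize, pick scale degrees, output a chord.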
Here’s the patch, if you wanna try it out: https://patchstorage.com/vocoder-song-control-the-entire-song-with-the-mic/
If you try it out, I’m still very new to VCV: any idea how to simplify it? I might be doing things the hard way. Suggestions welcome.
It’s a super fun patch and video. It’s hard to believe that you’re new to VCV Rack. Again, I’m excited to see what you come up with next.
Thank you! Well, I have a lot of experience with Reason, and while it’s a studio rack metaphor, a lot of similar CV techniques are possible (although many users don’t really explore them) so I could get started quickly.
Another track I finished today:
Freshest Blingees in the Malaise Game
Another song using both Reason and VCV. This one is very light on the patching tricks and has no generative behaviors. Once again, just a stack of four synth voices. I focused on making those voices expressive, using velocity, mod wheel and pitch wheel.
The pitch bend wheel gave me a bit of trouble. Is there a standard way to implement pitch bend in VCV? I’m just summing the V/OCT and a scaled-down PW input, and I had to use a tuner to discover the magic value for five semitones (9.129%). I know the modular crowd isn’t a huge fan of the whole 12-TET thing, but sometimes it’s comforting to have old-school western pitches, you know.
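For what it’s worth, here’s the back-of-the-envelope arithmetic under VCV’s 1 V/oct standard, assuming the MIDI-CV pitch-wheel output swings ±5 V at full bend (an assumption — check your module). A perfectly linear attenuator would land around 8.33% for five semitones, so if the tuner says otherwise, the knob’s taper may not be linear:

```python
# Pitch-bend attenuation math at 1 V/oct. Assumes the PW output reaches
# ±5 V at full bend (an assumption; verify against your MIDI-CV module).

SEMITONE_V = 1.0 / 12  # one semitone in volts at 1 V/oct

def bend_scale(semitones, pw_range_v=5.0):
    """Linear attenuation factor for a given pitch-bend range."""
    return (semitones * SEMITONE_V) / pw_range_v

print(f"{bend_scale(5) * 100:.3f}%")  # prints "8.333%"
```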
There is some amount of subtle randomization happening in the Reason player devices (e.g. the bass notes during the break are 100% random), so I ensured that the video I took was also the final rendering take. Because of the loopback setup I use, the VCV parts cannot be rendered offline, they have to be frozen to audio first. So if the computer can’t keep up… welp.
I’m glad that VCV names and shames CPU-hungry modules, because sometimes you plop in the most straightforward utility there is, like an LFO, then you turn on the meters and see it takes 5% CPU. I wish Reason did the same with its Rack Extensions; a shame they have a financial incentive not to do that.
(After rendering it, I noticed that for some reason one modulation source stopped working - I had to relaunch VCV a few times while writing the song for similarly mysterious reasons. Ah well, it didn’t affect the song that much, and I plan to fix the mix a bit to tame the bells anyway. Worth remembering to always restart VCV before rendering a song.)
I feel kinda weird neglecting all of VCV’s potential by using it for more traditional songwriting, but hey, it’s its own fault for competing with all my other synths. At this point, another paid Rack Extension or VST synth would be an almost impossible sell for me: if I want to dial in a preset quick I have a lot of workhorse synths (Antidote, Expanse, Thor, Europa), but if I want to craft a new sound I’ll reach for VCV almost every time now.
Tonight I patched up and performed a quick little live techno-ish jam, completely different from the previous song I posted. There, I used VCV as a collection of tamed synth voices in a sequencer; in this one, I embrace randomization fully.
Put Your Hands Up for Staying Home Tonite
I bought the main Vult plugin today, and thought it’d be fun to use its many rude filters to make some techno-ish thing with the resonance cranked to the max. I’m using a 15-year-old garbage MIDI controller whose knobs no longer react properly and send wobbly data, but it just adds to the character of the song.
I mostly see this kind of experimentation as something that will enrich my sequenced songs, but I think if I want to continue performing with VCV I should really make myself a standard system, a fixed rack I learn inside-out, map to my MIDI controllers and TouchOSC, something I can perform without the mental overhead of remembering where to find the controls I need.
For the last few days, I’ve started to put together my own virtual modular system, my very own instrument, something that feels like I crafted it rather than a one-off patch, something I can learn and perform live.
It seems most people make new songs with VCV starting with an empty rack they fill with what they need and nothing more, while people who use modular hardware have setups that evolve slowly, limited by the money they can throw at their hobby. I’m not big into formalism and genre essentialism, so I think any approach that yields fun results is fine, really. And since I came to VCV from Propellerheads’ Reason, the first approach makes a lot of sense to me.
I’m young enough that software is the real thing to me. Software is what I’d rather use. I’m not using VCV as a stepping stone to buying €18954 of inferior gear with patch cables my dog would chew up, and no total recall.
But I think it’s also fun to craft your own modular system: make difficult choices to put up a system that’s versatile but small enough to wrangle, learn it in-depth, know it intimately, push it to its limits… So I decided to do that in the virtual world, and to also map it to TouchOSC so I can control it from phones and tablets, in addition to my MIDI controllers.
So, it’s still a work in progress, but here’s my system! It’s limited by my computer specs (takes 50% CPU when idle) and by the complexity of navigating it, but it’s hopefully versatile enough to get a lot out of. It’s highly focused on randomization and on slow, evolving techno jams. No serious sequencer: if I want one, I’ll grab other software rather than do it in the rack.
Here’s a pic of it:
(I don’t trust this forum to let you see the full size pic, so here’s an alternative at 100% zoom: https://aria.dog/upload/2019/10/system.jpg )
It’s not set in stone yet, or ever, so feel free to get indignant I didn’t add a module you consider essential!
And here’s two random TouchOSC pages, so I can perform it from tablets:
I haven’t mapped it all, but I started jamming with it, and I like the results! Here’s a quick excerpt of my experiments with it yesterday, improvised live:
As I have two sound cards, I’ve also rigged up a headphone mix, so I can preview things before I cue them up; hopefully I can use that to make smooth-sounding sets. Excited to finish mapping it to the touch controller, learn how to operate it in depth, and try to get fun performances out of it!
I’m in the former camp, but I suspect that if I had any kind of controller I’d move towards the latter. So much of what we do in the rack is creating systems that decide which notes to play; when you’re personally playing the notes (or using a hardware sequencer), the rack becomes more of a sound source.
But the Stars but the Stars but the Stars
A little experiment with PdArray, trying to turn it into a scratchable DJ turntable. Can’t say I’m very happy with my results, but the building blocks of a fun idea are there.
Scratching works better with the mouse than with touch control, honestly, but since I went through the trouble of making a TouchOSC template, welp.
Samples are from a public domain recording of a public domain book.
The video is desynced a bit from the audio. Sorry about that! It should be somewhat in time, still. Sample rates are a pain.
This is the core of the patch: multiple strips trying to imitate a real pair of turntables, with knobs for scratching and buttons to mute, spin up and spin down.
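As a toy model of the scratch behavior (purely illustrative — PdArray’s internals surely differ), you can think of the knob as setting a target position in the sample buffer, with the playhead chasing it through a slew, so wiggling the knob plays the audio back and forth at knob speed:

```python
# Toy model of knob-driven scratching: the knob (0.0..1.0) sets a target
# position in the sample buffer; the playhead chases it with a one-pole
# slew, so knob wiggles scrub the audio forwards and backwards.
# Illustrative only, not PdArray's real implementation.

def scratch_playhead(knob_positions, buffer_len, slew=0.5):
    """Yield playhead sample indices as the knob moves."""
    playhead = knob_positions[0] * buffer_len
    for knob in knob_positions:
        target = knob * buffer_len
        playhead += (target - playhead) * slew  # simple one-pole slew
        yield int(playhead)

print(list(scratch_playhead([0.0, 1.0, 1.0], 100)))  # [0, 50, 75]
```

The slew is what keeps a fast knob flick from sounding like a hard jump; in the patch this role is played by a slew limiter between the controller and the array’s position input.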
I’ll distribute the patch if there’s any interest but I’d rather see someone implement the idea more cleanly than I did.
So, the “fixed rack” experiment I posted about a few posts above turned out interesting… but too CPU hungry not to crackle all the time.
Also, I found that using TouchOSC is a bit hit or miss. Bidirectional sync has issues, and the layout editor is not user-friendly, so it takes forever to get anything done. In the post above, you can see how it’s more fun than it is practical (fun is important too, of course).
Plus, the headphone mix thing didn’t turn out so well: I don’t have the mental bandwidth to create an interesting performance while listening to what I will cue up next.
Since then, I got a chance to set up my MIDI controller again, and I’ve started using a graphics tablet as my main display. Of course, I didn’t acquire it for music, but half the fun of the whole multimedia artist racket is using the wrong tools for the job. I posted a video where I use it to perform a song as my entry to VCP-44.
So I’ve been making a new setup, still very focused on live generative jams, that makes use of Reason, and plays to the specific strengths of both racks. Reason mostly provides the effects, the MIDI processing, the drum sounds, and the MIDI clock. VCV provides the rest.
One stylus button swaps the left click to a middle click using Autohotkey, the other button swaps between the two racks. The AHK script is ultra-specific to my needs, but someone somewhere might find this snippet useful: it makes VCV borderless fullscreen rather than true fullscreen, which makes alt-tabbing out of it instant instead of taking a second or so:
#if WinActive("ahk_exe Rack.exe")
F11::
    WinSet, Style, ^0xC00000
    WinMove, , , -10, -10, 1945, 1110 ; Edit to fit your resolution
    return
The general setup idea is as follows:
It’s really cool: since I started sharing my experiences here, 4 people have implemented things in the VCV world based on my suggestions, or on problems I mentioned encountering. It encourages me to push my complex rigs to their limits. Hopefully someone will get something useful out of my documenting these experiments.
Now, to learn how to use this system and rehearse it a bit before I record anything with it.
Here’s a quick demo of the system I posted about!
Stressful Hi-Fi Clown Beats to Clown To
More of a feature showcase than a coherent song, but crafting this touch-controlled setup was really time consuming so I wanted to just press record, jam whatever, and show that it actually works now.
Ran into a lot of trouble: audio drivers not playing nice together, OBS refusing to record things, CPU consumption out of control (had to remove lots of “nice to have” devices).
I think I’ll treat the files as if they were a hardware setup: just save them as-is when I’m done playing and open them back how I left them, instead of treating them as templates. Let’s see how it works out.
Happens a lot here, hehe. With the DirectSound driver, OBS records non-existent hiccups during the performance, and with WASAPI everything sounds much quieter.
Anyway, nice song, definitely clowned to it. Btw, am I crazy or did I hear a Yoshi sample in there?
Yeah, before swapping my drivers to Voicemeeter Banana I was used to restarting my drivers all the time… Got a trusty little .bat file that just goes
net stop audiosrv
net start audiosrv
and it saw frequent action.
I’m setting up Voicemeeter to output to VB-Cable as a secondary output to be consumed by OBS, so that I can use ASIO for performance on the virtual buses. I have to force the sample rate and the buffer size, or else it seems to pick a different value each reboot. With this method OBS seems to work reliably.
More initial setup than ASIO4All but much less crashy. Worth trying out for anyone doing a similar virtual bus setup.
Standard part of my drum kit. Meticulously sampled from a genuine vintage copy of Mario Paint using professional studio recording gear.
Rather than make a big fancy announcement thread for now, I’ll just mention it in my music thread - I have descended further into patchy heck:
Just one module for now - a polyphonic split and merge. Nothing new, but the first one to offer both features in only 3HP.
It lives at https://github.com/AriaSalvatrice/AriaVCVModules and if I didn’t mess up anything it should be available in the library soon.
24 hours ago I knew nothing about modern C++ and operating Inkscape, but I tried to do everything by the book, and to do something that would have a place in my setups.
You know I like mine compact for easier performance. Feels great to play with my patch and know a few modules are my own work.
I’ll give my plugin its own thread once I have more than just one module that duplicates an existing feature.
Beautiful name and design. Love the screws!
I don’t know, that colour scheme looks dangerously close to teal.
My plugin is in the library now! Download your FREE copy today or something I guess.
I’m working on a few more modules that will be more useful than the only one I released:
Splirge (released, will add sort mode next)
Splort and Smerge (they sort the channels by voltage, and you can link orders)
Bendlet and Big Bend (pitchbend helper with advanced quantization features. No UI for Big Bend yet, Bendlet is the tiny plug and play version)
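Since I described Splort and Smerge as sorting channels by voltage with linkable orders, here’s a tiny sketch of what I mean by that (illustrative logic, not the modules’ actual code): one polyphonic cable gets sorted, and the resulting permutation can be reused on a linked cable so that, say, pitches and gates stay paired up:

```python
# Illustrative sketch of "sort channels by voltage, with linkable order".
# Not the actual Splort/Smerge implementation.

def sort_channels(voltages):
    """Return sorted voltages plus the permutation that produced them."""
    order = sorted(range(len(voltages)), key=lambda i: voltages[i])
    return [voltages[i] for i in order], order

def apply_order(voltages, order):
    """Reorder a linked cable's channels using a previously captured sort."""
    return [voltages[i] for i in order]

pitches, order = sort_channels([4.0, 1.0, 3.0])  # -> [1.0, 3.0, 4.0]
gates = apply_order([10.0, 0.0, 5.0], order)     # -> [0.0, 5.0, 10.0]
```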
Feels nice to see and hear my own modules in my jam setup. Even if my setup integrating Reason and VCV was fun to use, the mental workload is a bit much, so I’ve simplified it to use only VCV now. To keep things from getting too complex, I’m keeping it constrained to 168hp 12U. I’m treating the jam setup as if it were a physical instrument - I pick up the file where I left it last rather than treat it as a template. Right now I don’t bother recording or streaming or whatever, too much pressure if it has to sound nice rather than interesting, but here’s a random nice moment from today’s session:
Liking Splirge’s functionality and UI a lot! Kudos!!
If you ever get the urge to make a version that stacks two 4x1 merges, would it become Orca’s other heart?
Dialing the Techno Delivery Machine Y’all Need Anything?
Another live jam from my main rack.
Live, no rehearsal, some dangling cables whose purpose I forgot; we just roll with the mistakes and the weird decisions of the random voltage generator. Do I have a clue what the knobs do? Bud, one of the knobs plays random audio files, in languages I don’t know, that I grabbed off archive.org.
Every device has to justify its continued presence in the rack; I shuffle things around all the time to fit the virtual 168hp x 12U budget. I like the idea that my rack is an uninterrupted, evolving performance I pause and resume exactly where I left off.
I might now that I see there’s a use case! But I kinda don’t want to be known as providing the plugin that’s “5 different splits and merges and literally nothing else”, might do it after I offer something with more value than a learning project.
This Song Is For My Sisters Who Need to Eat Lunch to Remain Alive
A jam I recorded before lunch, and uploaded during.
I’m kinda re-learning the hard way that just because generative music can evolve forever, it doesn’t have to, you make better techno when you repeat things a lot.
Given that I often perform using a graphics tablet, I wonder which makes for better videos - a zoomed-out rack I don’t have to scroll, or one sufficiently zoomed in that you can follow what I’m doing better? Patching virtual cables around makes for good spectacle, but I’m not expecting the audience to keep a mental model of what’s happening, so maybe I should rearrange things to fit a zoomed-out view. I don’t need to be able to read the labels to remember what the knobs and jacks do.
Anyway, this series of songs using a slowly evolving pretend hardware rack is fun. I’m a mediocre keyboardist and only use keys to sequence songs, so modular is about the only way I can be a live performer. With the emphasis on randomization, I have to play along the random voltages chosen for me, and I steer the song more than I perform it.
It’s really interesting to watch your mouse pointer search for different sounds