Whether computers are musical instruments or not depends a lot on how traditional you are. On one hand you have software from Mario Paint Composer to LSDJ, and on the other hand you can control the audio of a video game using CVs; besides, the visual experience is a plus.
If it is a music orientated video game then sure this would be relevant!
Even if it's not music-oriented, Coirt. For instance, you can loop the audio of a game sequence (most have music) simply by loading a game state (a game state, in emulation, is a saved moment of the video game that can be loaded back). You can save a number of game states and trigger them at different moments of a song, and that's just one of the most basic ideas. What about changing the speed of the video game (some emulators have this feature), or different pitches? I'm not sure if other features like reverse could be added…
Edited: this is a sample of a simple loop.
You can see video games as a kind of sequencer. We have Unless Games' Piong already (and I needn't link my work here for the 100th time), and that can lead to some interesting ideas. We have a lot of other sequencers that are less overtly game-related but still are not expressions of traditional ideas about composition, and I see video games as another form of that.
We're getting dangerously off-topic, but in addition to games explicitly about participating in the music process (Mario Paint, Otocky, SimTunes, Electroplankton, Vib Ribbon, Rez, and hundreds of rhythm games), there are things like the Automatic Mario Sequencer fad a decade ago, which used romhack level editors to treat the deterministic nature of Super Mario World's physics as a musical instrument for remixing.
It’s the kind of mad science that feels exactly at home in VCV, but it can only happen in a culture of experimentation with repurposed tools, where you can share modules without having to demonstrate a priori they are relevant and appropriate to a curator, which is thankfully currently the case.
A Video Monitor module would display video sent over a polyphonic cable. There would be some kind of bus standard defining the stream of pixels divided into frames. One possibility would be to have x, y, r, g, b channels defining a stream of pixels, plus a gate indicating that the frame is done.
Other modules could be plugged into the Video Monitor and send video signals to it.
At 30 frames per second and a 44.1 kHz sample rate, you get 1470 pixels per frame. Hmm… maybe the bus should send more pixels in parallel? Or perhaps it's more like an LED grid display?
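The arithmetic above can be sketched like this. Everything here is an assumption, not an agreed standard: a 6-channel polyphonic cable carrying x, y, r, g, b and a "frame done" gate, one pixel per audio sample, with values scaled to 0–10 V as is typical for Rack CVs.

```python
# Sketch of the hypothetical pixel-bus idea (layout and scaling are assumptions).
# One pixel per audio sample on a 6-channel polyphonic cable:
# x, y, r, g, b, and a gate that goes high on the last pixel of a frame.

SAMPLE_RATE = 44100
FPS = 30
PIXELS_PER_FRAME = SAMPLE_RATE // FPS  # 1470 pixels available per frame

def encode_frame(pixels):
    """Turn a list of (x, y, r, g, b) pixels (normalized 0..1) into
    per-sample channel tuples scaled to 0-10 V."""
    samples = []
    for i, (x, y, r, g, b) in enumerate(pixels):
        gate = 10.0 if i == len(pixels) - 1 else 0.0  # end-of-frame marker
        samples.append((x * 10.0, y * 10.0, r * 10.0, g * 10.0, b * 10.0, gate))
    return samples

# A tiny 2x2 all-white frame:
frame = [(0.0, 0.0, 1, 1, 1), (1.0, 0.0, 1, 1, 1),
         (0.0, 1.0, 1, 1, 1), (1.0, 1.0, 1, 1, 1)]
stream = encode_frame(frame)
```

At full resolution a frame would of course need far more than 1470 pixels, which is why sending several pixels in parallel (or treating it as a low-res LED grid) seems necessary.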
Hi @George_Locke ! I just posted a sample based granulator to the development forums for beta testing. Here are some videos of it. It doesn’t sound quite like the BubbleBlower in audiomulch, but it’s in the ballpark, maybe?
This is my first dip into programming granular synthesis and I love it! If there are any specific algorithms that you’re looking for, let me know and I’ll consider creating a module for it!
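For anyone curious what "programming granular synthesis" boils down to, here is a minimal sketch (this is not the module's actual algorithm, and all names and parameters are illustrative): scatter short Hann-windowed grains taken from a source buffer and overlap-add them into the output.

```python
import math
import random

def granulate(sample, grain_len=1024, num_grains=50, out_len=44100, seed=0):
    """Minimal granular-synthesis sketch: copy Hann-windowed grains from
    random positions in `sample` to random positions in the output."""
    rng = random.Random(seed)
    # Hann window so each grain fades in and out without clicks.
    window = [0.5 - 0.5 * math.cos(2 * math.pi * n / (grain_len - 1))
              for n in range(grain_len)]
    out = [0.0] * out_len
    for _ in range(num_grains):
        src = rng.randrange(0, len(sample) - grain_len)  # where to read
        dst = rng.randrange(0, out_len - grain_len)      # where to write
        for n in range(grain_len):
            out[dst + n] += sample[src + n] * window[n]  # overlap-add
    return out

# Granulate one second of a 220 Hz sine "sample":
sr = 44100
sine = [math.sin(2 * math.pi * 220 * n / sr) for n in range(sr)]
cloud = granulate(sine)
```

Real granulators add per-grain pitch shifting, envelopes, and densities, but the read/window/overlap-add core is the same idea.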
Anyone remember Electroplankton? Good stuff!
Reminds me of what you can do with JW-modules Bouncy Balls…
Yes, I remember spending hours on it! Thanks for reminding me ^^ I loved the Lumiloop one, it was my introduction to drone music.
these last posts are very dangerous for my social life…
Inspired by all the game stuff, FABRIKsound: Forward And Backward Reaching Inverse Kinematics (FABRIK) by Andreas Aristidou:
Excellent code inspiration & guide & video:
Create “bots” and attach sensors to joints and extremities. Use V/oct to move extremities, or use a mouse, joystick, touchpad/screen. The IK system calculates all the new/next positions of the bones and joints. Register their movements and turn them into sound. Depending on the way they are attached to each other, the joints' movements are more or less correlated.
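The FABRIK solver at the heart of this idea is pleasantly compact. A minimal 2-D sketch of Aristidou's backward/forward reaching passes (Python for clarity; parameter names are mine, not from the paper):

```python
import math

def fabrik(joints, target, tolerance=1e-3, max_iter=50):
    """Minimal 2-D FABRIK solver: alternate backward and forward reaching
    passes until the end effector is close to the target.
    `joints` is a list of (x, y) points from base to end effector."""
    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])

    def lerp(a, b, t):  # point at fraction t along the segment a -> b
        return (a[0] + (b[0] - a[0]) * t, a[1] + (b[1] - a[1]) * t)

    lengths = [dist(joints[i], joints[i + 1]) for i in range(len(joints) - 1)]
    base = joints[0]
    # Target out of reach: just stretch the whole chain toward it.
    if dist(base, target) > sum(lengths):
        for i in range(len(joints) - 1):
            t = lengths[i] / dist(joints[i], target)
            joints[i + 1] = lerp(joints[i], target, t)
        return joints
    for _ in range(max_iter):
        # Backward pass: pin the end effector to the target, walk toward base.
        joints[-1] = target
        for i in range(len(joints) - 2, -1, -1):
            t = lengths[i] / dist(joints[i + 1], joints[i])
            joints[i] = lerp(joints[i + 1], joints[i], t)
        # Forward pass: pin the base back in place, walk toward the end.
        joints[0] = base
        for i in range(len(joints) - 1):
            t = lengths[i] / dist(joints[i], joints[i + 1])
            joints[i + 1] = lerp(joints[i], joints[i + 1], t)
        if dist(joints[-1], target) < tolerance:
            break
    return joints

# A three-bone arm reaching for a nearby point:
arm = fabrik([(0, 0), (1, 0), (2, 0), (3, 0)], target=(2, 1))
```

Every joint position this produces could be sampled each frame and mapped to CV, which is presumably where the "register their movements, turn them into sound" step comes in.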
An extensive walk through on modeling a tube amp by Will Pirkle:
TS-309A - a performance-oriented trigger/gate sequencer: based on the capabilities of the Doepfer A-155/A-154 combo (and inspired by how Steevio uses them in live performances).
I have a hi-res UI mockup with some documentation available here (in PDF):
Rather than provide a whole lot of detail here, I ask that you view what I have up there and then I can answer questions here.
I’ve thought through the design and think what I have at this point is pretty close to having the functionality and playability I was after. I used to be an App designer/programmer but a) have no C++ experience (just a little Java) and, more importantly, b) am retired and not really up to slogging up the C++ learning curve… so, I’m basically looking for a developer to take this on and I’m happy to help in any way I can. I mainly just want to use this thing!
Particulars like its name (TS-309A), the module developer name (Dream On Modules) and the color scheme I’ve used in the mockup are all changeable, of course, to match the developer’s “look” and name, and I can provide SVG files accordingly.
You might want to have a look at “Dumbwaiter” by Holonic Systems, it has all the functionality (except the slew limiter) but with a sequence length of 8 steps.
Thanks. Yeah, I’m familiar with that one. It is not implemented to be used in a live setting, though. You cannot easily move the “active” steps of the sequence around in the way I would think should be easy/useful. It’s interesting that it cycles around if the length setting takes it past step 8 (does the A-155 do that, too?) but I find that more confusing than helpful. If, for example, you want to leave the last step at 7 but then quickly change the first step between, say, 1 and 3, you’ll see its limitations.
Also, the way the trigger switch is designed/implemented, you have to cycle through all three positions (0, 1, 2, 0, 1, 2, …) meaning, for example, going from 1 to 2 takes one click but from 2 to 1 takes two clicks. The module has no “one-shot mode”. It doesn’t send out gates. It can’t be manually stepped (or started or stopped or reset). …
CV Spirograph takes a pair of attenuated inputs and turns them into the radius and angle of a polar graph, outputting the X/Y position in real time. Great for audio timbre transformation, and it can also function as a VCA. Inputs accept CV or audio at ±5 V; internally it uses its outputs as new inputs.
Is there currently any way to have a plug-in for the rack and Ableton (or another DAW) that can port audio directly between a DAW and VCV?
I have a similar idea in my backlog. Convert cartesian inputs to polar outputs, and vice versa. I may get to it one of these days.
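The core per-sample mapping such a module would compute is just the standard coordinate conversion (a sketch only; voltage scaling, attenuation, and the feedback path are omitted):

```python
import math

def cart_to_polar(x, y):
    """Cartesian (x, y) -> polar (radius, angle in radians)."""
    return math.hypot(x, y), math.atan2(y, x)

def polar_to_cart(radius, angle):
    """Polar (radius, angle) -> cartesian (x, y)."""
    return radius * math.cos(angle), radius * math.sin(angle)

# Round trip: the point (1, 1) is radius sqrt(2) at 45 degrees.
r, a = cart_to_polar(1.0, 1.0)
x, y = polar_to_cart(r, a)
```

Run at audio rate on a pair of inputs, either direction of this mapping bends timbres in the spirograph-like way described above.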