I import my tracks (whether I use NYSTHI Master Recorder or the 4/8 Multitrack Recorder) into Reaper, trim any silence off the head and tail, drop a common fade in/out on everything, normalize (by the loudest track), and then export as FLAC or WAV…
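For anyone who'd rather script that step than do it by hand in the DAW, here's a rough, stdlib-only Python sketch of the trim/fade/normalize pass. It assumes 16-bit PCM WAV; the function name, threshold, and defaults are hypothetical, not anything from Reaper or NYSTHI:

```python
import wave, array

def trim_fade_normalize(in_path, out_path,
                        threshold=0.01, fade_ms=20, peak=0.95):
    """Trim leading/trailing silence, apply short linear fades,
    and peak-normalize. Rough sketch for 16-bit PCM WAV only;
    a DAW does all of this with far more care (no dithering here).
    """
    with wave.open(in_path, "rb") as w:
        params = w.getparams()
        samples = array.array("h", w.readframes(w.getnframes()))

    ch = params.nchannels
    limit = int(threshold * 32767)

    # Find the first and last frames louder than the silence threshold.
    frames = [samples[i * ch:(i + 1) * ch] for i in range(len(samples) // ch)]
    loud = [i for i, f in enumerate(frames) if max(abs(s) for s in f) > limit]
    if not loud:
        raise ValueError("file is entirely below the silence threshold")
    frames = frames[loud[0]:loud[-1] + 1]

    # Linear fade in/out over fade_ms milliseconds.
    fade_n = int(params.framerate * fade_ms / 1000)
    for i in range(min(fade_n, len(frames))):
        g = i / fade_n
        frames[i] = array.array("h", (int(s * g) for s in frames[i]))
        frames[-1 - i] = array.array("h", (int(s * g) for s in frames[-1 - i]))

    # Peak-normalize to `peak` of full scale, clamped to int16.
    flat = [s for f in frames for s in f]
    gain = peak * 32767 / max(1, max(abs(s) for s in flat))
    out = array.array("h", (max(-32768, min(32767, int(s * gain))) for s in flat))

    with wave.open(out_path, "wb") as w:
        w.setparams(params)  # wave fixes up nframes on close
        w.writeframes(out.tobytes())
```

Batch-normalizing "by the loudest track" would mean computing one shared gain across all files first rather than calling this per file.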
Same here, but using Audition
I’ve been refining my process over time, but right now I record out using the NYSTHI 4/8 track recorder, then explode the file into tracks in Reaper. Occasionally I’ll use ReaRoute and send each channel direct to Reaper.
Then I’ll set the project tempo to match the BPM used in the patch, move the items so that they line up with the click, and trim any excess from either end (I guess this is only really useful for adding tempo-synced effects).
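Lining items up to the click is just tempo arithmetic; a tiny sketch (helper names are mine, not anything from Reaper) of where a given beat lands at the project tempo:

```python
def beat_time(beat, bpm):
    """Seconds from the start at which a given beat lands."""
    return beat * 60.0 / bpm

def beat_sample(beat, bpm, sample_rate=48000):
    """The same position expressed in samples, for trimming a recording."""
    return round(beat_time(beat, bpm) * sample_rate)
```

For example, at 120 BPM beat 4 lands exactly 2 seconds in, which is where you'd snap an item's start when aligning it to the grid.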
From there it’s pretty much a mix project. I’ll work on the levels and panning, treat each track with EQ, check for harsh frequencies, and add compression, saturation or other tweaks if they’re needed. Some arrangement happens here too, if I feel the flow isn’t working. I usually add effects tracks and route some of the parts through those if I have an idea or feel it’s lacking something. Usually a nice long reverb comes into the equation somewhere.
When I’m happy with it, I’ll render it out to a mastering project and work on that till it sounds as good as I’m capable of making it sound.
Probably more explanation than is needed but that’s where I’m at
So Ray, you just leave everything dry in Rack, record separate tracks, and treat them with effects (including EQ, compression, reverb) in the DAW later? I always have a hard time listening to dry tracks while I’m creating, and don’t want to have to shape my sound twice. I usually just do some minor tweaks with Ozone in a DAW to one master track I recorded in Rack, but I realize that greatly limits my flexibility later. Maybe I should start using Host-fx more, so I still get that DAW-like flexibility of treating each track separately and won’t ever have to leave Rack other than to chop & fade the ends of a recording. How to handle that situation has definitely perplexed me for a while, so for now I just avoid it and have fun experimenting with sound.
Oh no, I always have some effects in the rack but I don’t worry about EQ or compression or anything else that I know I can tweak in the mix/master process - unless I’m on the hunt for a particular sound.
I treat each patch as if it were for a live performance (I’ve never performed live) and aim for the sound and feel I’m after, but with the knowledge that it’ll be finished in Reaper. This lets me keep the CPU happy by focusing on the modules I really need.
I’ve played with Host-fx, but for my purposes I tend to keep VST effects out of the rack. There’s no technical benefit to this AFAIK; it just suits my workflow better not to be juggling VST windows while I’m patching (I only have one monitor).
EDIT: I’d just like to clarify that I’m a complete amateur. Reading my posts back gives more of the impression that I know what I’m doing than I actually do!
I usually record with NYSTHI Master Recorder, then move to Reaper: crop to the proper length, add fade in/out to taste, and normalize. I usually use a compressor on the master track, and maybe EQ if some adjustment seems appropriate.
Hm. The recorder modules are all closed source right now aren’t they?
Very similarly: I trim the silences, and add fades if needed. I don’t normalise, but I usually do basic levelling and a bit of EQ and compression, then export as WAV to upload to SoundCloud. The recording itself has so far always been a “one take” performance of a full patch, but I’m looking into pushing myself towards more part composition, and then taking it to the DAW to put the pieces together. Maybe get some more traditional instrumentation in on it all. Despite my love of Rack over the past year or so, I still see myself as primarily a bassist, but my basses have been sidelined recently. It’d be nice to get back to them.
I am making dance trax, so for me it’s utterly important that I record into my DAW with everything synced to the project at hand. This is very difficult in the current version of Rack (mega MIDI drift), so I use a 24 ppqn clock sync (it only took me a month to get this right). I usually record dry and use FX in the DAW, unless the desired effect is an FX, in which case I go crazy in VCV. Lookin’ forward to when VCV is stable enough to use as an FX host on dry tracks within my DAW; that’s the next level for me.
@Soothsayer, why is it “utterly important”? (curious)
Could you share your method here? It sounds likely to be useful to save others the same pain.
For a very long time I did nothing but record them, though I would really struggle with levels. Now (or I will, once I actually have a working laptop again; I’ve finished nothing since November) I usually stick them through T-RackS One (lazy) or Zynaptiq Intensity (even lazier; fortunately I have a free NFR of this) in Host FX.
I’m just a dabbler though, I don’t know a great deal about mixing or mastering.
Because it’s dance trax, it needs to be in sync. I’ve had experiences where I recorded 5 min of audio and couldn’t sync it correctly on the beat… mega MIDI drift issues with Bridge as well.
I could share my setup if enough people are interested. It’s a bit complex, so it will take an entire post, and another member sort of pioneered it, so I don’t want to steal the glory, but I can ask him and perhaps we can make a post together. Basically I use a DC-coupled audio interface: use SW SYNC or BW Clock and send a clock into “Clocked”.
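As a sanity check on a setup like this, the pulse timing a 24 ppqn clock implies is easy to compute; a small sketch (hypothetical helpers, not from any Rack module):

```python
def pulse_rate_hz(bpm, ppqn=24):
    """Clock pulses per second at a given tempo and resolution."""
    return bpm / 60.0 * ppqn

def pulse_interval_ms(bpm, ppqn=24):
    """Milliseconds between successive clock pulses."""
    return 1000.0 / pulse_rate_hz(bpm, ppqn)
```

At 120 BPM a 24 ppqn clock runs at 48 pulses per second, i.e. one pulse roughly every 20.8 ms, which is why even small per-pulse jitter accumulates into audible drift over a 5-minute take.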
I’ve used the NYSTHI recorders within patches, but more recently I’ve simply routed the audio out from Rack to ecasound, a terminal-based DAW for Linux that is extremely handy for jobs like this. Afterwards I edit the ecasound recordings with Magnus Hjorth’s mhWaveEdit, a lightweight, powerful audio file editor that is also very handy for quick & easy work. Typical edits are trimming and normalization (all other processing is done inside Rack).

For a recent patch (Strings & Arpeggio) I had to record the piece in two passes, one for the strings and bass, one for the arpeggio part. This was necessary because the patch is too much for my system; it causes too many xruns in performance. Anyway, after the recordings were made and edited as described, I put both into Ardour, aligned the two tracks (Ardour is great for this job), set my track levels, added a wee touch of compression to the master mix, and there ya go.
Since I sometimes do mastering for other people, I do it for myself as well. So I record via the NYSTHI 2-channel recorder (24-bit, of course, for more headroom) and import it into Ableton. I take an EQ, tape simulation or compressor, see what my spectrum analyzer or phase meter shows, then use Ableton’s Utility to set the bass-mono frequency and the “Width” knob. Then I take a limiter, plus the Youlean meter to watch the LUFS… and squash it until I’m happy )
Then I take the track or album into a player or some other “shit” playback system as a control, and listen in a different environment than my monitoring setup.
48 kHz at 24 bits, that’s how we roll here at Studio D. Long ago I followed a suggestion from fellow Csounder Michael Gogins, and he was right about the better sound at that sample rate and bit depth.
Still experimenting and/or making sounds for “personal consumption”, but I do as many of you: record the .wav and edit/mix/master in a DAW or an editor such as Audacity (MY go-to), albeit I’d RATHER use WaveLab or Sound Forge if my budget would let me.
Another thing I like doing, though, is running the wave through a “mangling process” to see what bizarre sounds and sequences I can come up with. There’s this rather old program on the web called Mammut (Mammoth) that I enjoy: it has some nice long stretching and other FX, and it’s fun to play with. Other times I’ll put it in a DAW or Audacity and just throw plugins at it, or “chop” it and throw plugins at slices to make loops and one-shots for VSTs and stuff like that.
I could share my setup if enough people are interested. It’s a bit complex so will take an entire post,
I’m curious if it doesn’t take too much effort for you to write it up…
You didn’t really answer the question.
Is it because you produce tracks only partially with Rack (as opposed to entirely within it)?
What do you need to sync it with?