A little late to the game here, but it sounds like VCV introduced a breaking change on a point revision? Have they commented on this issue? Sounds really odd - not the kind of thing I would expect from VCV.
I’m not sure it’s a breaking change. It could be a bug fix. In terms of smooth audio, it shouldn’t affect anything noticeably. If anything, it will likely remove a lot of the need for MinBLEP fixes. I had to fix some things myself, but yes, it was a discrete state that needed configSwitch() instead of smoothly drifting between states. The .json load will work the same.
So what breaks? Pokes into machine state assumed to have constant implementation?
EDIT: To be pedantic about design: assume someone makes a polymorphic map of enum to port, and pokes cable objects to fix a patch load. Then any outer reference would not know which routing control became which. So who takes the blame for “my rack is malfunctioning”, “please dedicate time to read my paid-for log servicing”, “that’s bad, I’ll tell everybody now”, “what do you mean you haven’t done a subtract-one module (then n) for an auto load without a sequencing-choice autofix?”
“He says I’ve got to learn C and understand the source.” … It needs a “METAMODULE” tag … expectations of where the breaks lie?
The Rack web API already does that for the major version. So if a plugin exists for both API version 1.x and 2.x, and you call the update from Rack 2.x you’ll get the 2.x version. It would seem that might be necessary now as well for the minor version.
Exactly, “breaking” is a variable term, and perhaps too strong a word here, since the 2.3.0 ParamQuantity change (smoothing vs. immediate) likely has no effect on 99.99% of plugins. I suspect that VCV deemed the change so minor that it could not have any noticeable effect and proceeded with it, but it so happens that in one of our modules there was an unforeseen side-effect.
In other words, it’s perhaps hard for VCV to know what all 3000 modules are doing, and as Squinky said, it is indeed not the kind of thing we expect from VCV, but no one’s perfect and software is complex. Perhaps @baconpaul’s solution (5-6 posts up from here) is the best one.
For the record also, it has been said before that when new Rack methods are introduced in minor revisions, if we want to use them in our code, then it is understood that users have to have at least the version of Rack containing that new code, so it’s up to developers to manage this. In the present case though, it was more than this, since to get the former behavior, we had to use the new methods. But like I said, Paul proposed an interesting workaround that I will explore.
I don’t disagree much, and it’s also a matter of perspective. Is it still to be perceived as a rinky-dink little project, or as a project with a huge number of users, plugins, lines of code and stakeholders? In my day job as an enterprise developer through the decades, I’ve become acutely aware of the effects of the various types of API changes, and of how many vendors and developers are often blind to those effects; this stuff can be very costly.
VCV says they follow Semantic Versioning (semver.org), and in my book it’s quite clear: if an API provider introduces something new in the API, code written against the new API can become unusable with older versions of the implementation. Therefore it’s always cause for a major version bump, no matter how tempting it can be to sneak seemingly innocent changes in.
And I don’t believe developers managing it for themselves is a workable strategy. They’ll use what’s in the latest version, thinking it will run on older ones because of major-version API compatibility. And yes, I think Paul’s strategy of “vendoring in” required changes is worth exploring. In the meantime I would certainly advise all developers to stay away from using new calls in the API within the same major version, if they can.
For myself, if I ever cave in and start developing for Rack, I’ll adopt the strategy of developing against the latest 2.x SDK but regularly compiling against the 2.0.0 SDK. If it doesn’t compile, I’ll know it won’t work across all 2.x versions, that I used something new I shouldn’t have, and I’ll rip out the call and use something else.
So far so good. I also tested it myself by reverting to Rack 2.2.3, and there were no problems; then going back to 2.3.0 it was all good. It’s definitely a safe idea, just replicating ourselves what is in those small methods you pointed to, with a note in my comments so that at some point in the future I can revert back to the Rack methods, just to keep things as they were intended. Thanks again Paul!
For years I’ve wanted ParamQuantity to wrap the target value of the Engine’s per-sample smoothing algorithm instead of the post-smoothed (immediate) Param::value, when smoothEnabled is true.
Previously, ParamQuantity wrapped this value itself, but the UI should almost always display/get the target value, not the immediate value. Setters are a bit more complicated, since you sometimes need to set a value smoothly, and sometimes need to jump the value regardless of smoothEnabled.
So I made ParamQuantity::set/getValue() behave like set/getSmoothValue() (which was confusingly named), deprecated set/getSmoothValue() in favor of set/getValue(), and added set/getImmediateValue() which behaves like the pre-Rack 2.3.0 behavior of set/getValue().
Adding new Rack API/ABI symbols is allowed when incrementing minor versions, since Rack has followed Semantic Versioning since Rack 1.0.0. I’ve done this hundreds of times, such as adding settings::frameRateLimit in Rack 2.2.0 or APP_OS in Rack 2.0.0 to name a few.
The change seems good! I think the dust-up (now resolved by a workaround) arose because the change hit a corner case in PatchMaster (one of the few plugins that needs to call setValue directly), forcing PatchMaster to switch to setImmediateValue to preserve its old behavior, a choice which would break PatchMaster on old Rack versions.
If an API/ABI change is purely additive, then developers aren’t in the same bind: they can just wait until they think the new version has been sufficiently adopted, and then use the new functionality. There’s no risk of dissatisfying existing users.
But both adds and changes would be improved by something very simple and which several commenters have mentioned versions of. The minimal FR (and I can submit this formally) would be for something like an optional requireRackVersion value in the manifest that elegantly blocked plugin loading on Rack versions < requireRackVersion (possibly with a popup when loading a patch containing the module). This would:
Be very easy to implement (as opposed to having multiple plugin versions in the library, which would be more powerful but much more complicated);
Drive users to update Rack versions, which is generally beneficial;
Make plugin developers feel protected when using new API calls (and make them not feel exposed in the much rarer circumstances where there’s a change to existing behavior);
Avoid having plugin developers taking the customer service hit when plugins don’t work on new versions.
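As a concrete illustration of the FR, the manifest could look something like this. Note that requireRackVersion is not a real plugin.json field (it is the hypothetical field proposed above), and the slug and version values are made up:

```json
{
  "slug": "MyPlugin",
  "name": "My Plugin",
  "version": "2.3.0",
  "license": "GPL-3.0-or-later",
  "requireRackVersion": "2.3.0",
  "modules": []
}
```

Rack would compare its own version against requireRackVersion at plugin-load time and refuse (with a clear message) rather than fail with a missing-symbol error.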
I don’t think anybody doesn’t want new API functions in new minor versions of Rack (they’re great!), but not all users will update point versions immediately, and silent errors with modules disappearing aren’t good for VCV or for plugin devs.
I’m just commenting from the sidelines here–those who were actively affected by this should definitely chime in and clarify/correct.
1. MAJOR version when you make incompatible API changes
2. MINOR version when you add functionality in a backwards compatible manner
3. PATCH version when you make backwards compatible bug fixes
When a developer uses new API symbols, it renders the plugin incompatible with older Rack versions, even within the same major version series. I would argue that’s an “incompatible API change”, as in the “1. MAJOR” clause, and therefore the new API symbols should be reserved for new major API versions. If the policy is that users should always be on the latest version of Rack because of this, I think it should be clearly stated and documented somewhere prominent on the official website; otherwise it just causes grief.