Using the GPU to process the feed is really interesting given how CPU-bound VCV is. I wonder if the logic could be prototyped entirely in web tech and then ported to VCV, with both projects being different frontends to the same algorithm. I happen to dabble in 3D stuff and use coffeescript for personal projects, so I'm definitely going to look under the hood and try to tweak some stuff!
the shader part is easily reusable between web and native, and it would be best anyway to move more of the logic into the shader (the 3x3 pixel lookup isn't ideal right now, but I need to spend more time with shaders before I can write a proper selective bloom/blur pass that would color those 1-pixel renders with greater certainty about each cell's state)
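for anyone curious what the CPU-side 3x3 lookup amounts to, here's a rough sketch (not the actual code, and `pixels`/`cellState` are names I made up): each cell is rendered as a 3x3 pixel block, read back with something like `gl.readPixels`, and the block's average brightness decides the cell's state.

```javascript
// Sketch: decide a cell's on/off state by averaging the luminance of
// the 3x3 pixel block rendered for it.
// `pixels` is an RGBA byte buffer (as returned by gl.readPixels),
// `width` is the framebuffer width in pixels.
function cellState(pixels, width, cellX, cellY, threshold = 128) {
  let sum = 0;
  for (let dy = 0; dy < 3; dy++) {
    for (let dx = 0; dx < 3; dx++) {
      const i = ((cellY * 3 + dy) * width + (cellX * 3 + dx)) * 4;
      // quick luma approximation from the R, G, B channels
      sum += 0.299 * pixels[i] + 0.587 * pixels[i + 1] + 0.114 * pixels[i + 2];
    }
  }
  return sum / 9 >= threshold; // true = cell considered alive
}
```

a bloom/blur pass in the shader would effectively do this averaging on the GPU before readback, which is why moving it there should make the detection more robust.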
I've tried messing a bit with the OpenGL widget inside Rack before, and it seemed functional enough for all of this, except for the lack of camera input and some of the 3D conveniences that three.js provides (scene graph, material/geometry handling, etc.).
prototyping this sort of thing in a browser is a good fit for sure!
And as we saw, OpenCV webcam input works in Rack 0.x on OS X, so we have all the pieces of the puzzle in place.
added a grayscale detector mode that might work better with your marbles! (also cleaned up the code a bit)
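the idea behind the grayscale mode, roughly (function and threshold names are mine, not from the actual code): instead of matching a pixel's hue, classify it by its luminance alone, which should cope better with marbles that are dark/light rather than strongly colored.

```javascript
// Sketch of a grayscale detector: classify a pixel as a dark marble,
// a light marble, or background, using only its luminance.
function detectGray(r, g, b, darkThreshold = 80, lightThreshold = 175) {
  // standard Rec. 601 luma weights
  const luma = 0.299 * r + 0.587 * g + 0.114 * b;
  if (luma < darkThreshold) return "dark";
  if (luma > lightThreshold) return "light";
  return "none";
}
```

the thresholds would need tuning per lighting setup, of course.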