> What tool are you using under the hood to do the model finding and indicate if the constraints hold?

This is built using a very small part of Z3. If we hit an unsatisfiable set, we permute the set of constraints, progressively removing more and more of them until we get some results -- is that what you were asking?
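The relaxation loop described above can be sketched roughly like this. This is a hypothetical illustration, not the actual implementation: the `is_satisfiable` stand-in here would be a Z3 `Solver.check()` call in practice, and the constraint strings and removal order are made up.

```python
# Toy sketch of "progressively removing constraints until we hit some results".
# In the real tool the satisfiability check would be a Z3 solver call; here a
# stand-in predicate flags one hard-coded contradiction.

def is_satisfiable(constraints):
    # Stand-in for `Solver.check() == sat`: this toy set is "unsatisfiable"
    # only when it contains both of the two conflicting constraints.
    return not ("x > 5" in constraints and "x < 3" in constraints)

def find_sat_subset(constraints):
    """Drop constraints one at a time until the remainder is satisfiable."""
    cs = list(constraints)
    while cs:
        if is_satisfiable(cs):
            return cs
        cs.pop()  # remove the most recently added constraint and retry
    return []

print(find_sat_subset(["y == x + 1", "x > 5", "x < 3"]))
# -> ['y == x + 1', 'x > 5']
```

The removal order is a design choice; dropping the most recent constraint first tends to preserve the user's earlier intent.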
> The presentation of those three tools suggests that some common underlying technology is a commodity not worth discussing any more: the recognition of hand-drawn objects and their subsequent manipulation by code. Is that true? Are there well-understood algorithms for this? Maybe even ready-made implementations?

To the contrary: there's pretty much no "recognition" happening in any of the demos (the one small exception is the delta-symbol and star-symbol recognition in the "sketchy charts" part) -- everything else happens via spatial relations from known objects. This is definitely a big technical challenge, especially since "AI" doesn't vibe well with what we're after (it's an unpredictable black box).
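To make "spatial relations from known objects" concrete, here is a hypothetical sketch (the object names, coordinates, and bounding-box test are all invented for illustration): instead of recognizing what a stroke *is*, the system checks where the stroke lands relative to objects it already knows about.

```python
# Hypothetical illustration: relate an unrecognized pen stroke to known
# objects purely by spatial overlap, with no shape recognition involved.

def bbox(points):
    """Axis-aligned bounding box of a stroke (list of (x, y) points)."""
    xs = [x for x, _ in points]
    ys = [y for _, y in points]
    return min(xs), min(ys), max(xs), max(ys)

def overlaps(a, b):
    """True if two (x0, y0, x1, y1) boxes intersect."""
    ax0, ay0, ax1, ay1 = a
    bx0, by0, bx1, by1 = b
    return ax0 <= bx1 and bx0 <= ax1 and ay0 <= by1 and by0 <= ay1

# Objects the system already knows about (name -> bounding box).
known_objects = {"chart_area": (0, 0, 100, 100), "toolbar": (200, 0, 250, 20)}

stroke = [(10, 20), (30, 40), (55, 35)]  # raw pen stroke, never classified

related = [name for name, box in known_objects.items()
           if overlaps(bbox(stroke), box)]
print(related)  # -> ['chart_area']
```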
> Such as recognizing a box or a tick mark as an object distinct from the other pen strokes on the page, and then moving it around.

Yes, this is what I was trying to answer: we don't do that, and I don't think it can really be 100% perfect. For example, a box can be drawn in 1, 2, 3, or 4 strokes (or more if you're sloppy), and it's easy to make one that's slightly rotated, or skewed, or.... I don't believe there's a heuristic that can cover all of that and work every time. So one direction from here is to do the best we can and accept that there can be errors/miscategorizations (which I think is not great -- because then you have to adjust how you draw things to make the computer understand you, and you're one step closer to just filling in forms again). Another direction (the one I'm more interested in) is to figure out a different, systemic approach to the problem so that 'recognition' is not needed.
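The multi-stroke problem above can be shown with a toy example. This is an invented, deliberately naive detector (the rule, tolerance, and strokes are all hypothetical): a "box is one closed stroke" heuristic immediately misses the identical shape drawn as two L-shaped strokes.

```python
# Toy detector illustrating the failure mode: a naive rule like "a box is a
# single closed stroke with at least 4 points" misses a box drawn in 2 strokes.

def is_box_single_stroke(stroke, tol=5):
    """Naive rule: stroke ends near where it started and has >= 4 points."""
    closed = (abs(stroke[0][0] - stroke[-1][0]) <= tol
              and abs(stroke[0][1] - stroke[-1][1]) <= tol)
    return closed and len(stroke) >= 4

# The same box, drawn two different ways:
one_stroke_box = [(0, 0), (10, 0), (10, 10), (0, 10), (0, 0)]
two_stroke_box = [[(0, 0), (10, 0), (10, 10)],   # first L-shaped stroke
                  [(10, 10), (0, 10), (0, 0)]]   # second L-shaped stroke

print(is_box_single_stroke(one_stroke_box))               # -> True
print(any(is_box_single_stroke(s) for s in two_stroke_box))  # -> False
```

Each patch for a new drawing style (two strokes, rotation, skew) adds another special case, which is the point being made above.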