i appreciate this perspective, though i’ve found that most people are pretty bad at clearly articulating their intent - which is a huge driver of what i’m working on with Cheshire.
if you’ve ever gotten requirements or a PRD to implement against, you’ve probably run into all sorts of ambiguous cases and terms, missing logic, implicit assumptions, etc.
pushing that sort of “intent” straight into generative ML systems is a bit like playing with the monkey’s paw - it might give you what you asked for, but not what you intended.
i’m hoping to provide a useful affordance in between - an intermediate representation of user intent as human-readable symbolic logic - so that people can iterate on, clarify, and understand their own framing of intent before asking AI to generate something from it.
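to make the idea concrete, here’s a rough sketch (purely hypothetical - not Cheshire’s actual representation or API) of what rendering a requirement as inspectable symbolic rules might look like, where the gaps in the original wording become visible fields you can question and edit:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Rule:
    """Hypothetical intermediate representation of one requirement."""
    name: str
    conditions: tuple  # (field, operator, value) triples over some domain model
    outcome: str

# the prose requirement "premium users get free shipping on orders over $50"
# becomes structured data, and the unstated assumptions surface as questions:
free_shipping = Rule(
    name="free_shipping",
    conditions=(
        ("user.tier", "==", "premium"),  # what about trial or grandfathered tiers?
        ("order.total", ">", 50),        # pre-tax or post-tax? which currency?
    ),
    outcome="shipping_cost = 0",
)

# because the rule is data rather than prose, you can enumerate and review
# every condition before anyone tries to generate an implementation from it
for field, op, value in free_shipping.conditions:
    print(f"{field} {op} {value!r}")
```

the point isn’t the specific encoding - it’s that the representation is something a human can read, argue about, and correct before it gets handed to a generator.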