Only halfway through the latest podcast ep, but my curiosity is piqued. Is there a version of, like, unit testing where the software generates randomized states from its internal state and unknown inputs, presents the outcomes in a report, and asks humans to correct any erroneous assumptions or missing contingencies? IDK how to even describe what I'm thinking of, but I feel like it must exist, or maybe it's still theoretical in the emerging AI approaches to creating software.
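Something like this, maybe? A toy sketch of the loop I'm imagining — every name here (the thermostat-ish system, the invariant, the report format) is invented just for illustration, not any real tool:

```python
# Rough sketch of the idea: generate random internal states plus
# unpredictable inputs, run the system forward, and emit a report
# that asks a human to confirm or correct the baked-in assumptions.
import random


def step(state: dict, external_input: float) -> dict:
    """Toy system under test: a thermostat-ish state machine."""
    heating = state["temp"] + external_input < state["setpoint"]
    return {
        "temp": state["temp"] + external_input + (1.5 if heating else 0.0),
        "setpoint": state["setpoint"],
        "heating": heating,
    }


def random_state() -> dict:
    """Randomized internal state, standing in for states the software reaches on its own."""
    return {
        "temp": random.uniform(-10.0, 40.0),
        "setpoint": random.uniform(15.0, 25.0),
        "heating": False,
    }


def generate_report(n_cases: int = 5) -> None:
    random.seed(42)  # fixed seed so the report is reproducible for reviewers
    for i in range(n_cases):
        state = random_state()
        unknown_input = random.uniform(-5.0, 5.0)  # stand-in for inputs we can't predict
        outcome = step(state, unknown_input)
        # An assumption the software *believes* holds; a human corrects it if it's wrong.
        assumption_ok = -20.0 <= outcome["temp"] <= 50.0
        print(f"case {i}: state={state}")
        print(f"  input={unknown_input:.2f} -> outcome={outcome}")
        verdict = "holds" if assumption_ok else "VIOLATED -- please review"
        print(f"  assumed invariant (temp within [-20, 50]): {verdict}")


if __name__ == "__main__":
    generate_report()
```

The closest real things I can name are property-based testing (QuickCheck, Hypothesis) for the randomized-generation half and approval testing for the human-reviews-the-report half, but I don't know of one tool that closes the loop the way I mean.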
Kind of like fuzz testing, I guess. My brain is going somewhere bigger with this, though; maybe I'll check in with it again once I find time to finish the podcast.