# thinking-together
k
I've made the problem harder for myself because I want all programming to involve tests. (We've chatted about this before, Chris: https://news.ycombinator.com/item?id=10073578#10075306) So far in my experience, teaching the value of tests to non-programmers is still really hard/unproven. I can be convinced it isn't essential. Maybe non-programmers don't need to learn to write tests, but they may need to deal with tests others have written?
i
To take a slightly contrarian view here - I don't think tests are what matters, you want to teach people to add constraints to their systems. Tests are just there to expose violations of those constraints and an automated system can find that much better than we ever can 🙂
💡 4
the last version of Eve had some really cool early work on this
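The "constraints, not tests" idea above can be sketched: the user states an invariant once, and an automated search hunts for violations instead of the user enumerating cases. This is just a minimal illustration, not how Eve did it; all names here are hypothetical.

```python
import random

def dedupe(items):
    # the system under test: remove duplicates, keeping first occurrences
    seen = set()
    out = []
    for x in items:
        if x not in seen:
            seen.add(x)
            out.append(x)
    return out

def check_constraint(fn, constraint, trials=1000):
    """Throw random inputs at fn; return the first input that violates the constraint."""
    for _ in range(trials):
        data = [random.randint(0, 9) for _ in range(random.randint(0, 20))]
        if not constraint(data, fn(data)):
            return data  # a counterexample the tool can show the user
    return None

# the user states *what must hold*, not individual test cases:
no_dupes = lambda inp, out: len(out) == len(set(out))
only_from_input = lambda inp, out: all(x in inp for x in out)

assert check_constraint(dedupe, no_dupes) is None
assert check_constraint(dedupe, only_from_input) is None
```

Libraries like Hypothesis do this search far more cleverly, but the shape is the same: one declared constraint, machine-generated violations.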
k
Squinting through my memory, adding constraints also feels hard for non-programmers in the same way as tests. It might be the essential irreducible core of what makes tests hard..
i
It could just straight up tell you the ways in which your program could fail
Yeah, it definitely is hard
I suspect there are lots of fun ways to use contextual information to make it easier though.
💡 1
e.g. based on how some information is going to be used, you can tell a good deal about what "reasonable" probably looks like
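To make that concrete with a toy sketch (hypothetical names, not a real system): if a user's number is about to index into a list of twelve months, the tool can derive what "reasonable" looks like from that usage alone, with no test written.

```python
months = ["Jan", "Feb", "Mar", "Apr", "May", "Jun",
          "Jul", "Aug", "Sep", "Oct", "Nov", "Dec"]

def inferred_constraint(n):
    # derived from usage: n is used as an index into `months`,
    # so a "reasonable" n is an int in [0, len(months))
    return isinstance(n, int) and 0 <= n < len(months)

assert inferred_constraint(3)
assert not inferred_constraint(14)   # the tool can flag this before anything crashes
assert not inferred_constraint(-1)
```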
k
In my prototype you could click on a result in the REPL to make it green. But the feedback on clicking to make it green was delayed so I could never quite convince people to think about when to do it. You're right, tools can be smarter here.
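The "click to make it green" interaction is essentially snapshot testing: blessing the current REPL value as the expected one. A rough sketch of that mechanic (the names and the dict-based store are invented for illustration):

```python
snapshots = {}

def approve(name, value):
    """The 'click': bless the current value as correct."""
    snapshots[name] = value

def check(name, value):
    """Later runs compare against the blessed value."""
    if name not in snapshots:
        return "unreviewed"          # nothing blessed yet, no opinion
    return "green" if snapshots[name] == value else "red"

approve("total", sum([1, 2, 3]))
assert check("total", sum([1, 2, 3])) == "green"
assert check("total", sum([1, 2, 3, 4])) == "red"
assert check("other", 42) == "unreviewed"
```

The delayed-feedback problem noted above is exactly the gap between `approve` and the first `check` that turns red.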
c
I think the ultimate goal would be a kind of "execution diff" where once you change the code it shows you all the ways you changed the behaviour. Staggeringly hard to implement I imagine.
👍 2
In fact almost certainly equivalent to the halting problem 🤔
i
depending on your semantics you can do that
1
there are also steps short of perfect there that are great: what were the actual differences in this execution
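That weaker-but-useful version can be sketched: instead of computing behavioural change in general (undecidable), replay the inputs from one actual execution against both versions of the code and diff the results. The functions here are hypothetical stand-ins.

```python
def old_discount(price):
    return price * 0.9

def new_discount(price):
    # the edit being reviewed: bigger discount above 100
    return price * 0.9 if price < 100 else price * 0.8

def execution_diff(old_fn, new_fn, recorded_inputs):
    """Report only the recorded inputs whose behaviour actually changed."""
    diffs = []
    for args in recorded_inputs:
        before, after = old_fn(*args), new_fn(*args)
        if before != after:
            diffs.append((args, before, after))
    return diffs

recorded = [(50,), (99,), (100,), (250,)]
changed = execution_diff(old_discount, new_discount, recorded)
assert [args for args, _, _ in changed] == [(100,), (250,)]
```

No halting-problem heroics needed, because the claim is only about these inputs, not all inputs.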
g
hmm does anyone know of any systems that merge programming by example, testing, types and abstraction into one flow? i.e. 1. start with example data, 2. transform that specific data, 3. broaden the possible input data with more examples, 4. expand examples to a range of inputs, 5. expand possible transformations by treating a particular transformation as another example, so that you're constructing greater power as you go instead of starting with infinite possibilities and remembering to constrain them?
💡 1
more concretely (but very contrived): 1. start with [1,2,3,4]. 2. show that many duck pictures per row 3. try [0,4,1,2] 4. try for all numbers 5. pull out "show me ducks" so that it can be another transformation
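The contrived duck progression can be sketched as a sequence of widening steps; everything here is an invented illustration of that flow, not a real tool.

```python
# Steps 1-2: one concrete example — show that many ducks per row
example = [1, 2, 3, 4]

def show_ducks(counts):
    return ["duck" * n for n in counts]

assert show_ducks(example) == ["duck", "duckduck", "duckduckduck", "duckduckduckduck"]

# Step 3: a second concrete example broadens the input space
assert show_ducks([0, 4, 1, 2])[0] == ""

# Step 4: "for all numbers" becomes a stated constraint on inputs
def valid_input(counts):
    return all(isinstance(n, int) and n >= 0 for n in counts)

assert valid_input([0, 4, 1, 2])
assert not valid_input([-1])

# Step 5: pull out "show me ducks" so the picture itself is a parameter —
# the transformation becomes another example to generalize from
def show(picture, counts):
    return [picture * n for n in counts]

assert show("duck", example) == show_ducks(example)
```

Each step narrows or generalizes from something concrete, rather than starting from the infinite space of all programs.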