# of-end-user-programming
d
So, I'm an EUP person at heart, and this ChatGPT thing has obviously got me thinking all over again about what programming would look like to a non-technical person. At heart, I feel it should be like they're "casting spells" over reality (or virtual reality). This tips into the area of cognitive modelling: how close does the physical manifestation need to be to the human's intention before it can be abstracted up to a satisfying cognitive model? In other words, if you cast a spell "make that banana green!" and it comes back a lurid dayglo green, that would be a cognitive dissonance, because really you'd expect to simply get a very unripe-looking banana. What are the elements of this formalised spell-casting, this "programming system"? You have objects (banana, this one, not all ones), attributes (green, the correct one!), and a sense of time or evolution (went from yellow to green). You start to get into Roget's Thesaurus land: what are the key concepts for describing the world, our human world? Anyway, just a splat of the stuff buzzing around my head right now. Thoughts?
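To make that trio concrete, here's a minimal sketch of objects, attributes and evolution as data. All names are invented for illustration; this isn't any real system:

```typescript
// Minimal sketch of the "spell-casting" elements; invented names, not a real system.

// An object is a specific thing ("that banana, this one, not all ones").
type EntityId = string;

// An attribute carries intent, not just a raw value: "green" here should
// mean "unripe-banana green", not dayglo.
interface Attribute {
  name: string;   // e.g. "colour"
  value: string;  // e.g. "green"
  sense: string;  // disambiguation, e.g. "as an unripe banana, not dayglo"
}

// A spell is a requested change to one specific object's attributes.
interface Spell {
  target: EntityId;
  change: Attribute;
}

// Time/evolution: keep a history of states, so "went from yellow to green"
// is a real, inspectable fact about the world.
interface History {
  entity: EntityId;
  states: { at: Date; attributes: Attribute[] }[];
}
```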
If you're telling ChatGPT what, say, "single-use app" you'd like to make, it's like a business person telling the techies what they want. For that, back in the 90s there was talk of requirements or specification languages. Not sure where the world is with those any more!
Some were graphical or visual, many with nodes-and-wires!
w
Lots of thoughts, as you can guess. The first, minor thought is that this kind of programming is a lot less like the arcane incantations of current systems and a lot more like explaining things to a person, with all the attendant difficulty there. The major thought is that, for nontrivial things, getting on the same page as another person is by no means easy. Does programming then become project management?
d
Exactly: you're trying to be unambiguous and reach a shared cognitive model. I'm wondering about the formalisation of that in some way.
What is the common "language of thought", the modelling formalism, between humans and between humans and ChatGPT?
Very few people have been in the position of being a project... well, not manager; maybe product analyst.
It's opened up a huge unexplored world. Projects just muddle by with good faith and the driver of business value. But Joe Normal wanting a single-use app is a whole new thing.
g
“... what are the key concepts for describing the world, our human world? ...” I would suspect that the key concepts are (1) pointing at something with a finger, (2) gazing at an object with eyeballs and conscious attention. I suspect that “make that banana green!” would be said as “make that green!”
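A toy sketch of that, with made-up names: resolving the deictic "that" from pointing or gaze before any spell is applied:

```typescript
// Toy sketch, made-up names: resolving the deictic "that" in
// "make that green!" from pointing or gaze.

type EntityId = string;

interface Deixis {
  kind: "finger" | "gaze";   // (1) pointing with a finger, (2) gazing with attention
  target: EntityId | null;   // what the pointing/gaze currently lands on
}

function resolveThat(cues: Deixis[]): EntityId {
  // Prefer an explicit finger point; fall back to gaze.
  const hit =
    cues.find(c => c.kind === "finger" && c.target !== null) ??
    cues.find(c => c.kind === "gaze" && c.target !== null);
  if (!hit || hit.target === null) throw new Error("no referent for 'that'");
  return hit.target;
}
```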
j
@Geoffrey Litt is thinking about this on Twitter https://twitter.com/geoffreylitt/status/1637592619269214209
k
Oh nice @jonathoda, looks like I just shared the blog version of that Twitter thread.
w
I'll comment there then...
k
I'd say there are two aspects that need to be addressed in translating a plain-language description of a program/app into an implementation:
1. Formalization, i.e. adding precision and dealing with corner cases.
2. Mapping it onto the capacities of an existing machine.
It's easy to imagine full automation of 2., although we don't quite have it. Will LLMs help with that, in a sufficiently reliable way to be useful? Time will tell. It's 1. that is really interesting. It requires adding information to the initial informal description. For common problems with known solutions, LLMs could fill that in. But will they be sufficiently aware of their own limits to say "I can't do this"? Time will tell.
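To illustrate what I mean by 1., a contrived example: the informal request "split the bill between some people" leaves corner cases open that the formal version has to decide. The function below is invented for this sketch:

```typescript
// Contrived illustration of formalization (aspect 1): the informal request
// "split the bill" forces decisions the plain-language version never made.

function splitBill(totalCents: number, people: number): number[] {
  // Corner case 1: what does "split among zero people" mean? We reject it.
  if (people <= 0) throw new Error("need at least one person");
  // Corner case 2: cents don't divide evenly; someone has to absorb the remainder.
  const base = Math.floor(totalCents / people);
  const remainder = totalCents % people;
  // The first `remainder` people each pay one extra cent.
  return Array.from({ length: people }, (_, i) => base + (i < remainder ? 1 : 0));
}

// splitBill(1000, 3) -> [334, 333, 333]
```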
d
Right... so for clarity, the question I'm pondering is about ChatGPT/LLMs that produce assembly code, or WASM, or whatever: the output can be 100% opaque. The conversation itself is the "programming" and the "language". So I think where I'm going with these thoughts is: is it enough to simply have a chat history iterating towards an English (etc.) language description of the program under evolution? How much formalisation and precision will be needed after all? Could it be that we end up iterating in natural language over a, I don't know, graphical representation?
So I think I'm settling on something like: you'll iterate with ChatGPT/LLM on a UI or spreadsheet-like data model, so yes, a visual/graphical "formalism" - in end-user terms, something to give visual structure to the evolving natlang convo. That will involve representing cognitive elements in a consistent way: you have basic values (basically, strings), then structures like lists and maps or object-like types, then some concept of change or evolution. You'd want these elemental concepts to be represented consistently between chat "app evolution" sessions. All shown as widgets or cells or tables or trees or even ... nodes and wires!
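A rough sketch of those elemental concepts as data (invented names, nothing real): basic values, lists/maps, and a change step that successive chat sessions could render consistently:

```typescript
// Rough sketch of the elemental concepts for the visual formalism:
// basic values, structures (lists and maps), and change over time.
// All names invented for illustration.

type Value =
  | { kind: "basic"; text: string }                    // basic values (basically, strings)
  | { kind: "list"; items: Value[] }                   // lists
  | { kind: "map"; entries: Record<string, Value> };   // maps / object-like types

// One step in the evolving app: what the user asked for, and the
// data model that resulted, so each chat session sees a consistent picture.
interface EvolutionStep {
  request: string;   // the natural-language turn, e.g. "add a due-date column"
  model: Value;      // the spreadsheet-like data model after that turn
}

type AppHistory = EvolutionStep[];
```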
Link to @Geoffrey Litt's blog article (apparently he's not been on here in a while): https://www.geoffreylitt.com/2023/03/25/llm-end-user-programming.html#opening-up-the-programming-bottleneck ... which is hitting the exact thought space my head is spinning within right now. I'm basically all shook up: I didn't expect useful AI programming so soon, or at all, in my lifetime.
k
Iterating towards formalization in a dialogue sounds just right for EUP. But is GPT-4 up to it? It requires a lot of short-term memory to hold the entire conversation. Plus a decent competence in formalization, which GPT-4 may or may not have picked up from its text corpus (which is the main unknown for all of us who don't work for OpenAI).
d
I bet the OpenAI folk are also baffled by how their creation thinks.
k
I suppose you are right. But they are the only ones who have a chance to relate observed behavior to the training corpus. How large a chance, I don't know!