# of-end-user-programming
k
@stevekrouse I'm curious what you think today of https://blog.val.town/end-programmer-programming after recent cataclysms. We all need to revisit assumptions, and I'm looking to crib ideas 😄
s
Appreciate the prompt. Just reread it to answer... The short answer is that EUP is still super far away, even with GPT-4. No non-programmers are using it in any serious way to make things, as far as I've seen. If someone, maybe Geoff or someone from this community, can build that (a tool that turns GPT-4 into an EUP programming environment), that would be awesome. It just hasn't been done yet. I'm also skeptical it is possible. Text isn't the right interface between end users and computation.
Call me old-fashioned, but I still think code is the right notation for thinking about computation, so learning it is still a bottleneck.
c
@Kartik Agaram Who wrote the above post, and when was it published? (Presuming it's Steve)
s
Lol yeah it was me. My bad for no clear author name
It was published Nov 8, 2022.
w
Nov. 8 seems like a long time ago now. Without external validation, a non-expert can't vet GPT-4 (especially) responses, because they seem pretty much right. A friend just shared a ChatGPT (GPT-4) dialog he had, full of accurate Wikipedia details mingled with reasonable inventions that ChatGPT had no problem justifying with convincing, though false, evidence. When it comes to facts, yes, this number-one problem facing these systems is being addressed. When it comes to models/programs, we have the exciting FoC challenge of having systems present/explain/represent their inventions so that end users can tell whether they are fit for purpose.
k
An app/tool that makes its runtime accessible might be easier for an LLM to modify, just as it's easier for humans to modify.
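A rough sketch of what "makes its runtime accessible" could mean here (all names are made up for illustration, not any real tool's API): live state and behavior are plain data that can be listed, read, and replaced while the program runs, with no build/deploy step in between.

```python
# Hypothetical sketch of an "accessible runtime". An LLM (or a person) can read
# describe() and then patch one handler in place, without rebuilding anything.

class AccessibleRuntime:
    def __init__(self):
        self.state = {}      # live values, inspectable by name
        self.handlers = {}   # live behavior, replaceable by name

    def describe(self):
        """Return a plain snapshot a human or an LLM could read."""
        return {
            "state": {k: repr(v) for k, v in self.state.items()},
            "handlers": list(self.handlers),
        }

    def redefine(self, name, source):
        """Swap in new behavior from source text while the program runs."""
        namespace = {}
        exec(source, namespace)   # sketch only: assumes trusted input
        self.handlers[name] = namespace[name]

rt = AccessibleRuntime()
rt.state["greeting_count"] = 0
rt.redefine("greet", "def greet(who):\n    return 'hello, ' + who")
print(rt.handlers["greet"]("world"))   # hello, world
print(rt.describe())                   # what a model would be shown before editing
```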
s
Yes yes a million times yes
But figuring out that relationship is the trillion dollar question
k
It doesn't seem that hard? Val Town might actually be a great start already. To rephrase my hypothesis: an accessible runtime is all an LLM needs.
k
Wondering if "accessible runtime" for an LLM means "runtime with plain-language interface documentation". That hypothesis should be easy to test.
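One way that test could look, as a sketch (the to-do functions and the prompt wording are invented for illustration): generate plain-language docs straight from the runtime's own code, hand them to the model together with a request, and check whether the reply is a correct call against that runtime.

```python
# Sketch of testing the hypothesis: plain-language interface docs + a request.
import inspect

def add_todo(text: str) -> None:
    """Add a to-do item with the given text to the user's list."""

def list_todos() -> list:
    """Return all to-do items, newest first."""

RUNTIME = [add_todo, list_todos]

def runtime_docs() -> str:
    """Plain-language interface documentation, generated from the code itself."""
    return "\n".join(
        f"- {f.__name__}{inspect.signature(f)}: {inspect.getdoc(f)}" for f in RUNTIME
    )

def build_prompt(request: str) -> str:
    return (
        "You can only use these functions:\n"
        f"{runtime_docs()}\n\n"
        f"Write the call(s) that accomplish: {request}"
    )

print(build_prompt("remind me to water the plants"))
# The test: send this prompt to GPT-4 and check whether a non-programmer
# gets back calls that actually run against RUNTIME.
```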
k
Based on the contrasts in the OP, I believe it's just the absence of a build/deploy step.
s
I think this Twitter thread gets at what I mean: ultimately, LLMs don't fundamentally change how non-programmers code, and I don't think they will anytime soon, because the underlying fundamentals are similar to how hard it is to get fully autonomous self-driving cars to work. The long tail is super long! That's not to say I don't think it's a game changer that should be heavily investigated in programming environments. I use them all the time personally. What I'm saying is that they make a programmer a superprogrammer, but they don't make an end user a programmer.
k
That is compelling.
j
I agree. I don’t see how current AI gets beyond “single serving” software. But of course I would believe that, because otherwise I’ve spent a good part of my life on a dead end. The truth is that no one knows. A lot of smart young people are pouring into AI now. And who knows what GPT-5, 6, … will be capable of? Interesting times …
p
@jonathoda maybe that's the thing: single-serving software takes over. Why learn a general tool when the AI can give you a customized tool each time?
k
Tailor-made software would actually be huge progress in many use cases. But unless it is very simple (e.g. a smartphone app for managing a shopping list) or absolutely uncritical in its details (e.g. a game), AI-made custom software raises the question of verification. Most of today's software gets verified continuously by lots of users. Complex software with a handful of users is unverifiable, which raises questions of reliability and then responsibility.
t
Both val.town and darklang want to improve the situation by removing the build/deploy phase. But I think getting rid of build/deploy is not the answer; getting rid of arbitrarily complex data structures is. bash => powershell => python => c: professional languages are too powerful, allowing arbitrarily complex abstraction. To make things easy, we need to make easy things. The requirement should be concrete, with a concrete solution, instead of trying to make building abstractions easy.
Think about the map data structure: map[some_var_key] = any_type_obj. There is no such thing in SQL. It is not possible to have a dynamic column name in an SQL UPDATE: https://stackoverflow.com/questions/12846743/dynamic-update-statement-with-variable-column-names
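A sketch of that contrast (the table, column names, and allow-list are invented for illustration): a map accepts any key decided at runtime, while SQL cannot bind a column name as a query parameter, so the identifier has to be spliced into the statement text.

```python
import sqlite3

# Map: the key is just a runtime value.
settings = {}
some_var_key = "theme"
settings[some_var_key] = {"dark": True}   # any_type_obj

# SQL: values are parameterizable, column names are not.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE prefs (id INTEGER PRIMARY KEY, theme TEXT, lang TEXT)")
conn.execute("INSERT INTO prefs (theme, lang) VALUES ('light', 'en')")

column = "theme"                          # decided at runtime
# conn.execute("UPDATE prefs SET ? = ?", (column, "dark"))   # not valid SQL
allowed = {"theme", "lang"}               # allow-list, so the spliced name is safe
if column in allowed:
    conn.execute(f"UPDATE prefs SET {column} = ?", ("dark",))
print(conn.execute("SELECT theme FROM prefs").fetchall())     # [('dark',)]
```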
What LLMs enable is low-cost production of disposable software. For the first time, we can shoot for cheap, concrete solutions instead of trying to build expensive but generic SaaS.
k
Indeed, LLM-generated code whose complexity is just above Alexa commands could become cheap and yet safe to use. ShellGPT (https://github.com/mattvr/ShellGPT) goes in that direction.