https://futureofcoding.org/
I initially resonated with the tweet, but I'm wondering if there's actually a fallacy hidden in the massive increase in the number of "programmers" since the 1960s, and the huge increase in the complexity of programs without any increase in the skill of the programmer (arguably a decrease in skill)
I think there's a "God of the gaps"/"AI is what we can't do yet" thing here. In 2022, "normal" technical people can go into Games Dev as a trade, in a few years be putting out computer programs like Call of Duty which are impossible to write in 60s PLs
In 2022
March and I'm still doing this 🙄. Can't believe nobody's rolled out a feature like Gmail's "You said 'see attached' but there's no attachment" yet for Q1 every year

Lu Wilson

03/04/2023, 10:52 AM
I think it's a funny one because as soon as a "non-programmer" starts making programs, I'd call them a "programmer". Can't imagine what a "non-programming programmer" actually is :) or perhaps a "programming non-programmer"

Srini K

03/04/2023, 11:00 PM
I cook lots of mediocre meals for myself in my little kitchen at home, but I don’t call myself a chef or a cook

Andrew F

03/05/2023, 1:12 AM
"A programmer" is probably best understood as referring to someone whose primary occupation and training is the creation of software. The question then becomes: how low can we drive the training requirements so that people mostly trained in/occupied with other stuff, say cooking, can produce software useful to them "on their own" (without having to pay someone for custom work, that is; let's not think too hard about an airtight definition). There's an even messier sense of "a programmer" which is roughly "person who writes things in text files that a computer interprets", which is possibly closer to what's meant in the "let non-programmers program" type of goal statement. However, this category becomes less meaningful if those goals are satisfied, for roughly the reason @Lu Wilson said, and it has little inherent connection with the first definition, except as a function of our current tools, which are still, in actual use, almost entirely about editing text files... mainly because successfully programming by editing text files takes a lot of training. Hmm.

Konrad Hinsen

03/05/2023, 9:03 AM
Reminds me of one of my all-time favorite articles: "Beyond Programming Languages" by Terry Winograd (1979). https://dl.acm.org/doi/10.1145/359131.359133 It's all about working productively with computers as part of some other activity, i.e. computers and software not being the focus of attention. And I doubt this will ever be achieved with a single idea or tool. It will require domain-specific approaches.

Lu Wilson

03/05/2023, 9:44 AM
A programmer is someone who makes a program. eg: A 6-year-old using Scratch. EDIT: Sorry I've just realised that a 6-year-old using Scratch isn't actually a programmer - because they're not writing any text files that the computer interprets. They're actually a non-programmer. A better example of a programmer would be a 6-year-old using Google Docs. EDIT: Sorry I'm just being facetious. But I do think that the definitions used around "programmer" are often more for gate-keeping or to elicit a reaction, rather than anything else - like in the referenced tweet :) The idea of what a "programmer" is has changed over time and will continue to change, and there'll always be some resistance to that. Just like with any sort of change

Joakim Ahnfelt-Rønne

03/05/2023, 9:10 PM
A lot of spreadsheet users already do a bit of programming in the form of formulas. I think that kind of programming is about to become even more commonplace with the latest improvements in AI.
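[Editor's note: a minimal sketch of the point above - a spreadsheet formula really is a small program. The cell values and the formula here are made up for illustration.]

```python
# A spreadsheet user writing =IF(AVERAGE(B2:B13) > 50, "pass", "fail")
# is composing functions over data -- i.e., programming.
# Hypothetical values standing in for cells B2:B13.
cells = [42, 58, 61, 47, 55, 60, 49, 52, 66, 44, 57, 63]

average = sum(cells) / len(cells)            # AVERAGE(B2:B13)
result = "pass" if average > 50 else "fail"  # IF(..., "pass", "fail")

print(result)  # → pass
```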

Scott Anderson

03/06/2023, 12:39 AM
In 2022, "normal" technical people can go into Games Dev as a trade, in a few years be putting out computer programs like Call of Duty which are impossible to write in 60s PLs
Although a normal technical person can go into game dev, Call of Duty is a terrible example of what is achievable with a few years of game programming knowledge. Also, Call of Duty is primarily written in a C-like dialect of ANSI C, not some high-level 4GL. Now, someone could download Unreal or Unity and make a first-person shooter that trivially resembles a poor reconstruction of Call of Duty. But that's like saying I can make Facebook or Twitter in a weekend with Node.js and React

Ibro

03/06/2023, 12:41 PM
I cook lots of mediocre meals for myself in my little kitchen at home, but I don’t call myself a chef or a cook
https://www.robinsloan.com/notes/home-cooked-app/ :) A particularly relevant point from this was that such home-cooked programs are possible now because of existing open-source software. Availability of tools also increases as the cost to make them drops.

Chris Knott

03/07/2023, 7:32 AM
@Scott Anderson Yes, I didn't mean to suggest one person could make a AAA game on their own, but that they could positively contribute to one (I know this because I learnt C++ as an intern at a games dev working on a AAA game!).

Dawa Sherpa

03/09/2023, 2:30 PM
I am beginning to believe that with AI, it might be possible to let non-programmers make software. Until now, most software was imperative and needed to be managed by a knowledgeable human. With what I am seeing with LLMs such as ChatGPT, we might be entering declarative software building, where we don't worry about the internals. In fact, I am going to experiment with whether I can create a few ideas I have in mind using a declarative approach. Would love it if anyone can share examples of such attempts already. A declarative approach with AI is, to me, significantly different from no-code platforms, because no-code platforms have pre-built lego pieces and will have limited configuration potential. But with an AI-based system, what if we can build the lego pieces to our description, on demand?
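[Editor's note: a toy sketch of the imperative/declarative distinction invoked above - the task ("even numbers, doubled") and names are invented for illustration.]

```python
numbers = [1, 2, 3, 4, 5, 6]

# Imperative: spell out *how* -- manage the loop and the result list yourself.
doubled_evens = []
for n in numbers:
    if n % 2 == 0:
        doubled_evens.append(n * 2)

# Declarative: state *what* you want; the machinery handles the internals.
declarative = [n * 2 for n in numbers if n % 2 == 0]

assert doubled_evens == declarative == [4, 8, 12]
```

The hope expressed in the message is that prompting an AI pushes this one level further: you describe the outcome in plain language and never see the loop at all.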

Ivan Lugo

03/09/2023, 7:07 PM
Elsewhere it has been mentioned (please credit whoever said this!) that this LLM stuff is a "universal coupler". You can ask for an amalgamation of any set of arbitrary things, and something will come back, which can be iterated on further. This means that, just like people got used to refining their search terms, they can learn to refine their requests to get what they want, and like you said, without needing to learn the syntax of the computer language to get it done. That's huge. I give it maybe a year or two, at most, until prompting for code becomes something on-premise, logged, reproducible, and shareable, just like any other media asset. Generating code and sharing the prompt + output will be the new wave of the .gist, and there will be a major shift towards those who can accurately describe a need over those who can accurately translate that need into a specific language. We current devs will still be relevant: we are trained and disciplined in specifying requirements and understanding what things to ask for in a program or algorithm. I'd be a very happy person if my skills gradually transition to helping others understand why their requests aren't getting what they want, and writing those prompt-requests for others, just like writing a software package.

Andrew F

03/09/2023, 7:15 PM
This property:
You can ask for an amalgamation of any set of arbitrary things, and something will come back, which can be iterated on more.
... does not imply what you think it does, IMO. It's arguably the biggest weakness of the current generation of LLMs.

Ivan Lugo

03/09/2023, 7:25 PM
Well, let me be a bit more descriptive to see if we're on the same page. When I talk to you, your brain is interpreting all the words I'm writing, filtering them through your personal perception, and then you respond. Sometimes directly with more written language, sometimes through another medium - body language, another form of media, etc. Even though I can ask an LLM for code that renders a purple elephant with a funny hat, it doesn't mean I'm going to get it. But whatever comes back as a result (unless it's a complete failure to recognize a sentence at all) is a close interpretation of the desire, just by the sake of the words in that particular order producing a set of related vectors. After that, though, it's up to the person to do something with it. So it's certainly not an oracle, but it's a great listener and is quite patient. And pragmatic, too.

For us, we can do stuff like code completion, fuzzy searching that may or may not be accurate (oops ;)), even writing whole articles about a topic. I gave the prompt box to a family member, and they immediately started using it like a search engine to understand the meaning and context of things (ham radio stuff in this case - very simple definition type of stuff). That same algorithm can take my "here's a code snippet, please make it draw a snippet" and then do something with it. I kinda consider that a universal coupler.

And you're right, though, insofar as all of this only works because everything is translated into word weights and relationship edges. It's up to the strength of the algorithms that define the relationships to deliver meaningful responses. Which kinda goes back to how people prompt it, and whether or not their style of speech approximates the global averages constructed by the model. This is all off the top of my head, because I'm really in lust with this stuff, and I'm procrastinating diving into the implementations.

These are my surface thoughts, which will likely be curtailed with more understanding, but I'll tell ya... even as a n00b, you can make these things say and do some crazy stuff.
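[Editor's note: a toy sketch of the "related vectors" idea mentioned above - words as vectors, relatedness as cosine similarity. The 3-d vectors are invented for the example; real LLM embeddings have hundreds or thousands of dimensions.]

```python
import math

# Made-up embedding vectors for three words.
embeddings = {
    "elephant": [0.9, 0.1, 0.3],
    "mammoth":  [0.8, 0.2, 0.4],
    "teapot":   [0.1, 0.9, 0.2],
}

def cosine(a, b):
    """Cosine similarity: 1.0 for same direction, ~0 for unrelated."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = lambda v: math.sqrt(sum(x * x for x in v))
    return dot / (norm(a) * norm(b))

# "elephant" sits closer to "mammoth" than to "teapot" in this toy space,
# which is the geometric sense in which a prompt's words pull in "related vectors".
assert cosine(embeddings["elephant"], embeddings["mammoth"]) > \
       cosine(embeddings["elephant"], embeddings["teapot"])
```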

Gabriel Grinberg

03/09/2023, 8:53 PM
@Dawa Sherpa I'm planning to add something like that to Flyde (https://www.flyde.dev) soon. I believe the mix of visual flow-based programming with the ability to create new nodes from textual prompts can be a perfect fit! Let's chat if you find this interesting 🙂

Andrew F

03/09/2023, 10:56 PM
The difference I'm getting at is that when given a question, I as a human can say "I don't know" or "that sounds like nonsense to me" if the connection between the things you're trying to "couple" is too tenuous. There are many cases where that's the only appropriate answer. An LLM is more or less incapable of giving it. And while you can say that people need to look critically at the output, this will absolutely not happen at scale, given an AI that works well enough most of the time to lull people into complacency. The problem will get really wicked if LLMs make another couple of jumps in ability without solving this basic problem; the BS that sneaks through will be subtler, take longer to detect in practice, and (partly for that reason) bite harder when it does.

Dawa Sherpa

03/09/2023, 11:20 PM
Yes, @Gabriel Grinberg would be interested to chat