thinking-together

    Nilesh Trivedi

    12/29/2022, 12:59 PM
    It seems to me that the whole construct of functions as primitives that take arguments and produce results leads to boilerplate and duplication because it needlessly privileges arguments as "independent variables". I'll use an example to elucidate: Take a rectangle of base
    b
    and height
    h
    . It's common to think of these as "independent" variables and define quantities like the following as "dependent": •
    perimeter(b, h) => 2*(b+h)
    •
    area(b, h) => b*h
    •
    diagonal(b, h) => sqrt(b^2+h^2)
    But if I were to ask, what are the base and height of a rectangle whose area is
    a
    and diagonal is
    d
    , our programming languages have no tooling to do this (except the symbolic manipulation libraries for computer algebra). All the required information is there but we have privileged
    b
    and
    h
    over
    area
    and
    diagonal
    and thus, we now need to figure out the formula for side lengths and program it as the function
    sides(a, d)
I should be able to describe a structure and auto-generate all possible functions (including curried and partially-applied forms and, while we're at it, all the partial derivatives with respect to each other) so that I can just declare what is known and what I want to calculate. A rectangle can then be represented with any set of variables that makes everything else determinable. I should get access to all possible constructors, like
    new Rectangle(area: a, diagonal: d)
. And I want this to be available for all programming tasks, not just algebra/math. For creating a graph, is the constructor
    new Graph(Node[], Edges[])
    really the privileged one? Why not build languages in a way that I automatically get
    new Graph(AdjacencyMatrix)
    and
    graph.getNodes()
    and
    graph.getEdges()
    .
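For contrast, here is the kind of hand-derived `sides(a, d)` the post argues we shouldn't have to write ourselves (the function name comes from the post; the derivation uses the identities (b+h)^2 = d^2 + 2a and (b-h)^2 = d^2 - 2a). A minimal sketch:

```python
# The hand-derived inverse the post complains we are forced to write:
# given area a = b*h and diagonal d = sqrt(b^2 + h^2),
#   (b + h)^2 = d^2 + 2a   and   (b - h)^2 = d^2 - 2a,
# so b and h fall out of the sum and difference.
import math

def sides(a: float, d: float) -> tuple[float, float]:
    s = math.sqrt(d * d + 2 * a)  # b + h
    t = math.sqrt(d * d - 2 * a)  # b - h (assumes d^2 >= 2a, i.e. a realizable rectangle)
    return (s + t) / 2, (s - t) / 2

b, h = sides(a=12, d=5)
print(b, h)  # 4.0 3.0
```

The point of the post is precisely that this derivation step should be automated away: given the constraints `area = b*h` and `diagonal^2 = b^2 + h^2`, a constraint-aware language (or a computer-algebra library) could generate `sides` mechanically.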

    Nick Smith

    12/31/2022, 7:20 AM
Traditional models of communication between devices, processes, and threads include message-passing, remote procedure calls, and shared memory. Here's a model I haven't seen before: shared game-playing. How it would work:
• The rules for a "game" of some kind are expressed as a code library.
• A set of "players" (processes or threads) express interest in playing the game with each other (somehow...).
• The players communicate with each other by interacting with the game (in accordance with its rules), and receive information about each other's actions by observing how the game state has changed.
The "game" could be an actual game like chess or Factorio (implemented via peer-to-peer communication), it could be a standardized protocol like HTTP or FTP, or (most commonly) it could be an application-specific protocol that would normally be implemented via message-passing or RPC. Imagine if this were the only model of communication that a programming language exposed. What if it were the "building block" of communication — the only way to build concurrent systems? I think it's an intriguing thought 🤔. I'm surprised I haven't heard this model proposed before. (This post was inspired by Syndicate, which is an actor-based PL that eschews message-passing and RPC for the idea of a "data-space" that actors use to exchange information. But unlike my proposal above, Syndicate's data-spaces don't contain rules, and thus cannot be used to model video games or communication protocols.)
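A minimal sketch of the idea, assuming nothing beyond the description above (the game, its rules, and all names are invented for illustration): the rules live in a shared game object, and players act and observe only through it.

```python
# Hypothetical sketch of "shared game-playing" as the communication primitive:
# players never message each other directly; they act on a rules-enforcing
# game and learn about each other by observing how its state changed.

class TakeawayGame:
    """Nim-style toy game: players alternate removing 1-3 sticks from a pile."""
    def __init__(self, sticks: int, players: list[str]):
        self.sticks = sticks
        self.players = players
        self.turn = 0  # index of the player who may move next

    def play(self, player: str, take: int) -> None:
        # The rules live in the game, not in the players.
        if player != self.players[self.turn]:
            raise ValueError("not your turn")
        if not 1 <= take <= min(3, self.sticks):
            raise ValueError("illegal move")
        self.sticks -= take
        self.turn = (self.turn + 1) % len(self.players)

    def observe(self) -> dict:
        # The only channel of information between players.
        return {"sticks": self.sticks, "to_move": self.players[self.turn]}

game = TakeawayGame(sticks=7, players=["alice", "bob"])
game.play("alice", 3)
print(game.observe())  # {'sticks': 4, 'to_move': 'bob'}
```

Swap `TakeawayGame` for an object whose "rules" encode HTTP request/response ordering and you get the protocol-as-game reading of the proposal.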

    Alex Cruise

    01/03/2023, 8:20 PM
    I wonder if any of you will have better luck than me tracking this down: https://twitter.com/atax1a/status/1610330977108385795

    Alex Cruise

    01/03/2023, 8:21 PM
    (warning: I’m usually pretty good at googling, and I’ve found nothing)

    Konrad Hinsen

    01/06/2023, 10:55 AM
Inspired by https://futureofcoding.slack.com/archives/C5U3SEW6A/p1672756060085649 and https://futureofcoding.slack.com/archives/C5U3SEW6A/p1672816690782479, plus my daily work with a Smalltalk system, I started thinking about high-level architectures of information processing systems.

Spreadsheets are two-layer systems, with a data grid on top and a code grid below it. That's a good architecture for dealing with heterogeneous grid-shaped data and shallow computation. For homogeneous grid-shaped data (arrays, data frames) you'd prefer to compute with the grid as a whole, and for complex/deep computation you want a more code-centric architecture. You can of course prepare the complex code in some other architecture and just call it from a spreadsheet: high-level architectures can be composed.

Dataflow graphs, of which Data Rabbit is an amazing implementation, have nodes containing code and data flowing through the edges. They can deal with irregularly shaped data, even messy data, but, like spreadsheets, they are limited to shallow computation.

A Smalltalk image is a code database built on top of an unstructured "object lake". It's great for dealing with complex code, but has no high-level structure for data: you can, and have to, roll your own. From this point of view, a Smalltalk image is the perfect complement to a spreadsheet or a dataflow graph, having opposite strengths and weaknesses.

So... are there more such high-level structures that have proven useful in practice? Is there just a small set whose elements can be combined, or should we expect a large number of unrelated architectures, each good for specific purposes? Note that I am thinking about "good for", not "applicable to". All Turing-complete systems are equivalent in principle, but that doesn't make them good tools for all purposes. My question is ultimately one of human-computer interaction.

    Steve Dekorte

    01/20/2023, 7:08 PM
    "Are there any languages with transactions as a first-class concept?" https://www.reddit.com/r/ProgrammingLanguages/comments/10gylhm/are_there_any_languages_with_transactions_as_a/ Would be interested to hear the thoughts of folks here on this thread.
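One way to make the question concrete: a toy sketch (not taken from any existing language; all names here are invented) of what a first-class transaction construct might feel like, with writes buffered inside the block and either committed atomically or discarded if the block fails.

```python
# Toy sketch of first-class transactions: writes inside the `with` block
# are buffered and either commit atomically or vanish if the block raises.
class TxStore:
    def __init__(self):
        self.data = {}

    class _Tx:
        def __init__(self, store):
            self.store, self.pending = store, {}
        def __setitem__(self, key, value):
            self.pending[key] = value  # buffered, not yet visible
        def __getitem__(self, key):
            return self.pending.get(key, self.store.data[key])
        def __enter__(self):
            return self
        def __exit__(self, exc_type, exc, tb):
            if exc_type is None:
                self.store.data.update(self.pending)  # atomic commit
            return False  # on error: discard pending writes, re-raise

    def transaction(self):
        return TxStore._Tx(self)

store = TxStore()
with store.transaction() as tx:
    tx["balance"] = 100        # commits

try:
    with store.transaction() as tx:
        tx["balance"] = -1     # rolled back
        raise RuntimeError("validation failed")
except RuntimeError:
    pass

print(store.data)  # {'balance': 100}
```

Languages with real STM (e.g. Haskell's `STM` monad, Clojure's refs) go much further than this sketch, handling concurrent readers/writers and retry; the linked thread discusses several.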

    guitarvydas

    01/22/2023, 2:06 PM
# Summary 2022

For me, 2022 was:
1. 0D
2. transpiler pipelines

Explanations below. There is nothing “new” here. I believe that our crop of programming languages subtly discourages certain kinds of thoughts. You can do these things with our programming languages, but you don’t bother.

[I wrote this on Jan 1/2023. Then I promptly got sick and found new ways to procrastinate. I will gladly remove this if it is inappropriate or too long...]

# TL;DR

## 0D
• 0D is part of traditional parallelism (zero-dependency, total decoupling)
• breaking 0D away from parallelism enables other uses
• 0D uses FIFOs, whereas functions use LIFOs (LIFOs are used by most modern programming languages, Python, Rust, etc., and stifle possible solutions)

## Transpiler Pipelines
• “skip over” uninteresting bits of syntax, whereas CFGs require a full language specification
• leads to a different class of tools: parsers used for “quickie” matches instead of for building compilers; a different way of using parser DSLs; like mathematical manipulation of notation
• “skipping over” bits of syntax allows syntactic composition; syntactic composition enables pipelines

# 0D

0D is shorthand for the phrase "zero dependency": total decoupling. Programmers already know how to write 0D code, but they tangle this simple concept up with other concepts and call the result “parallelism”. At a very basic level, you can achieve 0D by using FIFOs instead of LIFOs (queues vs. stacks). LIFOs (callstacks) are good for expressing synchronous code; they are less good for expressing asynchronous code. Programmers often conflate nested, recursive functions with the notion of pipelines. If a component sends itself a message, the message is queued up in FIFO order and there is a “delay” before the message is processed, whereas if a component recursively calls itself, the function parameters are pushed onto a stack and the processing happens immediately, in LIFO order.
This subtle difference in processing sequence manifests itself in design differences. For example, in electronics, where all components are asynchronous by default, you often see the use of “negative feedback”, say in op-amp designs. You rarely see this technique used in software design. In electronics, negative feedback is used by components to self-regulate, whereas in software, recursion is used as a form of divide and conquer. Feedback loops make it possible to be explicit about software design, whereas recursion hides the key element of the design: the callstack.

EEs had this issue sussed out before the advent of the “everything must be synchronized” mentality. All components in an electronic circuit are asynchronous by default. Synchrony is judiciously, explicitly designed in through the use of protocols, on an as-needed basis; it is not designed in everywhere by default. There is a reason, a subtle one, why it is easy to draw diagrams of computer networks and not so easy to draw diagrams of synchronous code. In EE designs, concurrency is so cheap that you can’t help but use it. In software, concurrency implies difficulty, and designers end up avoiding concurrency in their designs.

This subtle difference has a trickle-down effect on end-user code. When it is difficult to draw diagrams of programs and to snap components together, programmers tend not to provide such features to end-users. Or, when they provide such features, they implement them under duress. If DaS and snappable components were abundantly available, such features would naturally leak through to end-user apps.

0D can be implemented a lot more efficiently than by using operating system processes and IPCs. Most modern programming languages support closures (anonymous functions) and make it easy to build queue data structures.
Stick one queue at the front of a closure (the “input queue”) and one queue at the tail (the “output queue”), and you get 0D. Then you need to write a wrapper component that routes “messages” from the output queue of one closure to the input queue of another. Can this concept be generalized? This ain’t rocket science. When you build 0D software components, does the order of operation of components matter? Nope. Can a 0D component create more than one result during its operation? Yep. Can a 0D component directly refer to another 0D component? Nope. The best you can do is compose networks of 0D components inside of routing wrappers.

# Transpiler Pipelines

It would be nice to build up solutions using pipelines of many little solutions and syntaxes made expressly for those solutions. What do you need to be able to do this?
1) Grammars that are very, very small and that allow you to “ignore” bits of syntax that don’t pertain to a problem, kind of like REGEX, but better.
2) Total isolation of building blocks.

## Very Small Grammars That Ignore Uninteresting Items

Ohm-JS, a derivative of PEG (Parsing Expression Grammars), makes it possible to write grammars that skip over uninteresting bits of text. For example, if you want to write a quickie parser for C code, you might want to say:
    ... function-name (...) {...}
In Ohm-JS, you can say this, whereas in a CFG-based parser generator you need to over-specify all of the niggly bits of C syntax. In Ohm-JS, this results in a few minutes of work and only a few lines of code. The Ohm-Editor assists in developing the micro-grammar. In YACC and CFG-based approaches, though, you’re looking at a gargantuan job (days, weeks, months, ...) and you simply don’t bother to write such a quickie parser. You either don’t bother with the whole idea, or you use something like REGEX, which fails on a number of edge cases for this kind of thing. REGEX can’t search recursively for matching brackets; Ohm-JS can. Using REGEX, you might get away with a partial solution, or the project might grow larger as you hit unexpected speed bumps. You either persevere or you just give up. For the record, the grammar plus the accompanying code-fabricator specification for the above simple example are shown in the appendix.

### DaS Comes For Free

When you can build totally isolated building blocks, you can draw sensible diagrams of how the building blocks should be snapped together to solve a problem. Later, you can steal (cut/copy/paste) chunks of previous solutions and use them as building blocks for new problems. DaS: Diagrams as Syntax. DaS is not diagrams as an art form. DaS is diagrams as programming languages. For example, instead of writing
    {...}
, you draw a rectangle. Programming languages were created by culling the English language and choosing only the words and phrases that could be compiled to executable code. Can we cull diagrams in the same way to invent new programming languages? EEs have done this, and they call the resulting diagrams “schematics”. Building-construction engineers have done this and call the resulting diagrams “blueprints”.

## Don’t We Already Use Building Blocks?

“Code libraries” look like building blocks but contain subtle bits of coupling that discourage building-block-iness. For example, the very common idiom of a function call
    f(x)
    introduces at least 3 kinds of coupling: 1. The name
    f
    is hard-wired into the caller’s code. The calling code cannot be cut/copy/pasted into some other solution without also dragging in the called code, or, by futzing with the source code. 2. The function call
    f(x)
    waits for the callee to return a value. This is also known as blocking. Function call notation works fine on paper, where functions can be evaluated instantaneously. It’s different when you map function call syntax onto hardware that has propagation delays wherein functions take finite amounts of time to “run”. This subtle difference in behaviour leads to hidden gotchas. A glaring example of the impact of such a difference can be seen in the Mars Pathfinder disaster[^pathfinder]. 3. The function return
    v = f(x)
hard-wires a routing decision into the callee’s code. The callee must direct its response back to the caller. This is called “returning a value”. Again, this doesn’t look like a problem when you just want to build fancier calculators, but this hard-wired routing decision discourages simple solutions to non-calculator problems, like machine control.

[^pathfinder]: https://www.rapitasystems.com/blog/what-really-happened-software-mars-pathfinder-spacecraft

When you don’t have complete isolation, you don’t have building blocks. Imagine a LEGO® set where all the pieces are joined together with a single, long sewing thread glued to each LEGO® block. Or, you have two real-world objects, e.g. one apple and one orange. You cut the apple in half. What happens to the orange? As humans, we are used to the idea that objects are completely isolated. Programs don’t work that way. We have to stop and think hard when writing programs.
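The 0D recipe described earlier (a closure with an input FIFO and an output FIFO, plus a routing wrapper that moves messages between queues) can be sketched in a few lines. This is only a hedged illustration of that recipe, with all names invented:

```python
# Sketch of 0D components: each component is a closure with an input FIFO
# and an output FIFO; a routing wrapper moves "messages" between queues.
# Components never name each other, so they stay totally decoupled.
from collections import deque

def make_component(transform):
    inq, outq = deque(), deque()
    def step():
        if inq:  # consume one message in FIFO order, emit a result
            outq.append(transform(inq.popleft()))
    return inq, outq, step

# Two 0D components...
double_in, double_out, double_step = make_component(lambda x: x * 2)
incr_in, incr_out, incr_step = make_component(lambda x: x + 1)

# ...composed only by a routing wrapper: double's output feeds incr's input.
def route():
    while double_out:
        incr_in.append(double_out.popleft())

double_in.extend([1, 2, 3])
for _ in range(3):
    double_step()
    route()
    incr_step()
print(list(incr_out))  # [3, 5, 7]
```

Note that neither component refers to the other; only `route`, the wrapper, knows the wiring, which is what makes the pieces cut/copy/paste-able.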

    Nick Arner

    01/25/2023, 12:44 AM
    Does anyone have any resources they’d recommend for learning about Rust/WASM?

    Jonas

    01/28/2023, 6:51 PM
    Quick question: is there an RSS feed for the Future of Coding newsletter?

    Ivan Reese

    01/28/2023, 9:43 PM
    ^ The exchange in that thread is why @Mariano Guerra is the GOAT

    Steve Dekorte

    02/01/2023, 8:38 PM
    Should a good tool/framework/language not only "make easy things easy, and hard things possible", but also (generally) make patterns effortless, and anti-patterns painful?

    Mariano Guerra

    02/03/2023, 3:47 PM
    What are some great ways to present code in a book/docs? Especially when the code is growing, being modified, etc.? https://twitter.com/dubroy/status/1621533688159768577

    Jim Meyer

    02/09/2023, 11:34 AM
    Code is a weird medium. It can act directly upon the world at scale. The only other "things" that can do that are the fundamental forces of the universe. Code is essentially a kind of Jinn/Genie: Agency in a bottle. Jinn also translates to "beings that are concealed from the senses" [1]. Invisible beings that control our world and battle for our attention. Sounds about right 😁

    Kalvin

    02/16/2023, 1:04 AM
    Does anybody have any examples of version controlled projects that use visual programming?

    Ibro

    02/20/2023, 12:41 PM
    I’m curious “where” people think of visual in visual programming being. For context, I spend a lot of time in tools like Houdini, Solidworks, Cavalry, and After Effects. Some of them have more access to computation than others, but the biggest difference between those and Processing or threejs is a large “standard library” of functions. On the other hand, building a website with live feedback or scripting in a REPL seem like a very different experience from just writing the same code in notepad. I wonder if visual programming is all just “debug views” rather than the specific presence of a GUI. And if so, what does that mean for generalized visual languages or environments?

    Oleksandr Kryvonos

    02/21/2023, 10:20 AM
I am not sure, but this might be a thing: in order to reduce scrolling through files, I try to keep each function in its own separate file (so I have over a hundred files so far), and I wrote some simple code that copies the content of the function into the body of an HTML page and adds some template text like <script> tags, etc. I try to find the most minimal set of tools, in the spirit of a “bicycle for the mind”: you don’t need a complex solution like an aircraft carrier, but rather a bicycle.

    Eli Mellen

    02/22/2023, 2:25 PM
    Does anyone have future of code flavored papers by folks who aren’t white dudes? I’ve been pulling together a reading list for an engineering reading group at work, and would like to make sure it’s at least a 50/50 split.

    Jarno Montonen

    02/22/2023, 2:43 PM
I'm in need of two solutions: 1. Generating a language model out of an ANTLR grammar, preferably in C#. 2. Printing an AST back to text according to an ANTLR grammar. I found this https://github.com/miho/VMF-Text, but is there anything else?

    Jarno Montonen

    02/23/2023, 12:32 PM
Another thing I'd be interested in is solutions for bidirectional text transformations (for source code): ideally a system in which you could define transformations once and get both A-to-B and B-to-A transformers.

    Jason Morris

    02/28/2023, 6:11 AM
    What do you do to motivate yourself to update documentation? I have so much of it that it is now a significant undertaking to keep it up to date with the tool, and I am losing interest. I'm trying to figure out whether I should just tag most of it as out-of-date and come back to it later when there are enough users to justify it...

    Niall McCormack

    03/01/2023, 9:03 PM
What's the general consensus on node-based scripting? I'm intrigued by Unreal's Blueprint node-based scripting tools: they seem easy to use, but if you want to do anything complex then (for me) it becomes very messy very quickly. However, with the general move in the past 10 years or so to more functional programming and serverless etc., perhaps it makes sense. Small components that can be wired together visually feel easy, or right? Darklang is another example, which abstracts away the complexities of the underlying system, allowing you to just write some pseudo-node-based (at least when I last looked at it) components that are easily wired up together. I'm an iOS engineer by trade, and it feels like something like Darklang / node-based coding could end up matching nicely with SwiftUI's declarative syntax for UI.

    guitarvydas

    03/03/2023, 7:50 PM
    Multi-single-tasking: Brainstorming, half-baked... I would have ignored Ceptre in the past. It claims to be a language for writing games. The very idea makes me yawn. But, one of the guys at the Torlisp monthly meetup is deeply into robotics and Scheme and another guy, in the film industry, uses Racket for hobbying in game programming. My own interest is in concurrency and simplicity and compiler-writing. These fields are all related. Watching the 2015 Strangeloop presentation about Ceptre piqued my interest. Ceptre is logic programming, but with a twist - it has a built-in notion of explicit ordering. I thought that I could knock off a better game language using my diagrams of state machines. I continued to learn about Ceptre. Aside: Ohm-JS has built-in explicit ordering and is “not” context-free. I have to wonder if Ceptre is to generalized formalism as PEG (Ohm-JS) is to context-free grammar formalisms. Dunno yet. FYI, I watched the Ceptre talk. I then read the paper and now am reading the thesis. And in the background (foreground?) I am trying to convert Dungeon Crawler (.ceptre) into PROLOG. I think that Ceptre can be simplified down to a small handful of primitives which are easy to express in PROLOG or Lisp or JS or ..., but they are not the first thing that you think of when programming PROLOG. From there, of course, I would expect to generate code for Dungeon Crawler in Lisp and JS and Python and … In the back of my mind is the question “Is This Steam Engine Time?” (Paul Morrison). Are we seeing a shift away from single-threaded languages (Python, JS, Rust, Haskell, lambda calculus, etc.) to ???. Certainly, hardware in 2022++ is drastically different from hardware in 1950 and we should be finding better ways to cope with this New Reality (“The Great Reset in Computing”)… FYI: The drastic difference in hardware is the reality that we now have cheap CPUs and cheap memory. Both of these notions were completely unimaginable in 1950. 
Instead of crushing our hardware with bloatware like Linux, we can simply throw Raspberry Pis at a problem, each running single-threaded programs. There is no need to fake out multitasking anymore. Multicore is just a clumsy way to bridge across the two drastic realities, i.e. to force-fit 1-CPU programming languages onto many-CPU programming. In fact, we shouldn’t even call CPUs “CPUs” anymore, since there’s nothing Central about them. Early adopters of 1950s computing built games. Maybe early adopters of 2022++ computing will build new kinds of games with 1,000s of PUs, for example 1 processor for each player and for each NPC. Ceptre:

https://www.youtube.com/watch?v=bFeJZRdhKc

https://futureofcoding.slack.com/archives/C5U3SEW6A/p1674614304225439 Call/Return https://futureofcoding.slack.com/archives/C5T9GPWFL/p1675094970899729?thread_ts=1674396396.762359&cid=C5T9GPWFL I have not investigated this, but it, too, appears to be barking up the same tree:

https://www.youtube.com/watch?v=5YjsSDDWFDY&list=PLcGKfGEEONaDO2dvGEdodnqG5cSnZ96W1&index=28

    FBP (Flow Based Programming) https://jpaulm.github.io/fbp/

    Jared Forsyth

    03/06/2023, 6:54 PM
    Do y'all know of any editors with undo/redo behavior that's more interesting/granular than just scrubbing through all of the edits you've done to a file in order? I often find my self wanting "undo the last change to *this function*" 🤔
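As a sketch of what such function-granular undo might look like under the hood (a toy model; all names here are invented): tag each edit with the function it touched, and let undo filter the history by that tag.

```python
# Toy model of function-scoped undo: each edit records which function it
# touched, so "undo the last change to *this* function" can skip over
# unrelated edits instead of scrubbing linearly through the whole history.
class EditHistory:
    def __init__(self):
        self.edits = []  # (function_name, previous_body)

    def record(self, fn, previous_body):
        self.edits.append((fn, previous_body))

    def undo_in(self, fn, doc):
        # Walk backwards and revert only the most recent edit to `fn`.
        for i in range(len(self.edits) - 1, -1, -1):
            if self.edits[i][0] == fn:
                doc[fn] = self.edits[i][1]
                del self.edits[i]
                return

history = EditHistory()
doc = {"a": "return 1", "b": "return 2"}  # function name -> body

def edit(fn, new_body):
    history.record(fn, doc[fn])
    doc[fn] = new_body

edit("a", "return 10")
edit("b", "return 20")     # plain linear undo would revert this edit first
history.undo_in("a", doc)  # function-scoped undo reverts a's change directly
print(doc)  # {'a': 'return 1', 'b': 'return 20'}
```

A real editor would key edits by text ranges and re-map them as the file changes, but the filtering idea is the same.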

    Ivan Lugo

    03/16/2023, 2:05 PM
    This is an incredibly thinky group of folks. I’m wondering how, if at all, this little community has been using these LLMs and advanced chat bots. People are playing Pokemon in a text/CLI form just by asking “let’s play pokemon but with text and skip all the boring parts”*. I have to conclude that a number of you folks have made some crazy strides in the work you’ve been doing or how you’ve been refining your ideas with these tools.

    Mariano Guerra

    03/18/2023, 12:11 PM
    If you are near London you can think together in person! https://twitter.com/Mappletons/status/1637042911228440580

    Mariano Guerra

    03/21/2023, 9:14 AM
    Is there a "Grammar of Data Schemas/Constraints" similar to "Grammar of Graphics"? Any schema definition language you find interesting?

    wtaysom

    03/24/2023, 7:17 AM
Friends, I don't know what to make of developments in AI these days. Having worked on dialog systems in the aughts and having loosely followed developments since (I recall preparing a talk around 2010 which left me pretty enthusiastic about ML applications, in contrast to the App-and-Facebookification of "tech" — that was on a time horizon of a few years, which ended up being a decade plus), every day I check in on Twitter I see more exciting stuff than I can possibly process. I was just writing someone yesterday about how in six months' time we'll have LLMs acting as the front-end to knowledge bases and rigorous computational systems, and then we'll need to focus on getting the human, AI, and formal model all on the same page. As has already been noted in #linking-together today, my estimate was off by roughly six months. Consider: "I've developed a lot of plugin systems, and the OpenAI ChatGPT plugin interface might be the damn craziest and most impressive approach I've ever seen in computing in my entire life. For those who aren't aware: you write an OpenAPI manifest for your API, use human language descriptions for everything, and that's it. You let the model figure out how to auth, chain calls, process data in between, format it for viewing, etc. There's absolutely zero glue code" https://twitter.com/mitchellh/status/1638967450510458882. If you can tolerate his prose, Stephen Wolfram has a long post https://writings.stephenwolfram.com/2023/03/chatgpt-gets-its-wolfram-superpowers/. The "Wolfram Language as the Language for Human-AI Collaboration" section is most relevant to Future of Coding. What do these developments mean for the Future of Coding? And how are you all holding up? Me? I can hardly process what's happening, let alone what to do about it.

    Nick Smith

    03/27/2023, 4:51 AM
    Here's my perspective on LLMs and the future of programming. I don't believe that the introduction of LLMs that can write code is going to obviate programming. And I don't believe that it is now pointless to develop new programming languages. Instead, I think LLMs are going to make programming and FoC research better, by automating one of the least interesting parts of programming: fiddling with the minutiae of syntax, language constructs, and libraries. I think programmers will still have plenty of work to do. The profession is not doomed. But to justify this, we have to take a step back and consider all of the activities involved in programming. Firstly, what is a "program"? A program is nothing more than: • A formal specification of the behaviour of an interactive system • ...that computer hardware can execute (after translating it into machine code). To emphasise this, I will use the term "formal spec" in place of "program" for the remainder of this discussion. GPT-4 can understand formal specs, and also everyday English. Thus, if we can describe the functionality of a system in everyday English, GPT-4 can (attempt to) translate it into a formal spec. But writing the formal spec is just one activity of programming. Altogether, programming (or perhaps "software development") involves several activities: 1. Determining what functionality the system being developed "should" have. This is done either by talking with relevant stakeholders (e.g. the future users), or by directly observing deficiencies with their current practices. 2. Expressing that functionality as a formal specification, i.e. "coding". 3. Verifying that the specification correctly implements all of the functionality of step 1. This includes practices such as reading and reviewing the specification, as well as testing the software. 4. Validating that the implemented functionality addresses the stakeholder's problems. 5. 
Repeating the first 4 steps until the stakeholders are satisfied with what has been developed. Here's my hypothesis: In the next 10 years, LLMs might radically reduce the amount of work required for step 2, but only step 2. Steps 1 and 4 are very human-centered, and thus can't be automated away — at least until we are at the point where we have an omnipresent AGI that observes all human practices and automatically develops solutions to improve them. Similarly, step 3 will not be automated any time soon, because: • The plain English descriptions that we give to LLMs will often be ambiguous, underspecified, and maybe even inconsistent. Thus the LLMs will have to make educated guesses at what we mean. (Even if they are able to ask clarifying questions, there will always be some choices that are automatically made for us.) • LLMs will occasionally get confused or misinterpret what we say, even if we are clear and careful. We will not have infallible AIs any time soon. So let's assume that LLMs can automate most of step 2. What does this mean for those of us developing tools and technologies to improve programming? Is our work obsolete now? Will the AI researchers and AI startups be taking the reins? I don't think so! There is still a huge opportunity to develop tools that address step 3, at the very least. (Steps 1 and 4 are harder to address with technology.) In particular, step 3 involves the task of reading source code. When an LLM spits out 1000 lines of JavaScript, how do you know that the code implements the functionality that you wanted? You have to verify that it does, and for large programs, that will be an enormous amount of work! As we all know, no amount of testing can prove that a program is correct. Thus, we cannot verify AI-generated programs just by using them. Maybe the program has a subtle bug, such as a buffer overflow, that might only be triggered 5 years after the program is deployed.
Or less insidiously: maybe the program just doesn't handle certain edge-cases in the way you would like it to. Either way, a human should probably read through the entire program with a keen eye, to check that all of the logic makes sense. There's clearly an opportunity for FoC researchers here: we can make languages and tools that make reading and verifying the behaviour of programs easier! Some examples: • We can design programming languages that are vastly easier to read than traditional languages. How might we do that? Well, "higher-level" languages are likely easier to read, since they are likely to be more concise and focus on the end-user functionality. So work on higher-level programming models will continue to be valuable. To complement this, we can (and IMO, we should) invent new syntaxes that are closer to plain English, such that the specifications that LLMs produce are accessible to a wider audience. • We can design programming languages where it is harder to write erroneous programs. For example, we can design programming languages that cannot crash or hang (i.e. Turing-incomplete languages), but which are still general-purpose. This reduces the kinds of errors that a human needs to consider as they verify a program. • We can design better tools for reading and interrogating source code. (For example, better IDE support for navigating and understanding the structure of large codebases.) • We can design better tools for exploring the space of behaviours of a running program. (Perhaps similar to the tools discussed in Bret Victor's "Ladder of Abstraction" essay.) Overall, I think the future is bright! I'm going to continue my own PL research project (a very high-level language) with as much vigor as ever.

    Jarno Montonen

    03/27/2023, 7:30 AM
On the heels of the "LLMs and the future of programming" discussion (https://futureofcoding.slack.com/archives/C5T9GPWFL/p1679642239661619, https://futureofcoding.slack.com/archives/C5T9GPWFL/p1679892669316079), I'd like to start a more concentrated discussion around their effect on Future of Coding projects. There was already some sentiment that LLMs are going to kill FoC projects. Some, yes, but certainly not all. So what kinds of FoC projects will LLMs not kill?

    Ibro

    03/27/2023, 2:32 PM
    Has anyone tried to square “computing is a metamedium” (able to simulate all other mediums) with “medium is the message” school? I can think of a number of places where existing mediums can’t be emulated by a computer. But curious where people might say that’s an inherent limit vs a stage of expression computers have not yet reached (I think probably a mix of both). Is thinking there’s a limit “bad” if you are interested in the Future of Coding or does it help in some way with maximizing on strengths?