# thinking-together
j
Are there any languages with an explicit focus (and philosophy) around the intermediary steps between ideas and working code? Modern programming languages feel like I must, in part, produce completed structures of code rather than brainstorm and explore. Programming inevitably alternates code between working and not working, over and over, and we would benefit from languages that were more explicit about facilitating the 'not working' states. From my rough intuition, the 'not working' states of code outweigh the 'working' states in terms of time and importance (e.g. a compiler for code that is exploratory but does not yet work). The REPL, for such a simple idea, is praised so much (and so widely used) perhaps for this reason: it facilitates intermediate steps. This would be a rough parallel to how Bret Victor mentions that "ideas are important" to him. I looked at Exploratory Programming, but it doesn't seem to capture exactly what I want (i.e. it goes a bit too shallow). EDIT: Also relevant: Histogram and Type-driven Development

m
https://hazel.org/ has first-class concepts of errors (missing code, binding errors, conflicts, type errors)
👍 1
unison promises that your code is always in a valid state: https://www.unisonweb.org/docs/tour/
👍 3
dynamically typed languages support incomplete programs as long as you don't step into the incomplete code 🙂
in smalltalk there are people who fill in the code as they go in the debugger
☝️ 1
v
I think @Josh Cho is saying that during development you try out many ideas and finally get to working code. The working code is checked in to VCS, while all the steps toward that working code are lost, even though they might be the most valuable to learn from. Am I right?
👍 1
j
Yes @Vladimir Gordeev, not to get too philosophical, but humans always focus more on what is than on what is not (i.e. yin-yang). So we are bound to focus on what works, our successes, rather than what does not work, our failures, or our intermediaries. This kind of human bias/inevitability is bound to be present in computing, which is often too fast for its own good. Maybe we can step back a little and think about the in-between…? http://tomasp.net/histogram/ is the only instance where I have seen anyone think about this to a satisfying extent.
👍 1
@Mariano Guerra I am looking at Unison right now for the first time and it's very interesting. Scratch files as an alternative to a REPL might have some interesting ideas in this space.
s
"Livecoding" is about continuously modifying the code as it runs. This doesn't necessarily improve the experience with having non-working code, but it does let you work with partially-working code and "code as you go". The system I am working on (https://alive.s-ol.nu) takes an alternate approach to the common REPL-based livecoding that supports continuously changing FRP programs at runtime.
j
The psychological equivalent would be how long it takes for our nonverbal, implicit 'thoughts' or qualia to become verbal, explicit <thoughts>. We have intuitions before explicit thoughts, and modern psychology has focused so much on what can be observed (verbal thoughts, or even worse, behavior) that it has missed out on a large portion of our psyche. Intuition precedes thought, but we only observe thoughts because they are easy to observe.
@s-ol Yeah, live-coding or even something like Orca (which makes no real distinction between code and output) are also important in this space. But I think we can definitely go further: the idea is simple, but its adoption is surprisingly slow. I think this shows how people are willing to just withstand the discomfort of 'not seeing'.
w
I would separate methodologies that structure process from languages that enable (or make easier) aspects of process. Type-driven and test-driven development are both methodologies that focus on incrementally developing a program specification through types or tests. A language like Idris or Hazel supports type-driven development by having an explicit notion of holes. A language like Pyret supports test-driven development by allowing unit tests to be co-located with functions.
👍 1
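A rough TypeScript sketch of the typed-hole workflow described above. TypeScript has no first-class holes like Idris or Hazel, so the `todo` helper here is a hypothetical stand-in that lets the surrounding code type-check while the hole stays unfilled:

```typescript
// Hypothetical stand-in for a typed hole: the call type-checks as a T,
// but evaluating it at runtime throws.
function todo<T>(hint?: string): T {
  throw new Error(`hole not filled in: ${hint ?? "?"}`);
}

// Sketch the shape of the program first; fill the holes in later.
interface User {
  id: number;
  name: string;
}

function parseUser(json: string): User {
  const raw = JSON.parse(json);
  return {
    id: raw.id,
    // Still exploring how names should be normalized, so leave a hole.
    name: todo<string>("normalize the name field"),
  };
}
```

In Idris or Hazel the compiler itself tracks the hole and reports its expected type; in this sketch the hole only surfaces at runtime, which is exactly the gap that first-class support closes.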
i
There's also an entire discipline and ecosystem around non-functional or minimally-functional prototypes. There's paper prototyping, wireframes, modern HyperCard-like tools — hell, the UI prototyping group at Apple used to (maybe still does) use Keynote presentations for user testing because they're so much faster to make, and you can do a bunch of tricks to make them feel like a working app.
👍 1
c
was thinking about "comment-driven development" a while back and came up with this sketch: typically when i'm writing a complex series of transformations, i'll write out all the steps in individual comments and then "fill in the blanks" with code once i have an outline for the whole transformation. would be interesting to have the comments exist as "blocks" that could be expanded and contracted (and nested?) so you'd be able to zoom out and see the entire system at a glance without looking at the code
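A small TypeScript illustration of this comment-as-outline style; the pipeline and its steps are made up for the example:

```typescript
// Hypothetical data-cleaning pipeline, outlined as comments first,
// then filled in step by step.
function cleanRecords(rows: string[]): Record<string, string>[] {
  // 1. drop empty lines
  const nonEmpty = rows.filter((r) => r.trim().length > 0);

  // 2. split each line into key=value pairs
  const pairs = nonEmpty.map((r) =>
    r.split(";").map((kv) => kv.split("=") as [string, string])
  );

  // 3. validate the pairs against a schema (still just an outline step)

  // 4. build one object per line
  return pairs.map((p) => Object.fromEntries(p));
}
```

In plain code the outline is just text that can silently drift from the implementation, which is the objection raised further down the thread.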
d
Similar to this is gradual typing where you start "sloppy" and gradually add guarantees to your code. It's an area of active research (typescript being the most well known)
It all depends on your definition of working. We could easily make a language where every library/function had a default (noop) and only good states could be entered (like scratch). But does it really help you sketch out your idea?
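A minimal TypeScript sketch of that gradual-typing path: start "sloppy" with `any`, then tighten the annotations once the shape settles (the names are illustrative):

```typescript
// Pass 1: "sloppy" sketch -- everything is `any`, nothing is checked.
function totalSloppy(cart: any): any {
  return cart.items.reduce((sum: any, item: any) => sum + item.price, 0);
}

// Pass 2: the same function after gradually adding guarantees.
interface Item {
  name: string;
  price: number;
}
interface Cart {
  items: Item[];
}

function total(cart: Cart): number {
  return cart.items.reduce((sum, item) => sum + item.price, 0);
}
```

Only the second version lets the compiler catch a missing `price` or a string sneaking into the total; the first one still runs, which is sometimes exactly what you want while sketching.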
l
(@crabl I'm working on something like that! I imagine there is a gradient from detailed/machine-code implementation to abstract/natural-language description; Imagine if you're able to start out each "block" at any level, and then iteratively add details as needed. Combine that with an auto-complete that works more like google search than word completion, but context sensitive, and somewhat semi-structured such that some computer-processable "meaning" may be derived (declarative probably preferred); Then, the "comments"/most abstract/top-most (as in top-down) "notes" would "always" be "in sync" with the implementation, and you'd have something "useful" at each level of detail)
c
i've been using "outliner" apps for the last few months (OmniOutliner, specifically) to jot down thoughts and keep notes in a more structured way. what i've found is that the outline medium contributes greatly to the breadth of what i'm trying to express and allows me to either dive deeper to add detail or collapse elements to see the bigger picture. (which is tangential, but related to what @Don Abrams said about starting "sloppy" and adding guarantees). as far as what you're building @Leonard Pauli, it seems like the real utility there is in the comment blocks that you get "for free" as a result of building them first. a point of contention though: how do you make sure that when you change the code at a lower level, it will "line up" with the comment you wrote before? what happens when your mental model for how it should behave or be built differs substantially from how you actually end up implementing it? the nice part about using types as your guide, as Don alluded to, is that they are an intrinsic part of the code: if the types don't line up with the values you're passing in, your program won't compile. how can we make the same guarantees for natural language?
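For a concrete (illustrative) version of that last point, this is the kind of mismatch the compiler catches for types but nothing catches for a stale comment:

```typescript
interface Celsius {
  kind: "celsius";
  value: number;
}

// The comment can silently go stale; the type cannot.
// "takes a temperature in Fahrenheit"  <- outdated comment, no error
function describe(temp: Celsius): string {
  return `${temp.value} °C`;
}

// describe({ kind: "fahrenheit", value: 98 });
// ^ rejected at compile time: "fahrenheit" is not assignable to "celsius"
```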
d
Yeah, sadly they are usually "adjunctions" rather than "isomorphisms" (sorry for the vocab)
l
I suppose you could be confusing and, say, define "yes" as "false" on a lower level, if the editor doesn't already have a relation between those two "concepts", though the system would still be "consistent". For the "comments" to be in sync with the implementation, they have to be connected somehow. Ideally, the "comments" would be isomorphic to the actual implementation. One end, natural language/plain text; the other end, binary machine code. TypeScript types allow you to get closer to NL, though I want to get even closer. My current plan is to sacrifice NL free-form, thus gaining a realistic ability for structured representation of the "comment", that then "becomes" the implementation. Instead of trying to solve "complete NLP", the autosuggestion will (hypothesis) push you into using a parsable format. Thus, you'll get 80% of the way with 5% of the effort. As with types, the words you use will either have to be explained further, or linked to existing concepts, until you reach the base types. As with "io-monads are not necessary in a complete system", this might only work fully where everything is declarative. The outline is really a graph structure, where one node may exist in multiple places, and be aliased to fit the context/DSL. If you change the meaning of a concept, you'll see all its names, and possibly choose to change them (the name or the connection). Though this is not a sufficiently complete solution... A theory is that the "next big FoC paradigm shift" would require quite a lot of parts coming together and contributing natively to each other. Many ideas might be "interesting" on their own, but in isolation, they provide less value than the current status quo. I've often found it hard to communicate certain aspects of the system, as you would have to imagine all parts being there at once. (I saw a quote by Tim Berners-Lee, stating that he had to mask the web as a documentation system and build it for real, as stakeholders couldn't fathom it filled with all the sites we have today, and thereby didn't recognize its potential.) Though when all the components connect, you get something "many magnitudes better" than what we have today. Well, that's the idea at least :)
m
@Josh Cho extremely late to this but check out Mary Beth Kery's work like Variolite https://marybethkery.com/projects/Verdant/variolite-supporting-exploratory-programming.pdf
j
@Max Krieger Even later, but that is very cool. Sad that the implementation was in the now-defunct Atom. Following up with her to see if there are more developments.