# devlog-together

Jason Morris

05/12/2023, 4:11 PM
Had an interesting experience this week that I'd like to share, about the effect of changing the visual context of my tool... 🧵
By way of background, I'm building a tool that is a) a block-based declarative logic programming environment for statutory knowledge representation, and b) some basic expert-system-flavoured interfaces for interacting with that code in a more friendly way. I'm doing the work for the Government of Canada, and they are using it for experiments in something called "Rules as Code", which is basically "hey, how could government work better if we had a data format for laws that allowed computers to understand what they mean?" </background>
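For a flavour of what that kind of encoding can look like, here's a toy sketch in Python. The provision, the predicate names, and the little evaluator are all invented for illustration; this is not Blawx's actual notation (Blawx is block-based and compiles to a logic programming language), just the general shape of "rules as data, facts as input, conclusions derived".

```python
# Toy "Rules as Code" sketch. Hypothetical provision: "A person qualifies
# for the benefit if the person is a resident and is at least 65 years of age."
# Everything here (names, rule structure, evaluator) is illustrative only.

RULES = [
    # (conclusion, list of conditions), each a propositional predicate name
    ("qualifies_for_benefit", ["is_resident", "is_senior"]),
    ("is_senior", ["age_at_least_65"]),
]

def derive(facts: set) -> set:
    """Forward-chain over RULES until no new conclusions can be drawn."""
    known = set(facts)
    changed = True
    while changed:
        changed = False
        for conclusion, conditions in RULES:
            if conclusion not in known and all(c in known for c in conditions):
                known.add(conclusion)
                changed = True
    return known

# Facts a user might enter through the friendly, expert-system style interface:
facts = {"is_resident", "age_at_least_65"}
print("qualifies_for_benefit" in derive(facts))  # True
```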
I'm not a UI person, but last week I went to the trouble of coming up with a basic template for the tool that actually resizes correctly, etc. Basically from "barely functional" to "almost decent-looking". I needed to re-learn Bootstrap and Django templates from scratch, but I got it done. Doing that gave me the knowledge I needed to do something I have been wanting to do for a while, which is to take the friendly user-interface part of Blawx and wrap it in a template to make it look like a Government of Canada website. Meaningless from a technology perspective, but potentially super-persuasive as a tool for helping people inside GC gain an intuition for what the tool can do.
Here's what it looks like.
Here's what I noticed that sort of surprised me. The very moment I saw it, my mind went to a completely different category of problems. I started noticing the things that were missing that made it less compelling in this context.
Where most of the last month and a half has been spent worrying about how to get the event reasoning system to treat certain predicates in a closed-world fashion, suddenly I was very concerned that there was no list of caveats, no "don't use this for real, dog" warning, no set of instructions for how to use the fact-editing interface, etc. All of those are things I might have noticed about the tool when it was wrapped in my own template, but they occurred to me only occasionally as future nice-to-haves. In this context, they seemed like absolute must-do-now shit.
I still haven't decided if that instinct is right or wrong, but that's not the thing I'm curious about. I'm curious about whether this idea, that the context can change what you notice about your tool and what seems important, points to a way to use context deliberately to help control what you notice.
I'd love to know if you have experienced this sort of thing, or if it's just my weird brain, and in particular if you have ever specifically chosen or avoided contexts for your work so as to obtain or avoid certain kinds of insights.
Like now, I'm wondering if I should ALWAYS have been doing it this way, because it focuses me on better things, and/or it makes the result more approachable to my users.
(or, if I should be avoiding it so that I don't put carts before horses)

Mariano Guerra

05/12/2023, 5:23 PM
Related: https://notes.andymatuschak.org/Effective_system_design_requires_insights_drawn_from_serious_contexts_of_use
"Effective system design requires insights drawn from serious contexts of use": Scrappy prototypes are great: they allow scrappy iteration and quick evaluation. But many critical insights will only emerge in the context of a serious creative problem that's not about the system itself. This is a key claim of Insight through making.

Jason Morris

05/12/2023, 5:54 PM
That's something I'm hoping users of my tool will experience — that formalizing what you know forces you to know it better, which gives you the opportunity to improve what you know. So tools for statutory knowledge representation are inherently tools for statutory drafting. If you encode the law, you learn how to write it better. But this feels slightly different. Like palate cleansing, or something. A perception trick.

Mariano Guerra

05/12/2023, 5:56 PM
Just went back and transcribed a quote from a video to post it here 🙂 http://marianoguerra.org/posts/screw-it-up-all-the-way-until-the-finish/
A great piece is basically balanced right on the edge of failure and success.
It's just balanced right there.
But you don't really know how or where that line is.
So you're very excited about that idea, it's spectacular to you.
And you go and do it even though you don't see where that line is.
You're going into it with a little bit of fear and trepidation to get too close to that line because you don't want to fail and lose it.
But once you do fail it... all that's gone.
Now it's game on.
It's all about just learning, right?
So if it's a piece that you know is going to take four and a half hours and at 3 hours, it's kind of screwed up.
And you just say, okay, let's stop and start over.
Well, you really don't know what happens in hour 3 to 5. You have no idea.
So when you get to hour three again, now you have no idea what's coming.
So my idea is usually if I screw up, screw it up all the way that I can to find out exactly what's hiding, what vocabulary of intuition has not been developed, what part of that language.
So now I've screwed it up, screwed it up, screwed it up all the way until the finish.
We know where things might happen.
So now, when I go back into it, I've got the intuition more developed.
I mean, failure ends up being a good space for discovery, right?
But it's like, if I'm going to fail,
let's keep failing,
let's keep screwing up.
Let's see what's there. Let's go find out.
You know, but if you just stop and put it away and start over, you're kind of missing out on a lot.

Jason Morris

05/12/2023, 6:23 PM
Oh, that's a very interesting insight. I regularly find myself saying "I don't think we are there, yet," because there are unanswered questions that are more basic. But that's not actually the point. The point is whether what we learn from going further is worth the effort. Low-cost failure is to be sought out. That's what happened here. I did a bad version of the next thing, it was cheap and easy, and it taught me a bunch. So the control to exercise is to seek out a context that is "the whole thing." 🤔

Joshua Horowitz

05/12/2023, 9:17 PM
I can’t think of concrete situations, but I relate strongly to the experience you’re describing here! The superficial trappings of something have a profound effect on how your mind frames it, in a way deep-structure-obsessed CS people often fail to recognize. I think this leads to a lesson for prototyping: bring your prototype into diverse settings! Take it to the beach. Take it to the art museum. Take it to your parents’ house.

Christian Gill

05/12/2023, 9:49 PM
It could be loosely related to the broken windows theory. I hadn't thought about it like that, but from this thread I realize that plenty of times when prototyping I (we all, probably) tend to just wing it and do things halfway, "since it's just a prototype". As soon as I add more structure to it, a "production ready" setup, I start to pay more attention to other details.

guitarvydas

05/14/2023, 12:06 AM
This idea used to be called RAD - Rapid Application Development. Today, it is espoused in the Religion of Agile development. It used to be a thrust in some pre-CL Lisps. The idea is that you HAVE to screw around with ideas to try to figure out what the requirements are. Users will NEVER specify enough details, so it is incumbent upon you - the Architect and the Engineer - to figure out what the requirements are. Screwing around with ideas needs to be supported by languages that make throwing prototypes away easy. Building elaborate type systems is counter-productive when screwing around with ideas. Sunk Cost Fallacy: if you’ve spent lots of time working out the details of a type system, you are less likely to want to throw it away. Instead you will want to tweak the elaborate type system, even when that doesn’t make sense. Just about all of our popular languages push you into the Sunk Cost Fallacy: predicting the future, over-confidence in the correct-itude of a design, no room for screwing around and throwing code away. After several iterative rounds of screwing around, you might Production-Engineer an idea and turn it into an actual product. That is the point where current popular languages (Haskell, Python, Rust, etc.) become useful.

Konrad Hinsen

05/14/2023, 8:03 AM
The idea that visual cues set the context for perception and thinking has been around for a long time in various settings, but it seems that it doesn't have a name. Outside of tech, there's "broken windows", but also "dress for the job you want, not the one you have". Mental visualization as a technique for achieving goals plays a big role in Neuro-Linguistic Programming. And some ideas of Feng Shui go in the same direction, though I am careful here because all I know about Feng Shui comes from a single conversation with an expert practitioner.

guitarvydas

05/14/2023, 11:23 AM
Yes, riffing on Konrad’s note, I would add phrases such as “language affects thought”, “a picture is worth a thousand words”, etc. I conjecture that purveyors of statically-typed, textual languages “see” structure “in their heads”, but feel forced to pound the structure down for use with pointy sticks and clay tablets. In Physics, I learned that understanding a seemingly-complicated problem required breaking the problem down (“divide and conquer”), creating a unique notation for each aspect, and applying “simplifying assumptions” to that notation to make the aspect-under-scrutiny seem less complicated - while, of course, remaining cognizant of the simplifying assumptions and of not using the notation beyond its sweet spot. I think that software development is like that, too. You drill down on a particular aspect of a problem, ignoring the rest. When you come up for air, you see other aspects that need to be solved. All popular programming languages are the same and force you to think inside the same kind of box. Simpler solutions to a problem might not be apparent, nor seem very simple, from inside the single box. Maybe what is being experienced is the freedom to think - to solve a problem instead of force-fitting the problem for expression within the confines of a single given notation or a single style of thought.

Konrad Hinsen

05/14/2023, 12:29 PM
I suspect that programming languages and systems still suffer from early computing technology. In particular batch mode, with long feedback cycles. Think before you code. Run code in your head before punching it into expensive cards. Those days are gone!

guitarvydas

05/14/2023, 10:03 PM
“...programming languages and systems still suffer from early computing technology...” I would add that programming languages and systems suffer from the early computing mindset. CPUs and memory used to be VERY expensive. It was unimaginable to use a computer to run just one app. Computing time was meted out in $s and accounted for. The same CPU was shared across business departments and across university courses. Budgets for computing time were allocated. As a student, I was told how many $’s of CPU time I could use for doing my assignments. Backtracking, e.g. Earley’s algorithm for parsing, was denounced as impractical. Computing within these restrictions required mutation and memory conservation, i.e. garbage collectors and reuse of variables. All programming languages were based on the idea that only a single thread was available. A global, shared state (the callstack) made sense. Some people thought that having 640K of memory was an extravagance. Today, though, the ground rules have changed. We can have bowls full of CPUs, and memory is ridiculously cheap. Everyone carries around more computing power in their pockets than was needed to land humans on the moon. But we continue to think in terms of ground rules that accommodate the 1950s mindset, instead of today’s. Supposedly-new programming languages are but variations on themes from the 1950s.

Konrad Hinsen

05/15/2023, 7:03 AM
When parallel computing arrived in scientific computing in the 1990s, there were two main reactions: (1) we need new programming paradigms to exploit these machines, and (2) we'll develop automatic parallelization techniques so we can keep running serial code efficiently. Practitioners went for (1), because that's all they had actual (if primitive) tools for. Theoreticians went for (2), but have yet to deliver. Meaning that practitioners are stuck with primitive tools that have hardly made progress since the 1990s.

guitarvydas

05/15/2023, 10:03 PM
Operating systems and thread libraries are artefacts of 1950s thinking. They are based on the meme that “everything must run on a single CPU”. Developers need preemption while debugging code. End-users don’t need preemption, but are forced to pay for it anyway. In fact, McCarthy showed how to write preemptionless threads in 1956 - anonymous functions (later rigidified into closures). But this idea was ignored due to extreme allergies to Lisp and its supposed “interpretation”. Instead, people built big, honking closures in assembler and C, using the sledge-hammer of preemption to control ad-hoc blocking caused by function calling. Preemption encourages developers to ship buggy code, a practice that is not tolerated (by Law) in any other kind of industry. Shipping buggy code has been further embellished with epicycles such as CI/CD.
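As a rough sketch of the preemptionless idea (an illustration only, in Python, with generators standing in for the closure-like suspendable units - not McCarthy's formulation):

```python
# Cooperative ("preemptionless") threading sketch: each task is a generator
# that suspends itself at explicit yield points; a round-robin scheduler
# resumes them in turn. Tasks only switch where they choose to - no preemption.
from collections import deque

def worker(name, steps):
    for i in range(steps):
        print(f"{name}: step {i}")
        yield  # voluntarily hand control back to the scheduler

def run(tasks):
    queue = deque(tasks)
    while queue:
        task = queue.popleft()
        try:
            next(task)          # resume the task until its next yield
            queue.append(task)  # still alive: put it back in the rotation
        except StopIteration:
            pass                # finished: drop it

run([worker("A", 3), worker("B", 2)])
# Deterministic interleaving: A step 0, B step 0, A step 1, B step 1, A step 2
```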
Today, hardware efficiency matters a whole lot less than it did in the 1950s, except to people indoctrinated to believe that there is only one kind of efficiency. There’s hardware efficiency, Design time efficiency, Production Engineering efficiency, Implementation time efficiency, etc., etc. Attempts at automatic parallelization will never succeed because: 1. a specific solution will always be more “efficient” than a generalized solution (N.B. “efficiency” comes in more than one flavour) and 2. efforts at automated parallelization are based on towers of epicycles which are based on 1950s memes. Preemptionless threading cuts out a lot of bloatware and hardware-supported inefficiency (it costs time to preempt a running process). More recently, Tunney built Sector Lisp in a very pure functional style resulting in a full language in less than 512 bytes[sic], then built an even smaller language (BLC - Binary Lambda Calculus). Reducing the number of types helps a lot, too. Lambda calculus means that “everything is a function that takes exactly one input and produces exactly one output”, regardless of how you wish to spin the inner structuring of input data and output data using destructuring.
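To make the "exactly one input, one output" point concrete, a toy currying example (just the shape of the idea, nothing to do with Sector Lisp or BLC specifically):

```python
# A two-argument addition expressed as a chain of one-argument functions.
def add(x):
    return lambda y: x + y   # returns a closure capturing x

add3 = add(3)      # partially applied: a function waiting for its one input
print(add3(4))     # 7
print(add(1)(2))   # 3
```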