Future of Coding • Episode 60
Bret Victor • Magic Ink
Hey, y'all ever hear of this guy? He posts some wild stuff. Feels like it might be relevant to the folks here. Maybe a little fringe. For instance, he thinks that software could be — get this — better! You might be surprised to learn that I also think software could be better. Radical idea, yes, but it feels like it's finally time for us to take the idea seriously.
Next month, we're reading Peter Naur's Programming as Theory Building, with a little bit of Gilbert Ryle's The Concept of Mind as background.
Personal Dynamic Media
12/10/2022, 6:08 AM
Thank you so much for making this episode! I love Bret Victor's presentations and most of his essays, but I've never been able to get into Magic Ink. I'm not sure why, but I've always struggled both to follow that essay and to stay interested while reading it, so it's great to get your distillation of its contents.
Several of the ideas that you discuss, such as showing the distributions of reviews with tick marks under the stars, remind me of Edward Tufte's wonderful work on presenting information graphically.
I love Robot Odyssey, and also Rocky's Boots, a similar game, also from The Learning Company, that dealt with solving puzzles by creating digital logic circuits.
For a good time, you can play both of them in your browser at archive.org.
It also occurs to me that part of the reason modern software, especially web sites, doesn't minimize interaction is that the authors are not actually focused on creating the best experience for the user.
Facebook, YouTube, Google, and Amazon are all worse for users than they were ten years ago because they are trying to maximize advertising profits, which means maximizing "engagement," which means maximizing interaction.
Their incentive isn't to create the best experience possible, but rather to create the worst experience that users will tolerate.
This article from the Onion is focused on Facebook, but I think it sums up the whole problem nicely.
Just finished listening…great!
Some thoughts on the whole “drawing the interface” vs. “building a machine by arranging widgets using a grammar”:
1. Widgets vs. drawn UI
We used to actually be much more in the “draw the interface” space, basically Views that draw what they have to display. Widgets were added later on for standard interactive elements such as text boxes and sliders etc.
And there was a clear distinction, in that for most applications, the dynamic content (document) would be drawn dynamically using a custom view, whereas ancillary/auxiliary information (inspectors etc.) would be fixed layouts of widgets, which might be dynamically enabled or disabled.
But the whole structure of the UI was largely fixed (dynamic content inside the views + widgets in static layouts).
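A minimal sketch of that split, with invented names: the dynamic document content is *drawn* by a custom view that re-renders from the model, while the ancillary inspector is a fixed arrangement of standard widgets whose structure never changes, only their enabled state.

```python
class CustomDocumentView:
    """Draws whatever the document contains; nothing is pre-arranged."""
    def __init__(self, document):
        self.document = document

    def draw(self):
        # Re-renders from the model on every call.
        return [f"shape:{s}" for s in self.document]

class InspectorPanel:
    """A static layout of widgets; only enabled/disabled state changes."""
    def __init__(self):
        self.widgets = {"stroke": True, "fill": True, "shadow": False}

    def set_enabled(self, name, enabled):
        self.widgets[name] = enabled

doc = ["circle", "square"]
view = CustomDocumentView(doc)
panel = InspectorPanel()

doc.append("triangle")             # dynamic content changes freely...
print(view.draw())                 # ...and the view simply redraws it
panel.set_enabled("shadow", True)  # ...while the panel's structure stays fixed
```

The point of the sketch is that the view owns no fixed structure at all, whereas the panel owns only structure.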
Although there was a bit of a trend towards widgets, that trend really took off with the Web and the iPhone.
With the iPhone, the content got more dynamic, partly due to latency hiding with animations, and partly due to the small screen making it necessary to hide unused UI rather than just disable it. At the same time, our tooling got more static: UIViews with dynamic content are discouraged, with preference given to static layers that are moved in, out, and around.
With the DOM, you really don’t have many options except to change the structure if you want dynamic content (Canvas notwithstanding).
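A language-neutral model of that constraint (names invented): with a DOM-like tree, showing new data means mutating the node structure itself, a structural edit, rather than redrawing a surface.

```python
class Node:
    """A toy stand-in for a DOM element: tag, text, and child nodes."""
    def __init__(self, tag, text=""):
        self.tag, self.text, self.children = tag, text, []

    def append(self, child):
        self.children.append(child)
        return child

    def render(self):
        inner = self.text + "".join(c.render() for c in self.children)
        return f"<{self.tag}>{inner}</{self.tag}>"

# Displaying each new review requires inserting a new node --
# the structure of the tree changes along with the data.
reviews = Node("ul")
for stars in (5, 3):
    reviews.append(Node("li", f"{stars} stars"))
print(reviews.render())
```

Contrast this with a canvas, where the same change would be a redraw of pixels and the "structure" would live only in the drawing code.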
So we’ve been moving more and more towards a situation where even purely informational applications display their information via these static, rigid widget sets, at the same time that the information got more dynamic.
2. Drawing first (tooling)
We didn’t just have Flash, we also had Interface Builder, which didn’t quite have a general canvas, but which very much had the workflow of creating the visuals first and then deriving structure from them.
The problem with those tools is that they produce horrible programs. I will go into more detail in a post I am currently writing on “What happened to MVC?”, but the gist is that to get a reasonably structured application, you really, really need to focus on the model. In almost every codebase I’ve seen recently, people started with the screens (Product Mgt. talks to design, design creates the screens and throws them over to engineering), and so they have at best an anemic model. And so all the coordination, and ultimately the application itself, ends up somewhere in or above the view layer (iOS: the ViewControllers, or something else that’s just as horrible). And so you get an intertwined, unstructured, untestable, unmaintainable mess. Always.
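A hedged sketch of the model-first alternative (all names invented): the model owns the state and behavior and notifies observers of changes, so the view stays a thin renderer that can be tested or replaced without touching the model.

```python
class CounterModel:
    """The model carries state and behavior, independent of any view."""
    def __init__(self):
        self.value = 0
        self._observers = []

    def observe(self, callback):
        self._observers.append(callback)

    def increment(self):
        self.value += 1
        for cb in self._observers:
            cb(self.value)

class CounterView:
    """A thin renderer: it only reflects model state, holds no logic."""
    def __init__(self, model):
        self.text = ""
        model.observe(self.update)

    def update(self, value):
        self.text = f"Count: {value}"

model = CounterModel()
view = CounterView(model)
model.increment()
model.increment()
print(view.text)
```

Note that the model can be exercised with no view attached at all, which is exactly what an anemic, screen-first design makes impossible.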
And of course that’s also what happened to MVC, because you can’t do proper MVC that way, and pretty much all the problems people think they have with “MVC”, they actually have because they’re not doing MVC.
What’s really surprising about this is that, as you observed, the development tooling actually has moved the other way, so we no longer really have those draw-first tools. But despite getting rid of the tooling, we stuck with the mess that before was the price of those tools! Worst of both worlds.
I do think we can figure this out, but it’s going to be somewhat tricky.
02/12/2023, 2:01 AM
@Marcel Weiher what do you think of Naked Objects from Richard Pawson? This model harkens back to the original Smalltalk model of objects being able to present their own user interfaces to the user, and messages to objects could be manifested as dynamic menu items - https://en.wikipedia.org/wiki/Naked_objects
02/12/2023, 11:52 AM
Thanks for bringing up Naked Objects!
IMHO, NO is awesome, and definitely a part of or at least a starting point for the “somewhat tricky” way I think we can figure this out.
I think it needs to be amended somewhat so that any automatically generated (through reflection, not code-gen) UI becomes a starting point that can be refined further. How to refine something that’s only implicitly defined is one of the tricky parts, but probably solvable. WebObjects had a way of doing this with the Direct to Web system.
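To make the reflection point concrete, here is a small sketch (domain class and function names invented) of the Naked Objects idea: the UI is derived from the object itself, with public methods becoming menu actions, so no screen code is written by hand.

```python
import inspect

class Customer:
    """An invented domain object; it knows nothing about UI."""
    def __init__(self, name):
        self.name = name

    def place_order(self, item):
        return f"{self.name} ordered {item}"

    def change_name(self, new_name):
        self.name = new_name

def generate_menu(obj):
    """Reflectively list an object's public methods as UI actions,
    each with the parameter names a generated form would prompt for."""
    actions = []
    for name, member in inspect.getmembers(obj, inspect.ismethod):
        if name.startswith("_"):
            continue
        params = list(inspect.signature(member).parameters)
        actions.append((name.replace("_", " ").title(), params))
    return actions

menu = generate_menu(Customer("Ada"))
print(menu)
```

The refinement question from above then becomes: how do you let a designer adjust this generated menu without losing the connection back to the reflected model?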
The other even trickier part is how to also support visual-first workflows. For that to work, the visual editors must be connected to an NO-like core model in such a way that structural changes are passed more or less transparently to the NO core and then reflected back up visually, while the purely visual portions remain in the visual layer. Yeah: tricky.
And I always loved the way IB and AppKit colluded to let the user become an active participant in the web of objects that make up an application, for example with a button wired to send a message to an object or down the responder chain.
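The responder-chain part of that can be sketched in a few lines (names invented): a message travels up a chain of objects until one of them can handle it, so a button need not know which object will ultimately respond.

```python
class Responder:
    """Handles an action itself, or forwards it up the chain."""
    def __init__(self, next_responder=None):
        self.next_responder = next_responder

    def send(self, action):
        handler = getattr(self, action, None)
        if callable(handler):
            return handler()
        if self.next_responder is not None:
            return self.next_responder.send(action)
        raise LookupError(f"unhandled action: {action}")

class Window(Responder):
    def close_window(self):
        return "window closed"

class Button(Responder):
    pass  # no handlers of its own; everything is forwarded up

window = Window()
button = Button(next_responder=window)
print(button.send("close_window"))  # handled by the window, not the button
```

This is the loose coupling that made IB's "wire a button to an action" workflow possible: the wiring names a message, not a concrete receiver.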
02/13/2023, 3:39 PM
Naked Objects... now there's a phrase... I always liked the idea of having a conceptual UI layer one step removed from the rendering of objects on the screen.