# thinking-together
j
Software products are physical products. That's why they're hard to design and engineer! The physicality of software is kept at arm's length in a vector-based design tool. Here, the strength and focus is on surface-level aesthetics and exploration through mocks — many, many mocks. All needed. All useful. But a mock does not a product make. A mock is an incomplete story of software physics.

Which brings us to the other side of the spectrum... the IDEs — the code editors. The product you ship is here, so "I guess someone has to go there". First challenge: to most people it's walls of inexplicable symbols and weird (even hostile?) punctuation. Then, with code, you're essentially play-acting as a computer. You have to "speak computer" fluently to feel at home here. It's a love/hate relationship of running programs in your head. Mostly failing to do so. Then learning to get better at debugging. Eureka moments of finally solving the puzzle that unlocks a bugfix! Endlessly restarting programs to reset state. Today, even after multiple decades of investment in IDEs, coding is still 100 times harder and less fun than it should be! How can we truly move the needle?

A traditional IDE deals with how to fully describe the physics of a software system: writing and editing algorithms, managing data flows, figuring out logic, painting pixels, sending data at the speed of light over the network. But the IDE doesn't actually let you see the program as it manifests to the user in the final medium. It's running somewhere else — in the browser, on your phone.

This is where vector tools have the IDEs at least partially beat. Yes, they're just mocks, but the vector tool sees the mocks alongside you. This changes your relationship with the tool. Thinking and touching — moving, dragging, scaling, rotating, duplicating — seamlessly blend on a canvas, and it just feels good, even fun! IDEs and editing code as text offer none of those things with today's tools, and it just feels like... friction!

So the letter to Santa reads as follows: Dear Santa, give me the best parts of a vector-based design tool, plus the best parts of an IDE, rolled into one — but with none of the downsides. Thanks! Christmas can't get here fast enough.
🔥 1
❤️ 4
🎸 4
👍 1
g
Define "vector tool" (to me it means "vector graphics", but, it's not clear if that corresponds with what you mean).
j
I mean UX design tools where you edit vector graphics to mock up UIs: Figma, Sketch, Adobe XD, etc.
👍 1
g
FWIW: I solved a similar problem in the print industry by using divide and conquer. (1) Choose a textual language+library (something like JavaScript) that produces PDF files for printing; (2) create a graphical UI and a piece of code that "compiles" the UI to the chosen language ("transpilation", "t2t"). I'm currently doing something like that to make a DPL using draw.io compiled to a textual language (currently Odin, but CL, JavaScript, Python in the past). Not exactly rocket science, given that draw.io (et al.) produces XML text representing the diagrams, and given that text parsing technology took a huge leap forward with OhmJS (and PEG).
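For concreteness, here is a minimal sketch in TypeScript of that "compile the diagram to text" step. The element and attribute names (mxCell, vertex, edge, source, target, value) follow draw.io's mxGraph model; the emitted "language" is purely illustrative, not Odin or anything OhmJS-generated, and a real pipeline would use a proper XML parser rather than a regex scan.

```typescript
// Sketch of the "t2t" idea: read draw.io's XML and emit a textual program.
// Attribute names follow draw.io's mxGraph model; the output language is
// illustrative pseudocode only.

interface Node { id: string; label: string }
interface Edge { from: string; to: string }

function parseDrawio(xml: string): { nodes: Node[]; edges: Edge[] } {
  const nodes: Node[] = [];
  const edges: Edge[] = [];
  // Crude regex scan instead of a real XML parser, to keep the sketch small.
  for (const m of xml.matchAll(/<mxCell\b[^>]*>/g)) {
    const tag = m[0];
    const attr = (name: string): string | undefined =>
      (tag.match(new RegExp(`${name}="([^"]*)"`)) ?? [])[1];
    if (attr("vertex") === "1") {
      nodes.push({ id: attr("id") ?? "", label: attr("value") ?? "" });
    } else if (attr("edge") === "1") {
      edges.push({ from: attr("source") ?? "", to: attr("target") ?? "" });
    }
  }
  return { nodes, edges };
}

// "Compile" the diagram: one component per box, one connection per arrow.
function emit(diagram: { nodes: Node[]; edges: Edge[] }): string {
  const byId = new Map(diagram.nodes.map(n => [n.id, n.label]));
  return [
    ...diagram.nodes.map(n => `component ${n.label}`),
    ...diagram.edges.map(e => `connect ${byId.get(e.from)} -> ${byId.get(e.to)}`),
  ].join("\n");
}
```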
i
So, an IDE lets you design the laws of physics for your software, but you do it wearing a blindfold. A vector tool lets you work without a blindfold, but it doesn't let you design the laws of physics. Should we make an IDE that doesn't blindfold, or should we make a vector tool that lets you design the laws of physics? Sure, these might converge to the same thing. But where should you start? I used to be on team "start at the IDE, make it visual". Lately I've been on team "start at the vector tool, make it able to design physics". Now, these two tools come with their own laws of physics already. The IDE's physics have no bearing on the software you make, just the experience you have trying to make it. But the vector tool's physics rub against the visual design you build. For instance, tldraw and After Effects each let you mock your software to a radically different degree. It might be easier to fix the janky physics in a drawing tool and then make it malleable, than to make an IDE that lets you see what you're doing and doesn't rub against your work in a new and bad way.
❤️ 4
🎸 6
g
... FMI - define "laws of physics" for a "vector tool". ...
j
My take on vector tool physics is that it's geometry, which is a subset of the computational physics that IDEs/code operates in.
The catch with going the "start at the vector tool, make it able to design [software] physics" route is that you need to be Turing complete, which implies a programming language (or the capabilities of one) needs to be invented. That's a massive cold-start problem.
Reading Ivan's reply again, I think we've now got two meanings for physics in play 😁
• The physics of authoring in the software, e.g. typing text (code) into an IDE, and dragging to draw rectangles in the vector tool
• The physics of the medium: Turing-complete computation for the IDE (data + algorithms), geometry for vector tools
💯 1
The two are highly linked and play off each other.
k
There are skillsets here that are rare to see in a single person, or maybe even a single community. The right collaboration seems key. Or maybe Ivan could do it. I have opinions on the IDE side but don't even know where to begin on the design mock-ups side.
❤️ 1
j
Agreed. The worlds of drawing tools and coding tools are historically very siloed in almost every aspect. It's a massive challenge to make a single vertically integrated product with these hybrid capabilities.
s
Software products are physical products. But they are also several other kinds of things at the same time. There is something confusing going on, which often leads to discussions about different optimization strategies without realizing that we are talking about very different aspects of software, or different representations of it. For instance, the user interface is pretty much all software is for the user, and there are practices to “do the right thing” to achieve a good user experience. But then as builders we look at how the software is made, which in most cases is source code. And for some of us that is all software is, although here we probably all agree that as developers we need to consider both the source code and the user interface together. But even though these are both “the software”, they follow very different “physics” and need to be taken care of in different ways. I talked about this a little bit in a Beautiful Software seminar a while ago — I put a 10-min clip with the relevant part up, perhaps useful for the discussion; hopefully it makes what I wrote a bit clearer.
👏 1
👀 1
👍 1
g
FWIW ... I belonged to a privileged class that knew a lot about building things with transistors and resistors and capacitors. Then, someone invented an API called "opcodes" which let more people design things with transistors and resistors and capacitors. It seems - to me - that the technical problem of building the dream IDE is characterized by creating 2 sets of APIs and letting the domain experts do their thing on either side of the APIs. Each API would need to have 2 directions - output and input (GUI->ide, ide->GUI, IDE->gui, gui->IDE), kinda like UNIX command pipelines with a better syntax and the possibility for "feedback". API design is the same thing as opcode design. Opcodes tend to KISS, whereas APIs have become too complicated and riddled with nuance.
💯 1
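A rough sketch of what such a pair of bidirectional APIs might look like in TypeScript. Every name here is hypothetical; the point is the shape: each side speaks a small, opcode-like message set, and messages flow both ways, like a UNIX pipeline with a feedback channel.

```typescript
// GUI -> IDE: the canvas tells the IDE what the user did.
type GuiToIde =
  | { kind: "moved"; elementId: string; x: number; y: number }
  | { kind: "resized"; elementId: string; width: number; height: number }
  | { kind: "selected"; elementId: string };

// IDE -> GUI: the IDE tells the canvas what the program now looks like.
type IdeToGui =
  | { kind: "render"; elementId: string; x: number; y: number; width: number; height: number }
  | { kind: "error"; elementId: string; message: string };

// Each side implements one function: consume the other side's messages and
// emit its own. Feedback happens because the two functions are wired together.
interface GuiSide { handle(msg: IdeToGui): GuiToIde[] }
interface IdeSide { handle(msg: GuiToIde): IdeToGui[] }

// A trivial "pipeline" that pumps messages back and forth a few rounds.
function pump(gui: GuiSide, ide: IdeSide, start: GuiToIde, rounds = 3): void {
  let toIde: GuiToIde[] = [start];
  for (let i = 0; i < rounds; i++) {
    const toGui = toIde.flatMap(m => ide.handle(m));
    toIde = toGui.flatMap(m => gui.handle(m));
  }
}
```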
j
@Stefan Good clip. Really like what you're pointing to with code being an empty shell where data comes later, and only then does the actual user experience emerge. The IDE is blind to all that.
@guitarvydas This bi-directional approach is what we're building towards on top of TypeScript 🙂
👍 1
g
@Jim Meyer I would be interested in a reference to that. You've probably provided one before, but, I missed its significance.
j
@guitarvydas The visual IDE we're developing is called Henosia. I tweet about it as @jimmeyer, but that's a mixed bag and audience, so DM me if you'd like to talk about some of the more PL-centric topics 🙂
👍 1
s
Have a look at Typst (https://typst.app). It does what you're asking for. Typst is an easy way to write documents, but also a complete programming language. You can start writing a document without knowing anything about programming, and then gradually add variables, loops, functions, and even sandboxed arbitrary code execution with WebAssembly. Typst has a source code editor corresponding to the IDE in the discussed example, and an instant preview corresponding to the vector graphics editor in the discussed example. There are some things that are a bit lacking in comparison with the example, but they can trivially be implemented.
• Typst assumes that everyone will edit the source code even if they're not programmers, and there is an easy syntax similar to Markdown for non-programmers, but it would be trivial to add WYSIWYG editing features to the preview and have them backpropagate to the source code to perfectly satisfy the example of a WYSIWYG vector graphics editor. Typst already maintains source maps, allowing pointer events in the preview to backpropagate to the source code, so most of the implementation is already there.
• Typst is focused only on authoring papers for print, but it would be trivial to add application state, and add features to attach event handlers to elements that can update the application state.
After learning Typst, the massive challenge of designing the proposed system turns into a series of incremental improvements that are trivial to implement.
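To make the preview-to-source idea concrete, here is a small TypeScript sketch of the mapping it relies on. This is not Typst's actual API; it just illustrates how a compiler-maintained "source map" from rendered boxes back to source spans lets a click in the preview become an edit in the source.

```typescript
// Illustrative sketch (not Typst's API): the compiler records, for each
// rendered element, the span of source text that produced it, so a pointer
// event in the preview can be mapped back to an edit location in the source.

interface SourceSpan { start: number; end: number }            // offsets into the source text
interface RenderedBox { x: number; y: number; w: number; h: number; span: SourceSpan }

// The "source map": one entry per rendered element, produced at compile time.
type SourceMap = RenderedBox[];

// Backpropagate a click in the preview to the source span that produced the
// clicked element (a real system would pick the innermost match; here we
// simply take the last hit).
function clickToSource(map: SourceMap, px: number, py: number): SourceSpan | null {
  let hit: SourceSpan | null = null;
  for (const box of map) {
    const inside =
      px >= box.x && px <= box.x + box.w &&
      py >= box.y && py <= box.y + box.h;
    if (inside) hit = box.span;
  }
  return hit;
}

// A WYSIWYG edit then becomes: find the span, rewrite that slice of the
// source text, and recompile.
function applyEdit(source: string, span: SourceSpan, replacement: string): string {
  return source.slice(0, span.start) + replacement + source.slice(span.end);
}
```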
k
@Jim Meyer I'm working on something a bit like what you're talking about. The biggest problem I see at the moment is that the best programmers (and AI to a large extent these days) have developed a metacognitive layer over plain textual code that more accurately describes the data structures and flows of information inherent in code.

The simplest example is assignment:

let fruit = "banana"

The implicit flow is "banana" -> fruit. "banana" is also a constant, whereas the variable fruit is a store of information that may change over time. So how do we visually represent that fruit is possibly changing and is a store of information, and that the string "banana" is constant, beyond the obvious string marks and let declaration? We could show a flowing string of information that goes from "banana" to the variable fruit. fruit could be highlighted with water as a background, and "banana" could be highlighted with concrete as a background. Maybe this isn't the best visual representation, but you get what I mean.

Obviously it gets way more complicated than simple variable declarations and assignments, but imagine if this kind of visual representation was extended to almost every single aspect of the language. If we could explicitly show this implicit metacognitive layer, visually overlaid over the text itself, it would be a revolution for newbies and seasoned veterans alike. Newbies would understand the logic and flow of the code much, much faster, and veterans' brain power would be freed from visualising this metacognitive structure to focus on even higher-abstraction problems.

The main problems I see are:
1. Identifying all the implicit structures inside our programming languages
2. Creating an intuitive visual representation of each that is easily understandable
👍 3
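A toy sketch in TypeScript of the first problem listed above: extracting the implicit flow edge hidden in a declaration. A real tool would walk a proper AST (for example via the TypeScript compiler API) rather than use a regex, and the FlowEdge shape here is hypothetical; it is just enough to show the idea of surfacing "value -> variable" plus mutability.

```typescript
// Scan simple let/const declarations and surface the hidden flow edge
// (value -> variable) plus whether the target is mutable.

interface FlowEdge {
  source: string;    // the value flowing in, e.g. '"banana"'
  target: string;    // the variable receiving it, e.g. 'fruit'
  mutable: boolean;  // let = may change over time, const = fixed
}

function extractFlows(code: string): FlowEdge[] {
  const edges: FlowEdge[] = [];
  const decl = /\b(let|const)\s+([A-Za-z_$][\w$]*)\s*=\s*([^;\n]+)/g;
  for (const m of code.matchAll(decl)) {
    edges.push({
      source: m[3].trim(),
      target: m[2],
      mutable: m[1] === "let",
    });
  }
  return edges;
}

// Example: the declaration from the message above.
console.log(extractFlows('let fruit = "banana"'));
// -> [ { source: '"banana"', target: 'fruit', mutable: true } ]
```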