# thinking-together
r
What do people think of tools that try to solve "the designer to developer handoff problem"? I.e., today designers usually create a static mock-up and hand it off to a developer to implement, but why can't the designer's design tool just output something usable that doesn't need a developer to "translate" it?

Here are some recent tools that try to solve this problem:
- Webflow https://webflow.com
- Framer X https://www.framer.com/

Here are some older tools that try to solve this problem:
- Interface Builder https://en.wikipedia.org/wiki/Interface_Builder
- Dreamweaver https://adobe.com/products/dreamweaver
- Quartz Composer https://en.wikipedia.org/wiki/Quartz_Composer
- Origami Studio https://origami.design/

Are there other important ones I'm missing? Interface Builder dates from 1988, the same year as Adobe Photoshop, so we've had a long time to work on this problem, but I wouldn't say any of these tools have been very successful. What makes this problem so difficult? Are there tools that tried to do this before Interface Builder and Dreamweaver? How did they fare? What's the most successful piece of software to ever try to do this? I'd say it's a toss-up between Dreamweaver and Interface Builder; is there anything more successful I'm missing?
👍🏻 2
w
This problem could be viewed as an instance of the broader tension between programming and direct manipulation. You want to use a tool like Dreamweaver, or Photoshop, or Blender, or Premiere Pro, or … in the UI provided, but you also want to capture some abstract constraints and formulas between objects in your scene. “These two divs should be centered and spaced 25% apart, and have the same width.” “The nav should have the same color as the footer.” However, direct manipulation tools rarely give you a way to express these constraints or formulas, since that would be complicated, and would require visualizing a program to users in an intuitive way. It suffers from the same problems that plague visual programming languages more generally (paging the expert @Ivan Reese). Work in the PL community has started to approach this problem of “prodirect manipulation”, specifically Ravi Chugh and co: https://ravichugh.github.io/sketch-n-sketch/
👍 4
Ravi’s work takes the extreme approach of showing the user the raw textual program side by side with the direct manipulation UI. But you could imagine a world where the program visualization would be more intuitive to non-programmers.
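To make the kind of constraint described above concrete ("spaced 25% apart", "same width", "same color"), here is a minimal sketch in TypeScript of what a direct manipulation tool would have to record beyond raw pixel values. All names and types here are hypothetical and purely illustrative; none of the tools mentioned in this thread work exactly this way.

```ts
// Hypothetical data model: a box on the canvas.
interface Box {
  x: number;      // left edge, in px
  width: number;  // in px
  color: string;
}

// Re-derive the dependent box from the source box whenever the source
// changes, instead of storing both boxes as independent literal values.
function applyConstraints(nav: Box, canvasWidth: number): Box {
  return {
    width: nav.width,                           // "same width"
    x: nav.x + nav.width + canvasWidth * 0.25,  // "spaced 25% apart"
    color: nav.color,                           // "same color as the nav"
  };
}

// Dragging or recoloring `nav` in the canvas just re-runs the constraint.
const nav: Box = { x: 40, width: 300, color: "#222" };
const footer = applyConstraints(nav, 1200);
console.log(footer); // { width: 300, x: 640, color: "#222" }
```

The point is that the dependent values are re-derived from the relationship each time the source object is edited, rather than being baked in as independent literals; that re-derivation step is exactly the part a purely visual canvas struggles to expose.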
v
- IBM Maqetta: http://maqetta.org/
- React Studio: https://reactstudio.com/
g
- sketch systems https://sketch.systems
r
@Will Great point about the difficulty of representing relationships in direct manipulation tools. Another problem is that once you’ve defined one of these relationships, what happens if you then want to move an object in a way that conflicts with it? (E.g., can you drag the two divs defined as 25% apart closer together?)
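Purely to make that conflict concrete, here is one way a tool could resolve it, sketched in TypeScript: reinterpret the drag as an edit to the constraint's parameter rather than as a move that silently breaks the rule. The names and the choice of resolution strategy are assumptions for illustration, not how any tool in this thread actually behaves.

```ts
// Hypothetical constraint: the two divs are kept a fixed fraction of the
// canvas width apart.
interface GapConstraint {
  fraction: number; // gap as a fraction of canvas width, e.g. 0.25
}

// The user drags the second div horizontally by `dx` pixels. Instead of
// letting the drag violate the constraint, fold the drag back into the
// constraint itself.
function dragSecondDiv(gap: GapConstraint, dx: number, canvasWidth: number): GapConstraint {
  const fraction = Math.max(0, gap.fraction + dx / canvasWidth);
  return { fraction }; // clamped so the divs never overlap
}

// Dragging 150px left on a 1200px canvas turns "25% apart" into "12.5% apart".
console.log(dragSecondDiv({ fraction: 0.25 }, -150, 1200)); // { fraction: 0.125 }
```

Other resolutions (rejecting the drag, or breaking the relationship) are equally plausible; whichever one a tool picks, it has to surface the relationship to the user at drag time.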
e
This is a very interesting topic, very close to my heart. My own Beads project is about creating an executable specification language. The basic concept is a language so concise and precise that it can serve as the replacement for a purely descriptive document, and since it handles autosizing well, it forms the specification of how the layouts change with different orientations and sizes. Apple's SwiftUI does similar things; however, Beads is platform independent (desktop, mobile, web) and not part of Apple's walled garden.

However, Beads doesn't have fancy graphical helper modules yet, and so far in my interviews with designers, the majority of them don't want to consider all the fine details that programmers must consider. So I think that for a very long time, no matter how nice the tools get for developers, the handoff problem will continue.

I do believe that in the future, using "executable spec" types of tooling, the designer and programmer will work more closely together instead of sequentially, as the final product can be built in one step instead of two; the designer can do the layouts, prep the art, and the programmer can inject that into the business and screen logic, and produce a product in record time.
👍🏻 3
i
Re Edward's comment:
> the designer and programmer will work more closely together instead of sequentially
This is also the direction I can imagine things going. When reflecting on the tools that exist now, which took a job that previously required programming skill and turned it into a job that didn't... what often happened is that these tools created/demanded an all new skill, something that bridged a gap between programming and the domain.

For instance, computer animation is a relatively new field with specialized skills like: character modeler, rigger, texture artist, pose-to-pose animator, particle/effects artist. All of these jobs used to be done by traditional programmers working closely with traditional artists, and then the programmers created specialized tools, and the artists forged these new skills so that they could apply their artistic ability directly to that domain using these new tools. The same thing happened with indie games (Unity, Unreal, GameMaker), and electronic music (Max/MSP, Ableton Live, even modular synthesis), and not least of all — spreadsheets!

We often talk about making tools that allow people to use a computer "without learning to code". But the existing successful cases of that, to me, appear to be the cases where programmers made tools that required non-programmers to learn a freshly-invented skill — a skill that ends up being less foreign than programming, a skill designed to be similar to their existing skillset, but an all new skill nonetheless. So (back to Edward's point) I do see a bright future for bringing the designer and the programmer closer together — by making tools that allow the designer, after learning a few new complementary skills, to do things themselves that they previously relied on the programmer to do.
🧠 1
☝️ 1
👍🏻 3
j
I don’t know if this is entirely an answer, but it seems like the direction front-end development is going is to have a UI design team whose product is a component library. http://bradfrost.com/blog/post/frontend-design-react-and-a-bridge-over-the-great-divide/
👍 2
r
@Justin Blank Thanks for sharing, this seems like a different approach to the problem: Instead of doing a handoff (then translation), actually have design teams employ their own developers.
@Ivan Reese That's a wonderful summary of how most creative software gets made:
> We often talk about making tools that allow people to use a computer "without learning to code". But the existing successful cases of that, to me, appear to be the cases where programmers made tools that required non-programmers to learn a freshly-invented skill — a skill that ends up being less foreign than programming, a skill designed to be similar to their existing skillset, but an all new skill nonetheless.
What's so interesting about the "designer to developer handoff" problem is that for some reason it has been extraordinarily resistant to creating a new specialized tool that can output something usable. E.g., I believe all the examples you listed ("character modeler, rigger, texture artist, pose-to-pose animator, particle/effects artist") output something that doesn't then need to be translated by a developer? I can't figure out what's unique about software interfaces that makes them different from these other use cases. (Cheeky blog post on this topic from Webflow's CEO: https://medium.com/@callmevlad/a-cheeky-guide-to-creative-tools-e5e3388c4614)
👍 1
d
I think, counterintuitively, the largest step to make towards bridging this barrier is to:
1. Fix performance bottlenecks in front-end development
2. Rethink browser UI from the ground up
3. Build a unified query language across the entire stack
Improve those issues and you have room to build an ecosystem where we need less specialization for web development.
👍 1
i
@robenkleene
> I believe all the examples you listed output something that doesn't then need to be translated by a developer?
Yes, that's the whole reason those tools were invented. Prior to those tools existing, and those skills/roles being created, programmers and artists needed to collaborate intimately on every piece of the art. As these tools/skills were created (one at a time, and not very good at first), that need for total attention from both sides was lessened gradually. There was a time, when memory and other performance constraints were very strict, that video game artists would frequently create works that were too rich and detailed to run efficiently, and programmers would need to either collaborate with them to find art production processes that produced a more efficient result, or (sometimes) edit the output from those tools themselves. Eventually these perf-friendly practices became common, and the hardware improved, and the amount of constant, intimate interaction between programmers and artists tapered off.

I think web designers and developers are going through a similar progression. It used to be common that a "web designer" would produce a PSD with slices — a format that was basically opaque to the technology of the web. Now, web designers are fluent in SVG and CSS. With tools like Greensock and Bodymovin, web designers are able to produce animations that can run in the browser without much, or any, direct help from a programmer.

Right now, we're going through a phase where we're collectively putting a lot of stock in the idea of reusable components, and we're going through the fits and starts of making tools that designers can use to build components. We had the template language phase, and the Polymer/X-tags phase, and the React-style components phase — each one less and less programmer-centric and more amenable to designers. We're getting ever closer to something that it'd be worth building a GUI around, which (I think) is the tipping point. If there's a successor to React Components in this lineage, I would not be at all surprised to see it become the standard that tools like Webflow embrace, at which point we'll be in "3D tools in the mid-late 1990s" territory — good enough that artists no longer need to understand what's happening under the hood, but not quite to the point that programmers are free from having to worry about what the artists end up making. But we'll get there.
r
@Ivan Reese Thanks for the great response. Any thoughts on why 3D tools were able to solve this problem in the 1990s but user interfaces have struggled for so long? E.g., Interface Builder being from 1988 means 30 years of working on this problem. I know you're hinting at it, that a lot of groundwork needed to be laid first. But that still leaves the question: Why do interfaces in particular require so much groundwork? E.g., we didn't need as much groundwork for 3D modeling, bitmap/vector editors, DAWs, NLEs, or spreadsheets. What makes interfaces uniquely difficult?
PostScript (1982) to Illustrator (1987) was five years, for example.
i
If I had to guess, it's that it would have been impossible to have the modern video game or film industry without these tools. Those industries are huge, and the role these tools play is also huge. 3D graphics is fiendishly difficult, especially the realtime stuff. The tools were an absolute necessity. Demonstrably, we've been able to have the modern software industry without similar tools. GUIs are tricky, but not that tricky by comparison. We've been able to make do with not-great tools, tolerating a bit of bugginess here and there, a bit of lost productivity spent doing things the slow way. A programmer can make a bad-but-functional GUI on their own. A web designer can learn a bit of jQuery or Flash or CSS or what have you and cobble something together that mostly works. Typical programmers can't make 3D art that's more sophisticated than a stick figure (without tools that do most of the work), and typical artists have exactly as much affinity for linear algebra (without tools that do most of the work).
👍 1
In other words — if GUIs were harder, we'd have invested earlier in better tooling. But they aren't, so we've scraped by. See also: text-based versus visual programming; all our tools based on terminals and unix and files rather than ASTs; the state of video editing tools (which haven't needed to advance much, so they haven't) versus compositing tools (which have, and have).
👍 1
If web designers had to learn the equivalent of linear algebra, and the formulae for shading models, and how stencil buffers work, and the limitations of spherical harmonics, and (100 other things just to get a nicely lit 3d model to show up on the screen) in order to make a layout, there'd be better tools.
👍 1
r
Nice, great argument. I'd push back on one thing: designers do actually have great tools today. Photoshop, Sketch, Illustrator, etc. are all wonderfully powerful tools in their own ways. Your point still stands, but I'd tweak it a bit: what just isn't that hard is for developers to implement a designer's user interface. That's really not that difficult, so why not have the developer just do it? One area of user interface design that I can say from experience is an order of magnitude more difficult than implementing a static UI design: implementing custom animations created by designers. So if this hypothesis is true, tools that can output usable animations that don't require developer translation should improve quickly.
🍰 1
(I guess that also points to designers always just throwing their static UIs over the wall to developers to implement, because I'm not sure implementing the static UI part is enough of a pain point to change anything?)
👍 1
i
In my experience, the things it takes for a good modern UI animation toolkit (whether it's declarative like CSS transitions/animations, or imperative like Greensock or Velocity.js) include:
• Spring easing (things animate with apparent mass)
• Interruption (the user does something that triggers an animation, then cancels it — the animated elements need to gracefully return to the correct position, not just snap back)
• Physics/dynamics-based interpolation (things have acceleration and velocity)
• Sequencing/staggering (things happen one after another)
(This is in addition to the slightly longer list of things that are essential, like the ability to set durations. A rough sketch of the first two items follows below.)
The WebKit team tried (and so far failed) to get spring easing added to CSS, but progress on a more general solution has been slow (https://github.com/w3c/csswg-drafts/issues/229) — and I haven't seen any mention of interruption or dynamics. I believe that once we have those things in the platform, we'll be able to make simpler versions of tools like Bodymovin or more powerful versions of tools like TheatreJS (https://www.theatrejs.com), and at that point it'll be barely more difficult to add GUI animation than it is to add drop shadows with CSS.
👍 1
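As a rough illustration of the first two items on that list (spring easing and interruption), here is a minimal TypeScript sketch, not based on any particular library: animate by integrating a damped spring every frame instead of tweening along a fixed curve, so that when the target changes mid-flight the element carries its current position and velocity instead of snapping back. The constants and names are illustrative assumptions.

```ts
// A value animated as a damped spring. Because the spring stores position
// and velocity, changing `target` mid-animation is a graceful interruption:
// the value turns around smoothly instead of snapping.
class Spring {
  value: number;
  velocity = 0;
  target: number;

  constructor(initial: number, private stiffness = 170, private damping = 26) {
    this.value = initial;
    this.target = initial;
  }

  // Advance the simulation by dt seconds (called once per animation frame).
  step(dt: number): number {
    const springForce = -this.stiffness * (this.value - this.target);
    const dampingForce = -this.damping * this.velocity;
    this.velocity += (springForce + dampingForce) * dt;
    this.value += this.velocity * dt;
    return this.value;
  }
}

const x = new Spring(0);
x.target = 300;                            // e.g. the user opens a panel
setTimeout(() => { x.target = 0; }, 200);  // user cancels 200ms in: no snap, the spring just turns around

// Drive the spring from the browser's frame loop.
function frame(last: number) {
  requestAnimationFrame((now) => {
    const dt = Math.min((now - last) / 1000, 1 / 30); // clamp dt for stability
    const px = x.step(dt);
    // e.g. element.style.transform = `translateX(${px}px)`;
    frame(now);
  });
}
frame(performance.now());
```

Sequencing/staggering and fuller dynamics need more than this, but the core trick (keeping velocity as state so any animation can be retargeted at any time) is the part that's awkward to express with today's fire-and-forget CSS transitions.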
j
@robenkleene I don’t think it’s entirely different, because it’s bringing development into the UI team, but it removes a lot of the extraneous complexity of previous forms of full-stack development. So the components end up very modular, which is what lets designers tackle it. I think if you remove enough extraneous complexity, the distinction between code and domain knowledge becomes very thin. There’s a quip out there, whose origin I forget, about “the pseudocode formerly known as traditional mathematics”.
👍 1