# thinking-together
i
Let's start a thread for this discussion, rather than creating a lot of scrollback for the whole channel. Here's @Adriaan Leijnse's prompt:
IIRC Xerox PARC & co used the most powerful computers available at the time to "invent the future": the graphical environments and programming languages regular people use now.
What would be the equivalent today?
My (naive) impression is that programming language and HCI labs don't seem to make use of 64 core processors with 100 GB of RAM and GPU computing to similarly invent the future.
🙏 2
👍 2
The way I think of this is... what hardware can be purchased or built (or even conceptualized) that stands a good chance of inevitably going mainstream in the future. That's what the Alto and Dynabook and Memex were in the past. Here are some things that fit those criteria today:
• VR goggles
• AR goggles
• Augmented hearing devices
• Dynamicland (h/t @Doug Moen)
• worn tactile feedback devices
• anti-grav / hover
• foldable screens
• micrometer-accurate device positioning / orientation awareness
• extreme battery efficiency, low heat production
• hyper-sensitive microphones with fantastic signal/noise and extreme frequency response
• quantum computers
• gigapixel resolution still and video cameras
• centimeter-scale GPS
• hemi and full spherical cameras
• stereoscopic cameras
• metamaterials
• $1/kg to LEO
• vacation homes at L1-L5
• the fucking holodeck
You get the idea. My point is — I see no shortage of opportunities to apply the model "spend enough money and imagination today, and you can work on the technology that'll be ubiquitous in 10-30 years"
❤️ 4
o
Sorry to ask, but what do "LEO" and "L1-L5" mean? Not sure it is that important, but at least I will learn something new! 😉
d
@ogadaki LEO is Low Earth Orbit and L1-L5 are Lagrange points: gravitationally stable locations in outer space within the Earth-Moon system. These are not part of my FoC vision yet; clearly I'm not thinking big enough.
🌜 1
😂 1
🍰 1
m
I can open another thread if it makes sense, but I want to introduce a different direction on things to explore the FoC without spending too much. Let's say today I buy the highest-spec gamer PC I can: throwing pseudo-random numbers here, a 64 core i9, 64GB of RAM, 2TB SSD, and the highest-spec graphics card. Now compare those specs with the Xerox Alto:
• Bit-mapped black and white display sized 606x808 (the same dimensions as a regular (8.5"x11") sheet of paper, aligned vertically)
• 5.8 MHz CPU
• 128KB of memory (at the cost of $4000)
• 2.5MB removable cartridge hard drive
If someone wants to work out the orders-of-magnitude improvements that would be nice, but it's pretty clear that Pharo today isn't better by the same orders of magnitude that the hardware is.
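For anyone who wants to actually run the numbers, here's a rough back-of-envelope sketch. The modern figures are as made up as the specs I threw out above (64 cores at ~3.5 GHz and so on), not measurements:
```python
import math

# Back-of-envelope comparison of an assumed modern gamer PC against the
# Xerox Alto specs quoted above. The modern numbers are made up
# (64 cores x ~3.5 GHz, 64 GB RAM, 2 TB SSD); only the rough orders of
# magnitude matter here.
alto = {
    "cpu_hz":     5.8e6,        # 5.8 MHz, single "core"
    "ram_bytes":  128 * 1024,   # 128 KB
    "disk_bytes": 2.5e6,        # 2.5 MB removable cartridge
}
modern = {
    "cpu_hz":     64 * 3.5e9,   # ignores IPC, SIMD, caches, the GPU...
    "ram_bytes":  64 * 1024**3,
    "disk_bytes": 2 * 1024**4,
}

for key in alto:
    ratio = modern[key] / alto[key]
    print(f"{key:11s} ~10^{math.log10(ratio):.1f}  ({ratio:,.0f}x)")
```
Even with those generous assumptions the hardware comes out roughly five to six orders of magnitude ahead, which makes the comparison with the software's progress easy to frame.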
👍 1
We have advanced, but for each step we added much more complexity and non-performant abstractions. I really believe that buying the future today and programming with tomorrow's tools is a really good idea, but another thing to do is to create abstractions and tools that reduce the complexity and performance issues of today's software and take advantage of the hardware we already have (which is incredible).
Another direction is taking advantage of the possibilities of devices working together. Right now on the table I'm at there are two laptops, 3 smartphones and a tablet. There's nothing interesting I can easily do with the 6 supercomputers I have idling at arm's length.
☝️ 2
💡 1
my tl;dr: we have supercomputers and seamless connectivity already, it's not expensive, and we are not taking advantage of them. We can go buy expensive future technology, but we should also try to take more advantage of the awesome technology we have today (which is cheaper/free and more accessible to more people)
👍 2
what if, as Sun Microsystems said, "The network is the computer"? 🙂
Let's put it as an analogy: instead of seeing what we can do with faster cars, we can also try to solve the traffic jam 😛
also with bikes or public transport, exploring/improving alternatives should be as productive as going further in the main dimension
s
@Mariano Guerra “There's nothing interesting I can easily do with the 6 supercomputers I have idling at arm's length.” Exactly. If the argument is that faster computers enable us to make better tools, why haven’t significantly better tools appeared over the last several orders of magnitude in computer performance improvements?
👍 1
👍🏽 1
d
I'm building the FoC on nRF51 chips alongside my pocket supercomputer! I believe we have to cut out all the crap and start again.
And after building with 1802 chips as a kid, I feel guiltily lazy having the luxury of a 32-bit Cortex M0 as my baseline, but you gotta give yourself a bit of a break!
a
I hadn't considered things like VR at all, much less any of your other suggestions @Ivan. Thanks for the inspiring list. It's also my feeling that we just don't have the programming models to exploit what we currently have though.
👍 1
t
Lynx will run on the cloud, which means it'll make cloud-scale general-purpose computing accessible to an everyday person with zero configuration
I am excited to see what people will invent with it
m
@tbabb is there something online to check out Lynx?
t
I wonder, for example, what new things could be done with "a huge amount of computational resources spent for an affordably short period of time". You want (comparatively, relative to personal computing) a galactically huge calculation done, but you need it ready in under a second.
@Mariano Guerra Lynxtool.com has a pitch and mock-up. A non-public demo will be ready in a few days if nothing goes wrong
👍 1
m
looks great, love some of the solutions you use for abstracting, choices and iteration
👍 1
a
That looks very cool, @tbabb! Have you considered modeling time like in FRP?
s
For me, better software abstractions are far more valuable than 10x or 100x in hw performance. For example, most of the time I’ve spent on my project has been reimplementing the abstractions we were used to for desktop app development in JavaScript for the browser, and none of this work would have been helped by significantly faster hardware.
👍 2
🙃 1
o
And there is also the problem that we now have "futurist hardware" but we don't fully use it for programming. I mean, I learned programming on terminals with monochrome text display and a keyboard, using emacs to edit streams of characters and compile them with the command line, then execute the results, and eventually debug them with a text debugger. Nearly 30 years later I do nearly the same now, without fully exploiting the "futurist hardware" we have (i.e. graphical display and the mouse) for programming. Our programming tasks still always come down to editing text streams. So I guess there is work to do on this, and better futuristic hardware or maximum performance won't help. That said, I find new dynamic media à la Dynamicland very exciting as new ways to create/program stuff.
d
My only hope for better hardware in the future (the future forwards from the 80s when I was doing the hoping) was that people would be freed from programming models that cared about keeping the machine happy! But still we program in imperative languages and have used up all that power in a tottering Babelesque tower of shite.
m
I’d add storage and bandwidth to the list of resources to target at a high level. If everyone had 1PB of storage at home, we should all just have e.g. a local copy of Wikipedia. Maybe a web index too. Then search queries could be near instant.
👍 1
i
@Doug Moen
LEO is Low Earth Orbit and L1-L5 are Lagrange points: gravitationally stable locations in outer space within the Earth-Moon system. These are not part of my FoC vision yet; clearly I'm not thinking big enough.
IPFS has this as part of their FoC vision. It's a really interesting way to frame problems in distributed system design — latency on the scale of minutes. How do you design a "realtime" collaborative editing process with that sort of time lag? How about a multiplayer game? Going to space in the good ship imagination might help you recognize terrestrial constraints that are hard to perceive otherwise.
👍 1
@Mariano Guerra
64 core i9, 64GB of RAM, 2TB SSD, highest spec graphics card.
vs
[...] Bit-mapped black and white display sized 606x808, 5.8 MHz CPU, 128KB of memory, 2.5MB hard drive
[...] it's pretty clear that Pharo today isn't better by the same orders of magnitude that the hardware is.
This sort of look at the orders of magnitude glosses over something that, to me, seems essential — the resolution and quality of the display is a cost, not a benefit. It's computationally expensive to drive a 5K display with 24-bit color at 60 Hz, both in and of itself but also because of the work needed to produce graphics that look good on such a display. That work eats up a not-insignificant chunk of your orders of magnitude gain in CPU, I/O, RAM, and GPU. Then consider what it takes to load and store assets (like videos or textures) at that resolution and quality, and there goes a not-insignificant chunk of your HD and network. It's not all lost to bad abstractions and complexity creep.
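To put a rough number on just the framebuffer traffic, here's a quick sketch. It ignores compositing, GPU shading, and asset decoding (so it understates the real cost), and it assumes 60 Hz refresh for both displays:
```python
# Raw framebuffer bandwidth only — ignores compositing, shading, and
# asset decoding, so this understates the true cost of a modern display.
# Assumes 60 Hz refresh for both the 5K panel and the Alto.
def framebuffer_bytes_per_sec(width, height, bits_per_pixel, hz):
    return width * height * bits_per_pixel / 8 * hz

modern = framebuffer_bytes_per_sec(5120, 2880, 24, 60)  # 5K, 24-bit color
alto   = framebuffer_bytes_per_sec(606, 808, 1, 60)     # Alto: 606x808, 1-bit

print(f"5K @ 60 Hz: {modern / 1e9:.2f} GB/s")
print(f"Alto:       {alto / 1e6:.2f} MB/s")
print(f"ratio:      ~{modern / alto:,.0f}x")
```
That's a few hundred times more bytes per second just to keep the pixels flowing, before any of the "real work" happens.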
Those hardware gains were motivated by the needs of creators and consumers of digital artistic media. I think we programmers look at those gains, with our strong sense of mechanical empathy, and we understand the sheer power, and we feel envy, for our work doesn't really know how to make use of that power. One (such as I) would just love to ask their company to buy a new Mac Pro for programming, but we all know that we don't need it any more than our accountants or salespeople need it. Sigh.
Yes, there are too many layers of abstraction and we're leaving about 1000x performance on the table. But the latency of everything I do as a programmer is only between (spitballing) 2x and 20x the perceptual limit. Nobody has made the case that solving this latency would allow the essence of programming to be changed for the better. @Steve Dekorte nailed it in his first comment in this thread.
This is why I'm fond of visual languages. It's easy to imagine how they could leverage modern hardware. The one I'm building is 3d, and realtime depth-of-field plays a key role in signifying layers of abstraction. I've never seen this attempted before, and I think it's interesting, and it's only practical given mainstream 2010s-era technology.
This is also why I list things like VR, AR, micro-precision sensors — these things will gladly gobble up every watt of computing power you throw at them, so it's entirely possible that at any given moment we're only just gaining the raw computing power needed to realize a FoC concept that leverages them.
Let's take this further — the same feeling I get when I think the words "Aliens probably exist, _somewhere out there_" or "We understand less than 1% of physics" is the feeling I get when I think "there are ways of programming that are vastly better, but that won't be possible until CPU/GPU/IO/RAM are 1000x what we have today".
It's not the increase in HW that's interesting — in fact, I think it's worth ignoring that aspect as anything other than a support structure. The interesting thing to focus on is all the stuff you can do with that HW — miniaturize it, attach it to your body, run it with very little power, make it aware of its surroundings — imagine where that stuff is going and what other things like it are coming, and then design a way of programming built out of that stuff. I believe this is how to carry on the spirit of the Alto, Dynamicland, et al.
d
Yup 😄
s
One thing faster machines are helping with is making the underlying stack on which we abstract less relevant. We can pick the stack on other features we value like ubiquity, security, and platform independence as long as the combined sw+hw stack performs well enough for our primary use cases. Btw, isn’t it remarkable how the folks that said dynamic languages are great but we can’t afford the 4x sw performance hit haven’t changed their mind after 10,000x in hw performance increases?
✔️ 1
i
What makes me sad is that the most minuscule of tasks — motion to photon, renaming files, opening windows in the file browser — take just long enough that you can perceive the latency. In many cases, things are less "snappy" than they were on Windows 2k and Mac OS 9, which themselves were less snappy than DOS and Apple II. We've improved 10,000x in capacity, at the expense of regressed instantaneity. And based on the choices made to unlock that 10,000x, it's going to be next to impossible to kill that perceptual latency. (Obligatory cite: https://danluu.com/input-lag/)
✔️ 1
s
I used to think the same until I went to the computer history museum and found the old desktops to be much slower than I recalled. Maybe memories are in relative-to-past (vs absolute) terms, so a memory tagged as “fast” meant fast wrt experiences with previous machines. My speed metric is time for booting, opening apps, using sliders, and moving windows, not key presses in a terminal.
👍 1
d
well I think we lurch backwards and forwards: things get stodgy then something comes along to make it snappy again. examples: cvs (snappy), then clearcase/subversion/etc (omg) then git (snappy!!); Netscape/IE (stodgy), firefox/chrome snappy!, then firefox went stodgy, but I understand they're making a comeback too.
👍 1
node snappy, then npm etc etc stodgy then yarn snappy then npm not so bad
but I'd rather we just binned the whole stack and started over tbh
I keep having to replace my phone when it goes stodgy, but I understand that's by design 😄
d
@Steve Dekorte: you're right, @Ivan Reese is right. The early Macintosh was painfully slow: minutes to start an app. But with no pre-emptive multitasking, no virtual memory, no L1/L2 cache, and synchronous execution of CPU instructions, it was a real-time system. With careful coding, you could guarantee that the latency of certain selected UI gestures involving the mouse or keyboard was below the perceptual threshold. And I definitely noticed when this ceased to be the case.
👍 2
i
Thanks @Doug Moen — exactly my point. I have a Mac SE in my closet, and I pull it out every once in a while when I want to be reminded what the old days were like. You press a key and the resulting action often (but not always) happens faster than on my i9 MBP. (Now, I just gotta find a copy of Hypercard somewhere...)
s
I took a moment to open a terminal and several text editors on OSX and couldn’t detect any lag between pressing a key and seeing it displayed. I also don’t notice any lag in Slack. Am I less sensitive to the lag or have I just become accustomed to it?
d
For me, the lag gets worse the longer my machine has been running since boot, and the more programs are loaded into memory. Switching to a program I haven't interacted with for a while, I'll see some lag that resolves once the code for the interaction has been loaded into cache. At some point, the lag grows past an inflexion point, and the only thing to do is reboot. Note that I have "only" 4GB of RAM, and I would probably have less lag with more memory.
s
Seems like some of you here will enjoy this: https://danluu.com/input-lag/
i
@Stefan I linked that exact article slightly higher up in the thread. Nice!
@Steve Dekorte I think I'm atypically sensitive to input lag for two reasons. One is that I play a lot of different video games on a lot of different devices, and I've learned how to detect things like "I need to hit the buttons on this console a hair earlier than normal because this TV is adding what feels like 50ms input lag" — and I often do things in games that require or reward frame-perfect timing at 60 Hz, so I'm used to the feel of 16ms increments (this is pretty common among gamers, though they might lack the technical language to articulate it). The other is that I play and record music, so I'm used to feeling rhythms "straight" or "in the pocket" or "early" or "late", which amounts to a +/- 20-100ms difference depending on the groove. I have a friend who can consistently place the feel of his drumming about 10ms ahead of the beat, which sounds amazing and confounds me.
So in this light, the motion-to-photon measured by Dan Luu being at best 30ms is shameful. Why can't we have 3ms motion to photon for something as simple as text editing? The speed of light is 299,792 m/ms. That's a lot of wire. And 3ms is only 10x ahead of where we were in the 80s, so if our hardware is really "10k times better", that should be easy. I think this puts the lie to the claim that HW is that much better. It's not — it's just increased in certain measurements by that much. Whether those measurements are useful, or matter, to things like programming tool design... is arguable.
On a practical level, I don't usually notice the input lag in most native Mac apps. But I do notice it in Electron apps, like Slack and Atom and Hyper, as those often introduce a frame or two of extra latency. Mouse lag is often much worse than keyboard lag, too, so much so that I'm willing to bet you've noticed it.
👍 1
Here's what we're up against — 3d games are doing relatively insane amounts of work in 8ms, but it still takes upwards of 50-100ms to get that work out to your eyeballs: http://www.chioka.in/what-is-motion-to-photon-latency/
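To make it concrete, here's an illustrative motion-to-photon budget. The per-stage numbers are assumptions picked to land in that 50-100ms range, not measurements of any particular system:
```python
# Illustrative motion-to-photon budget. The per-stage numbers are
# assumed typical values, not measurements — the point is that even an
# 8 ms frame of "real work" gets buried under queueing and scanout.
stages_ms = {
    "input device polling / USB":         8,   # e.g. ~125 Hz polling
    "OS / driver / event delivery":       5,
    "app or game loop (one 8 ms frame)":  8,
    "compositor queue (1-2 frames)":     25,
    "display scanout + pixel response":  15,
}

for stage, ms in stages_ms.items():
    print(f"{stage:36s} {ms:3d} ms")
print(f"{'total motion-to-photon':36s} {sum(stages_ms.values()):3d} ms")
```
With assumed numbers like these you land around 60ms even though the application itself only spent 8ms.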
d
On the original 128KB Macintosh, there was no lag when you resized a window (by grabbing a corner of the window and dragging with the mouse). The animation was: you drag around an outline of the new window boundaries, and the outline tracks your mouse with zero latency or lag. The window contents were redrawn when you released the mouse button. Window resize animation is laggy in all modern window managers. Google 'laggy window resize'. This bothers a lot of people. I tried the Cinnamon window manager on Linux a year or two after it was first released. All of the animations were so laggy that I found it unusable. That's hopefully fixed now, but maybe today's UI designers are less concerned about laggy interfaces than in the past?
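The technique itself is simple enough to sketch. Here's a minimal, purely illustrative version in Python/Tkinter (obviously not how QuickDraw did it): track a cheap outline while dragging, and defer the expensive redraw of the contents until the mouse is released.
```python
# Minimal sketch of "drag a cheap outline, redraw contents on release",
# in the spirit of the original Mac resize. Python/Tkinter is used only
# for illustration — not the actual implementation from the Mac.
import tkinter as tk

root = tk.Tk()
canvas = tk.Canvas(root, width=640, height=480, bg="white")
canvas.pack(fill="both", expand=True)

outline = canvas.create_rectangle(50, 50, 300, 200, outline="gray", dash=(4, 2))
content = canvas.create_text(60, 60, anchor="nw", text="(contents)")

def on_drag(event):
    # Cheap: only move the outline to follow the mouse.
    x0, y0, _, _ = canvas.coords(outline)
    canvas.coords(outline, x0, y0, event.x, event.y)

def on_release(event):
    # Expensive work (relayout, repaint) is deferred until the drag ends.
    x0, y0, x1, y1 = canvas.coords(outline)
    canvas.itemconfigure(content, text=f"(contents redrawn at {int(x1 - x0)}x{int(y1 - y0)})")

canvas.bind("<B1-Motion>", on_drag)
canvas.bind("<ButtonRelease-1>", on_release)
root.mainloop()
```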
s
I doubt today’s UI designers have much of a choice — you go through so many stacked layers of libraries and frameworks to change a pixel’s color on screen these days that as a UI designer you’re more or less completely removed from what’s happening when a frame is drawn. The graphics stack of your platform decides how laggy your window manager feels. If it feels as laggy as you describe, it usually means that it’s not even hardware accelerated, which, given today’s hardware architectures, is kind of expected. And if your platform is the web, then you’re going through an extra stack of however the browser decides to leverage the drawing APIs of the OS.
Rearchitecting the graphics stack is one of those areas where we will at least see some interesting progress driven by major platforms, because even with the capable hardware of today it’s still a performance bottleneck. And since the hardware architecture for graphics has changed so dramatically — from CPU to 2D acceleration to fixed pipeline 3D to dynamic pipeline 3D to fully customizable GPU compute — all the classic GUI frameworks are practically outdated and hard to adapt effectively to the new world of GPU compute. There’s an opportunity here for platforms to differentiate, which we will unfortunately pay for with fragmentation and graphics standards such as OpenGL becoming obsolete.
d
Onex is built on Vulkan. Just sayin'. 😄
s
While I like low latency user interfaces, does anyone feel significantly lower UI latency would actually make a noticeable improvement on their own productivity (outside of first person gaming)? If not, is it worthwhile to focus on it as an area of particular interest to FoC?
d
Good UX is clearly one important part of good FoC, and I would say we shouldn't think "outside of first person gaming" - Minecraft, for example, shows how satisfying UX can be in a creative environment. It's tactile, responsive, immediate. Surprising, considering it's Java!
I actually think we should add sound effects to actions in our IDEs 😄
👍 1
s
Low latency is not just a productivity thing (probably even less so at this point), it's how the UI “feels”. Now with gestures becoming more important on touch screen devices (and probably soon with AR), the UI needs to be capable of not just drawing at 60fps but also processing sensor input at that rate, or preferably an even higher multiple. Naturally, some people are more sensitive to how a UI feels than others who are more… flexible… in what they accept as an interface they like to work with. As we approach more direct manipulation and perhaps soon AR overlays on the real world, people will want the virtual UI to feel more like the real world — more physical, with kinetic scrolling, “bouncy” animations, visual/audio/haptic feedback, etc.
s
I hear you, and again, I’m not saying latency doesn’t need to be “good enough”. I just don’t think the sw stack needs to be rewritten to be good enough. I also agree that how UIs feel is extremely important, which is why I find it odd that most FoC related projects are painfully garish and cluttered. Is there a lack of sensitivity to how this feels?
🍰 1
s
I’d say yes, there’s a massive lack of sensitivity to this and to UI and interaction design in general, but then that’s an extremely biased opinion I formed as part of a job to educate and help developers with this on a platform that cares about such things (and which has in fact been rewriting the whole stack from the ground up, down to and including the hardware). Whether we have to take care of this ourselves is a separate question. I hope we don’t and platforms will take care of it for us. We “just” need to be aware and make sure that we adopt the new ways and APIs offered to take advantage of this.
g
This article and the one it links to in the first sentence show just one area where much of this power goes that my 8-bit computers didn't have to deal with. https://lord.io/blog/2019/text-editing-hates-you-too/
s
@Stefan Can you think of any examples of good design when it comes to FoC projects?
s
@Steve Dekorte What do you mean by “good” and what do you mean by “design”?
s
@Stefan as that answer would likely fill books (for either of us) maybe just use those words in whatever sense you normally do to make these conversations possible
s
@Steve Dekorte I'm glad we’re on the same page about the fuzziness of that question. I appreciate you trying to make this specific, but it sets us up for misunderstanding if our interpretations differ (which they likely will). With that disclaimer, here are a few things that came to my mind while pondering how to respond to your question:
• I think the original VPRI STEPS project was an appropriately (good) scoped and defined (design) research project. It had a very specific albeit pragmatic goal (<20k LOC) that all the research could be aligned towards. I don’t know about many FoC projects that are as bold as to limit their scope that aggressively. I wish there was a STEPS2020 project that is similar in spirit, but perhaps focused on a smartphone as a platform instead.
• Bret Victor's early demos are good examples of tastefully (good) designed interactions for tools. I'm leaning towards the future of programming being (a) for everyone (what some call “end-users”) and (b) likely not called coding or programming, and his early demos point roughly in that direction with their focus on immediate feedback. That we tolerate a horribly slow feedback loop of “simulating computation in our heads” is still the main difference between us programmers and people who “just use” computers.
• As will not surprise you, I am very bullish on the Swift programming language. I think it's exceptionally well architected for the changing requirements of the future device landscape (e.g. https://gankra.github.io/blah/swift-abi/). I believe your project can’t be designed for the future of programming if you only consider interaction modes from the past, like keyboard and pointing device.
• I like what Andy Matuschak and Michael Nielsen are doing (https://numinous.productions/ttft/). Some might say it's a little outside the scope of FoC. I’d say it’s right at the center of it.
• Nicky Case (https://ncase.me/) is a great example of someone who knows how to program today and does great things with that knowledge. I often think one big achievement of a FoC project could be to end up with more people like Nicky, enabling them to publish similar things, but without having to know nearly as much about programming.
Not sure if any of this is remotely what you were looking for. At least it was a good exercise for me to think more clearly about it.
👍 2
s
@Stefan Thanks, it’s interesting to hear your thoughts on it. When I talk about design, it’s usually about aesthetics (which includes look&feel aka UI&UX). What’s curious to me is that if you look at any list of projects related to visual programming, almost all (at least IMO) have visual designs that are cluttered and confused and generally break every rule of good design. I’m guessing this is because these are mostly programmers diving into the world of visual design, instead of visual designers getting into programming. Maybe more collaboration from people on both sides would be helpful, but I don’t know if there’s an awareness that these designs need help.
💯 1
s
@Steve Dekorte You’re pretty much describing the situation with mobile apps as well. Tons of developers try to build apps, few work with designers. You’re also touching on a related problem: many people think design is just visual aesthetics. That is for sure a part of it, but user experience means a lot more. Now that most regular people most of the time use pocket-sized supercomputers with touchscreens and all kinds of other sensors, how it feels becomes more important. Animations, fluid transitions, direct manipulation, continuous gestures, physicality, feedback, and integration become more important and will distinguish good design from great design, until what used to be considered good feels broken.
A typical use case for me is having four to six small computers, several of them wearable, on me and actively using them simultaneously. That’s phone, watch, two wireless headphone earpieces, and sometimes a tablet with a pencil all working in concert together. I skip a track by tapping on my watch although the music is actually playing on my phone and streamed to my headphones. The music stops playing when I remove one of the earpieces and continues by just putting it back in. I copy something on my phone and paste it on my tablet or notebook. There's not much visual UI involved, but that’s user experience.
There is a ton of potential in this kind of integration across devices. But people still discuss whether tablets are going to replace notebooks, like the web will finally make apps obsolete any day now. ;-)
s
@Stefan If it’s true that visual aesthetics, user experience, and systems design are all related to experiential sensitivity, then those insensitive to unnecessary complexity in one, may be more likely to have a similar blind spot in the others.
🍰 1
s
@Steve Dekorte That’s an interesting way to put it… 🤔 You (and people still following this thread) might find this interesting: http://www.jonkolko.com/writingAbductiveThinking.php I came at this from the perspective of abductive sensemaking as the more abstract process of design that can be applied to all the domains listed (and others). That is the level on which I see a connection between these domains — we’re all designers trying to solve problems. I figure the more specific advice I would like to synthesize from all that is something like, “Hey, let’s take a step back and look at what we’re trying to do before we start discussing implementation details.” The problem is that this step back takes us uncomfortably far out of our comfort zone of technical expertise into all kinds of weird, hand-wavy, unquantifiable, emotional stuff that’s harder to deal with, but arguably more important.
👍 1