# thinking-together
o
Somewhere else in this Slack, @Stefan wrote:
I love seeing apps experimenting with what touch and mobile can add to traditional workflows instead of just looking at their limitations.
Every time a new “future” of coding app is presented, and it only works with mouse and keyboard and requires a large screen, I’m a little sad about the missed opportunity.
Interesting topic that I'd like to bike-shed / think about together. I fully agree that touch devices and small-screen devices are opportunities to imagine new ways of programming. But I feel that it is quite hard to address, in particular small screens. Even with text programming, I always feel the need for a big screen; I can't imagine doing this on a phone. And for visual programming, having a large screen makes it possible to adopt some "map"-like organisation of your programming artifacts, to have a nice global view of what you are doing. And something that is challenging is to design a visual representation/manipulation that works well for both large screen/keyboard-and-mouse and small screen/touch. What do you think of that? Are there examples of programming environments that succeeded in this space?
🍿 1
o
On a related note, one of the things I recently thought about was how programming would look if the input mechanism were a pen. Would that have a greater affordance for visual programming, for example?
🤔 3
👍 3
o
A pen is great at drawing things, so when can programming benefit from a drawing input device? Maybe one can imagine a visual language that accepts fuzzy drawings of programming artifacts, where "drawn" characteristics (size, shape...) can be part of the language? Bonus: a program could live on a paper sheet or whiteboard, be scanned, and executed.
👍 2
💯 2
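The "drawn characteristics as semantics" idea can be sketched concretely. Below is a toy Python interpreter over hypothetical shape-recognition output; the `Shape` type and the circle/box semantics are invented for illustration: a circle's drawn size becomes a numeric literal, and a box sums the circles drawn before it.

```python
from dataclasses import dataclass

@dataclass
class Shape:
    kind: str    # "circle", "box", ... (hypothetical recognizer output)
    size: float  # drawn size; here it is part of the language's meaning

def interpret(shapes):
    """Toy semantics: circles push numeric literals derived from their
    drawn size, and a box emits the sum of the circles before it."""
    stack, out = [], []
    for s in shapes:
        if s.kind == "circle":
            stack.append(round(s.size))  # the drawn size carries meaning
        elif s.kind == "box":
            out.append(sum(stack))
            stack = []
    return out

# Two circles (sizes 2.6 and 1.2) followed by a box → one summed value.
print(interpret([Shape("circle", 2.6), Shape("circle", 1.2), Shape("box", 5.0)]))  # → [4]
```

A real version would sit behind a sketch recognizer (or a scanner, for the paper-sheet bonus), but the pipeline shape would be the same: recognized shapes in, program semantics out.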
o
Yeah, actually someone shared a video of this in the group earlier, a prototype from Xerox I think (so not entirely my original idea). I will look for it
m
I am struggling with this feature for the visual programming tool that I am building; there's already some touch support, but not 100% as great as the mouse support. The small screen is the biggest challenge, though. What I want to try is to have a collapsible tree instead of the nodes with arrows on a small screen. The challenge there is pieces of flow that represent a loop, or a node that is connected to from multiple nodes. I would love to be able to do some programming work on my mobile
The shade app (https://shade.to/) that was shared on the #graphics channel looks awesome, but on mobile the small screen is an issue, I think
o
Found it by checking my YouTube history instead

https://www.youtube.com/watch?v=QQhVQ1UG6aM

amiga tick 1
👍 1
i
Ivan Sutherland's Sketchpad is the classic "programming with a pen" example.
2
amiga tick 4
r
In an earlier thread someone linked the metamuse podcast and in the first episode they talk about some stylus interfaces they've been experimenting with and how input compares to the desktop. We're still in the early days of touch interfaces and they're ripe for innovation! https://www.listennotes.com/podcasts/metamuse/1-tool-switching-GYobjearH3_/
❤️ 1
a
To make the small screen work, you probably want a ZUI for browsing the code. For touch/pen, I think the end state is a rich (i.e. steep-learning-curve) gesture language for control structures et al.; otherwise you're wasting a lot of the potential bandwidth of touch. For identifiers (variables, library functions), maybe select from a menu of things in scope? Btw, radial menus are criminally underused IMO. :)
👍 4
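The radial-menu suggestion is easy to prototype: at its core, a radial menu just maps a touch point to an angular sector around the menu centre. A minimal sketch (the function name and dead-zone value are assumptions, not from any real toolkit):

```python
import math

def radial_pick(cx, cy, x, y, items, dead_zone=20.0):
    """Hypothetical radial-menu hit test: map a touch point (x, y) around
    a menu centred at (cx, cy) to one of `items`, with one equal sector
    per item. Returns None inside the central dead zone (no selection)."""
    dx, dy = x - cx, y - cy
    if math.hypot(dx, dy) < dead_zone:
        return None
    angle = math.atan2(dy, dx) % (2 * math.pi)  # 0..2π, 0 = dragging east
    sector = 2 * math.pi / len(items)
    return items[int(angle // sector)]

# Dragging straight right from the centre selects the first item.
print(radial_pick(0, 0, 100, 0, ["if", "for", "while", "fn"]))  # → if
```

The dead zone is what makes radial menus forgiving on touch: a tap that barely moves selects nothing, and selection accuracy grows with drag distance.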
Relating more directly to drawn visual languages, here's an example of a graphical human conlang: https://s.ai/nlws/ (I think someone posted it here not too long ago, so you may have seen it already (: ). Among other things, it features (essentially) higher-order glyphs, which I think would be important for a drawn programming language.
👍 2
❤️ 2
i
I think voice input is also still quite underutilized. I think it makes sense to augment the UI with voice commands, rather than relying exclusively on audio commands. I gather the speed of interacting with an app would be greatly increased, while still having a small-surface audio language that could be easily deciphered by the app.
👍 1
💯 4
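A "small-surface audio language" can be as simple as a fixed phrase table matched by prefix against the recognizer's transcript. A hedged sketch with invented command names (this is not any real speech API, just the dispatch layer that would sit behind one):

```python
# Hypothetical vocabulary: deliberately small, so recognized phrases
# can be deciphered unambiguously by the app.
COMMANDS = {
    ("go", "to", "definition"): "editor.goto_definition",
    ("rename",): "editor.rename_symbol",
    ("run", "tests"): "editor.run_tests",
}

def dispatch(transcript):
    """Match a transcript against the vocabulary by phrase prefix,
    treating any remaining words as the command's argument."""
    words = transcript.split()
    for phrase, action in COMMANDS.items():
        head = tuple(w.lower() for w in words[:len(phrase)])
        if head == phrase:
            return action, " ".join(words[len(phrase):])
    return None, transcript  # unrecognized: fall through untouched

print(dispatch("Rename userCount"))  # → ('editor.rename_symbol', 'userCount')
```

Because commands augment rather than replace the UI, anything that falls through can still be handled by touch or keyboard.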
o
+1 that a ZUI is needed. And a good one, that can really change the representation of program artifacts based on zoom level.
r
Anyone know of any modern equivalents of Microsoft's Code Canvas? https://www.microsoft.com/en-us/research/project/code-canvas/ It's not particularly innovative when considering touch and stylus as an interface, or even in exploring the possibilities of a visual system - but I see it as an incremental and practical step in moving from a fully-featured document-based coding system to a fully-featured canvas-based coding system.
d
@Ionuț G. Stan I found https://serenade.ai recently, and it looks really cool
❤️ 1
👍 4
i
@Daniel Garcia oh, never heard of it! Thanks for the pointer, I'll investigate 👀
r
since we are talking about alternative computer interaction and VR, I am excited about eye tracking. Eye tracking is getting a lot of development because of VR, but I think it's an underrated interaction paradigm on its own. Here is a simple example: I have two monitors set up while coding. On one monitor I have my code editor, and the other monitor has some reference. This is all fairly standard. I can't tell you how many times I wanted to scroll or do some operation on the reference monitor, but the text editor was the active application, so I accidentally scrolled the code instead of the reference, losing my place. Harmless, but quite annoying, especially in a complicated code base. If I had eye tracking, the interface could detect what I was looking at and keep the correct window active. This is a pedestrian example, but I think there is a lot of powerful UX that could be enabled through decent eye tracking.
❤️ 2
4
💯 1
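The focus-follows-gaze idea described above reduces to a hit test: route input to whichever window contains the current gaze point. A minimal sketch with hypothetical window rectangles (real eye trackers report gaze in screen coordinates, which is all this needs):

```python
from dataclasses import dataclass

@dataclass
class Window:
    name: str
    x: int
    y: int
    w: int
    h: int

def window_under_gaze(windows, gx, gy):
    """Return the window whose rectangle contains the gaze point, so
    scroll events go where the user is looking rather than to the last
    clicked window. Later entries are topmost and win on overlap."""
    hit = None
    for win in windows:
        if win.x <= gx < win.x + win.w and win.y <= gy < win.y + win.h:
            hit = win
    return hit

# Two side-by-side 1920x1080 monitors: editor on the left, reference
# on the right. A gaze point on the right monitor picks the reference.
monitors = [Window("editor", 0, 0, 1920, 1080),
            Window("reference", 1920, 0, 1920, 1080)]
print(window_under_gaze(monitors, 2500, 400).name)  # → reference
```

A production version would debounce this (gaze jitters constantly), only switching focus after the gaze dwells in a window for some threshold.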
d
I wanted to buy an eye tracking device from https://theeyetribe.com/theeyetribe.com/about/index.html some time ago; I'm not sure if they are still selling them. I have imagined interacting with computers only with eye tracking and BCIs (Brain-Computer Interfaces)!!
❤️ 2
🤔 1
r
Looks like they were bought and absorbed by Facebook and Oculus https://en.wikipedia.org/wiki/The_Eye_Tribe
😢 1
o
BCI is cool but also scary. What’s to say the input would be one way? 😬😬😬😬😬😬
😱 1
😝 1
🤯 1
d
TMS is indeed scary
😬 1
j
@Ionuț G. Stan Strong agree! In my early experiments with building gestural interfaces I quickly found that gesture + voice was way more effective than either alone.