# thinking-together
v
Core of a futuristic system - use cases. A) What things should the programmer be able to do, but cannot because of the limitations of today's computers and systems? B) What should the interaction between a programmer and a computer look like in the future? C) If you had 10,000 free, unencumbered development hours, what would you do?
w
Good questions. I'll give it a go... • Arbitrary limits? Good OS-level IPC. It existed in 19-bleeping-77, friends: https://dl.acm.org/doi/abs/10.1145/800214.806544! (For reasons that were entirely lost on me as a young man, my father gave me a physical copy of that paper as a gift.) • 10,000 hours? Should be enough to digest the links on my desktop. By then I hope I'd have a clearer picture. • Future of programmer interaction? Dancing.
v
I clarified the questions. I'm not clear on what you meant by the first answer. Second, having too many tabs and things to do is a problem we can all work on; not having time to read everything is a common problem, and what's more, knowing about the things you would like to read is even harder (because you do not know what you do not know) -- I think this community can help with that. Dancing would be ... interesting; maybe with better AR we'll get there soon 🙂
w
I was being a bit obscure for effect... What should we be able to do but can't, for no good reason? IPC should be a lot easier. It certainly is in a language like Go, but that's the exception. Right now I'm looking at the seven devices I happen to have within reach that don't reliably communicate with each other, despite that being half their job. With 10,000 hours, I just wanted to make the observation that what I would do still depends on learning a thing or fifty. As for dancing, it's the embodied aspect of AR/VR that holds more promise than the more obvious visual aspect.
👍 1
v
Universal communication between devices is exactly the kind of answer I was hoping for. It is basically sci-fi: we are in awe when some devices manage it, instead of it being standard, as in the movies 🙂
Regarding coding in AR/VR I wonder whether at some point we will be able to code by gesturing - moving us from coders towards wizards.
k
We should be able to understand programs we didn't write, even ones with lots of history. They shouldn't grow monotonically more complex and baroque over time. I'd be deliriously happy to have just this, without any new interactions. Oh, and in my futuristic utopia programmers would have a new duty: to inspect the software they use and raise concerns with how it's written. Be an immune system for society. Ask not what your computer can do for you; ask what you can do for your society's computers. Don't ask, "why does this not exist yet?" Be the change you want to see in the world.
n
A) Create apps of moderate complexity in days, not months or years. B) Depends on what you mean by "interaction". I'll answer as if you meant the hardware interface. Before augmented reality: same as now. After augmented reality: nobody owns a monitor anymore; instead, everyone owns a pair of AR glasses. Initially, everyone carries around a collapsible keyboard (instead of a laptop) for productivity on the go. Eventually, wrist-worn sensors become good enough that people can type efficiently via tiny hand-movements.

Something like this [embedded media elided], but on your wrist.

After brain-computer interfaces: textual (and other) input via subvocal recognition. (In fact, maybe we won't even need brain probes for that.)

You'll notice my answer implies that I believe programmers will continue to use text-based input for the foreseeable future. Indeed I think this will be the case: natural language is the most dense visual (and aural) representation of arbitrary information that humans can efficiently encode and decode (we can decode visual imagery quickly, but we can't encode/construct it quickly, and it's a poor medium for describing behaviour). I'd wager that successful programming languages will (during our lifetimes) always rely on natural language. After all, programs are nothing more than a description of real-world behaviours that the computer should enact, and humans have always described real-world behaviours (e.g. "run away from the lion") using natural language. We've evolved to be particularly good at it.

C) I'd continue to do research and design toward a programming environment that makes programming orders of magnitude simpler and more efficient, as I am currently (when I'm not employed).
w
@Václav Blažej On "AR/VR ... code by gesturing - moving us from coders towards wizards," I go in the direction of CAD, info-vis, and, I guess, Hypermedia. Suppose we take some CAD-like shapes and manipulations as our primitive data and operators. This base quickly becomes a fairly rich programming environment by adding templates/holes/lambdas, a way to reify the history of operations (which enables scripting), and a parameterizing/monadic-mapping sort of wreath product to better explore/visualize the possibility space.

Say we have a little model here. I select some measurements: a length, an angle. Factor them out, so now we have a function that, given a length and an angle, returns a model. Go grab a ruler (representing a range of lengths) while I grab a protractor (representing a range of angles). Apply your ruler and my protractor, and now we have a grid of variants. Select a pretty path through that space where the models "look okay," and then we can factor the path out as a new measurement relating length to angle. From there we might step out of the session (reifying the steps), so as to make a workflow that, given different models, returns different measurement relations. The trick is hinting the system as to which length/model/path to pick.

Or, as another example, I want to see a spatial hypertext edition of the Complete Tales of Beatrix Potter, so that you can go through the stories in their standard order, in chronological order, or by following one character, Benjamin Bunny, until you find the more interesting and unsavory Mr. Tod. Same deal for any complicated text. Mostly, I've explored doing this with execution traces. The same general techniques apply. Huh. Guess it could work for "total situation awareness" (surveillance) too. Man, that was a long time ago.
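[Editor's sketch] A toy illustration, in Python, of that factoring-out move, with a stand-in make_model in place of real CAD shapes; the model, the ranges, and the chosen path here are all hypothetical.

```python
# A toy stand-in for CAD: a "model" is just the endpoint of a segment.
# Factoring out the two selected measurements (length, angle) turns the
# concrete model into a function from measurements to models.
import math

def make_model(length, angle):
    # Hypothetical model constructor; real CAD shapes would go here.
    return (length * math.cos(angle), length * math.sin(angle))

ruler = [1.0, 2.0, 3.0]                       # a range of lengths
protractor = [0.0, math.pi / 6, math.pi / 3]  # a range of angles

# Applying the ruler and the protractor yields a grid of variants.
grid = {(l, a): make_model(l, a) for l in ruler for a in protractor}

# A "pretty path" through the grid relates length to angle: a new,
# derived measurement, reusable as a function in its own right.
path = {1.0: 0.0, 2.0: math.pi / 6, 3.0: math.pi / 3}
models_on_path = [make_model(l, a) for l, a in path.items()]
```

Reifying the session would then mean recording the steps (select, factor, sweep, pick a path) as data, so the same workflow could be replayed against a different starting model.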
@Nick Smith "wrist-worn sensors ... type efficiently via tiny hand-movements ... subvocal recognition" — when advising a friend on a science-fiction play, I suggested that the actors, when addressing their ambient digital assistants, would have the habit of swiping with a finger as a way of signaling to the computer that it was being addressed, kind of like in the old days when people would touch physical screens in order to type. Then if they needed to manipulate something projected in the air instead of touching it directly, they would point with their index finger and grab with their other fingers. That can be more precise than the kind of pinching you do on the HoloLens and Quest. I honestly thought we would have tracked gloves before gesture recognizers were good enough to work without assistance — not that it's quite there yet.
n
@wtaysom
I honestly thought we would have tracked gloves before gesture recognizers were good enough to work without assistance
I'd wager tracked gloves never happened because they aren't socially acceptable and/or convenient enough. Entrepreneurs probably concluded that the number of people willing to wear fancy gloves in the niche markets where such an input device would be beneficial wasn't high enough to justify the venture. The current state of VR demonstrates how little inconvenience people are willing to tolerate for a given benefit. I never use my Oculus Quest, because even though it's sitting right there, clearing a space in the room before putting the headset on (if it hasn't gone flat) and strapping on the controllers is enough of a nuisance that I don't bother to "pop in" and see what's new.
v
@Kartik Agaram I like the points about easier understanding of old code, its aging, and review, but the last line has me puzzled. To understand what we need to change, we should understand why it has not been done so far, because there may be crucial reasons that make the idea a non-starter. Also, any change needs widespread adoption to become a society-level change, which is by itself a big undertaking. (You may say I'm just against misleading motivational mottos, but maybe I'm understanding it incorrectly.)
k
Certainly, if it's fruitful. But it doesn't often seem so to me. The fact that computers are less than a hundred years old seems to explain all the observations. We tend to be used to having a lot done for us in the world of atoms, but in the world of bits we're still in the stone age. We don't have many things because nobody has built them yet. I don't consider my last paragraph just mottos. I'm not a good enough writer to come up with better phrasing than cliches, but the underlying sentiments seem useful. We're most of us here occupying privileged positions in an individualistic society. I for one certainly need periodic reminders to eat my vegetables.
Apologies if my original comment sounded snarky or dismissive. It was intended only to be vehement. Basically my belief system on your question is:
• The future is a pyramid with more than one summit. There's a million different cool UI ideas we can try.
• However, they will need to share certain common foundations.
• We don't have good foundations, and I believe the foundations we currently have compromise every experiment we run on whether something higher up is a good idea or not. The current foundation is ok for short-term problem solving but terrible for long-term vision. So FoC folks in particular need to think about the foundations.
• When I look around, most people here aren't working on the foundations. We tend to gravitate towards the cool interactions. They make for better demos. But they also tend to have a hard time leaving the demo stage. We're wallowing in fun desserts, but there's a collective unwillingness to eat our vegetables.
My original answer is my current best guess at the most foundational problem of all. Today all our ways to work together rely on compatibility. Compatibility is in effect a complexity ratchet. That's not sustainable in the long term. If we fixed that I think it would release tremendous energy to explore multiple summits.
Now, it's possible I'm wrong. We do see new platforms created fairly regularly. And people do port good ideas to new platforms regularly as well. But we also seem to "port" the old social arrangements and incentives, the way we organize. On balance I'm still skeptical that our foundations are ok.
What does a better foundation look like? I'm not sure, but I feel confident that it's more parsimonious. The more software we depend on for our FoC projects, the more likely they are to fall into disrepair because we lose the energy to keep them up to date. On the other hand, if we have too little then people are less likely to try out our projects and we get less signal on what works and what doesn't. Doing a lot with as little as possible feels like the crucial problem for FoC.
v
@Kartik Agaram
However, they will need to share certain common foundations.
What do you mean by foundations? In UI design I can see that, but on the backend there can be many different underlying systems, as long as they can talk to each other.
most people here aren't working on the foundations
I'd like to do that but I'm not sure if our context is the same. I guess we'll see in my posts in the future.
We do see new platforms created fairly regularly.
Again, can you please expand on what you mean by platforms? Like programming languages / frameworks / OSs?
We tend to gravitate towards the cool interactions.
I wonder, is FoC composed mainly of people who do UI?
k
I was deliberately leaving things open ended. No matter what level you're at, I think there's some value in thinking about what's below you. Some projects may intrinsically need multiple interoperating backends, but if yours doesn't, if that's just an implementation detail, then having 2 where 1 will do is just adding fragility. Rule of thumb: each level should change 10x more slowly than the level above. I think that's not the case in software like it is in hardware. When it's not the case, it's a sign that some layer isn't pulling its weight. The stack needs to streamline by dropping layers. This is where we can all contribute a little at a time.
v
About human-PC interaction: I find it quite difficult to find well-defined information (where fuzzy search should not be necessary). What I mean is pure data, such as the weather in New York yesterday, or the best known way to compute a Fourier transform, or the prices of oranges in Germany over the last year, etc. You say to yourself: it's not a problem, I'll just put it into a search bar ... I'd expect these to be answered by faceted search over well-defined domains. Imagine you want to find something about a notebook model from 2022 in the year 3022; I don't think fuzzy search would be an appropriate tool for that.
👍🏼 1
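[Editor's sketch] To make the contrast with fuzzy search concrete, a minimal sketch of faceted lookup over a well-defined domain; the records and facet names are invented for illustration.

```python
# Hypothetical structured records; in reality these would live in a
# curated, well-defined dataset rather than a Python list.
records = [
    {"kind": "price", "item": "oranges", "country": "Germany", "year": 2023, "eur_per_kg": 2.1},
    {"kind": "price", "item": "oranges", "country": "Germany", "year": 2024, "eur_per_kg": 2.4},
    {"kind": "weather", "city": "New York", "date": "2024-05-01", "high_c": 21},
]

def facet_query(records, **facets):
    # Exact match on every given facet: no ranking, no guessing.
    return [r for r in records
            if all(r.get(k) == v for k, v in facets.items())]

# "Prices of oranges in Germany" as a structured query, not a text blob:
oranges_de = facet_query(records, kind="price", item="oranges", country="Germany")
```

The point is that the query names its facets explicitly, so in the year 3022 it would still mean exactly one thing.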
r
Regarding coding in AR/VR I wonder whether at some point we will be able to code by gesturing - moving us from coders towards wizards.
This was basically what Jaron Lanier was exploring with phenotropic programming (https://www.cl.cam.ac.uk/~mcm79/ppig/files/2018-PPIG-29th-lewis.pdf), though sadly only his accounts remain as evidence.
I honestly thought we would have tracked gloves before gesture recognizers were good enough to work without assistance
The VPL glove was big in the first VR craze, including Lanier's experiments in programming. Hand tracking that doesn't require hardware is a huge plus, especially hardware that would need to adapt to fit a wide variety of hand sizes. Gloves could provide force feedback, though, which makes things feel a lot more "real".
As for my takes on the original topic: A) the ability to find out "why" any part of a system came to be the way it is. B) a natural voice interface + VR holography sounds very neat, though honestly basic keyboard+mouse interfaces are a struggle now, so I don't see how we get there without some deep changes. C) build a killer demo that gets more collaborators and funders on board.
v
@Riley Stewart killer demo of what? (you probably talked about this in another thread?)
w
@Riley Stewart Even the lightest spatial haptic feedback feels pretty magical. Without any resistance, you can still make objects that feel light and squishy.
r
@Václav Blažej an object-oriented programming environment, more details to come
v
Another thing that comes to mind is how hard it seems to be to make code follow project standards. Meaning that in your project you have some things that you want to always be true (like: a view will call the controller, and mustn't call model methods). We can attempt to enforce these with language barriers, and some frameworks do this for us by prescribing how things should be done, but I feel one should be able to define these rules for oneself and code a 'checker' which can check for these rules and give advice on how things should be done in the project. This would also serve as part of the documentation for developers.
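[Editor's sketch] A minimal sketch of such a checker, assuming Python and one hypothetical rule: files in the views layer must not import from a models package. A real tool would read its rules from project configuration and check naming, documentation, and the rest the same way.

```python
# Hypothetical project-rule checker (one rule only, for illustration):
# code in the views layer must not import from the models layer.
import ast
import sys

FORBIDDEN = "models"  # hypothetical name of the model-layer package

def check_view_file(path):
    """Return (line, message) pairs for every model-layer import."""
    tree = ast.parse(open(path).read(), filename=path)
    problems = []
    for node in ast.walk(tree):
        if isinstance(node, ast.Import):
            for alias in node.names:
                if alias.name.split(".")[0] == FORBIDDEN:
                    problems.append((node.lineno, f"import of {alias.name}"))
        elif isinstance(node, ast.ImportFrom):
            if node.module and node.module.split(".")[0] == FORBIDDEN:
                problems.append((node.lineno, f"import from {node.module}"))
    return problems

if __name__ == "__main__":
    for path in sys.argv[1:]:
        for lineno, msg in check_view_file(path):
            # Advice, not just an error: point at the sanctioned route.
            print(f"{path}:{lineno}: {msg} -- views should go through the controller")
```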
w
"enforce the code to follow project standards" — Ah, my long neglected friend Aspect-oriented Programming!
v
Nice; I didn't know this could be addressed with AOP, but it seems interest in the concept has been waning for a while now. It also seems to cover only part of the issue. What I mean is that we should be able to describe standards in one place, no matter which part of the project the standard talks about. E.g. we may want to define standard file names and file structure, standardize method names, enforce documentation, enforce standard syntax, check for inefficient implementations of standard constructs (like bad regexes), and so on.
v
very nice 🙂
🙂 1
w
Yeah, I don't know why AOP hasn't caught on more, because from a software-system-architecture perspective it lets you say all sorts of useful things that aren't really suited to a regular type system. Like five minutes ago I was writing, in essence, "When you implement this ImportantHandler, its importantMethod should set importantInstanceVariable1, importantInstanceVariable2, and importantInstanceVariable3, but nothing else should."
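[Editor's sketch] For illustration, a minimal sketch of that "should set these, and nothing else" advice expressed as a Python decorator rather than a real AOP weaver; ImportantHandler and the variable names just echo the hypothetical ones above.

```python
import functools

def must_set(*attrs):
    """Check that a method creates/changes exactly these attributes."""
    def decorate(method):
        @functools.wraps(method)
        def wrapper(self, *args, **kwargs):
            before = dict(vars(self))  # shallow snapshot of instance state
            result = method(self, *args, **kwargs)
            after = vars(self)
            changed = {k for k in after
                       if k not in before or before[k] != after[k]}
            missing = set(attrs) - changed
            extra = changed - set(attrs)
            assert not missing, f"{method.__name__} failed to set {missing}"
            assert not extra, f"{method.__name__} unexpectedly set {extra}"
            return result
        return wrapper
    return decorate

class ImportantHandler:
    @must_set("importantInstanceVariable1",
              "importantInstanceVariable2",
              "importantInstanceVariable3")
    def importantMethod(self):
        self.importantInstanceVariable1 = 1
        self.importantInstanceVariable2 = 2
        self.importantInstanceVariable3 = 3
```

Unlike classic AOP advice, this never alters what importantMethod does; it only observes and complains, which is the checking-only use discussed below.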
v
Tied to the original topic and my answer about "Universal communication": I also miss an easy way to work with simple data in most programming languages. Usually you download a library which manages some conversion, but shouldn't this be built in? As our abstractions get more abstract, these atomic operations need to move from machine instructions to a wider range of operations.
k
One knock against AOP in general is that it involves mutations to arbitrary points in a codebase, which is too much action at a distance, particularly for large teams. It's just too easy to end up with spaghetti. However, I'm not sure anyone's tried using AOP just for checking, without modifying the codebase, as you suggest, @wtaysom 🤔
w
Interesting. In my mind, a feature is most aspect-appropriate when you can turn it on or off without conceptually affecting the semantics of the code being advised. Otherwise you're implementing the concern itself rather than some tangentially related cross-cutting concern. So validation, error recovery, and logging (especially logging) are my idea of typical aspects. I mean, sometimes the aspect is essential: it's nice to use an aspect to acquire locks in some standard way, or for complex memoization, or, hey, right now I'm writing something to sandbox changes you make to model objects so that you can roll everything back in case something goes wrong late in the process.
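[Editor's sketch] A minimal sketch of that sandbox-and-rollback idea, assuming plain Python objects whose state lives in instance attributes; a real version would also have to cover shared and external state.

```python
from contextlib import contextmanager
import copy

@contextmanager
def sandbox(*objects):
    """Snapshot each object's attributes; restore them if the block raises."""
    snapshots = [(obj, copy.deepcopy(vars(obj))) for obj in objects]
    try:
        yield
    except Exception:
        for obj, saved in snapshots:
            obj.__dict__.clear()
            obj.__dict__.update(saved)
        raise

# Hypothetical usage, where `order` is some model object:
#   with sandbox(order):
#       order.status = "shipped"
#       charge_card(order)  # if this raises, order.status rolls back
```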
k
Yeah you're right. I think we're saying the same thing. I consider all those to be changing the source code. Features. But imagine aspects that never change a line of code in the program.