# share-your-work
d
My latest article: Spatial Computing with links not apps Freedom from the Metal to the Metaverse https://duncancragg.substack.com/p/spatial-computing-with-links-not?r=1sq2dz&utm_campaign=post&utm_medium=web
Re-imagining Apple's Spatial Computing by dropping the desktop metaphor and the apps, and having pervasive links instead! Let me know what you think...
Calling out to @Konrad Hinsen @irvin @Naveen Michaud-Agrawal @Kartik Agaram for some of your usual tough challenges! 😄
k
What I see as the main challenge in your vision is dealing with lots of links. In a heavily linked universe, you only want to see a subset in any given situation. For example, when I look at an event in my agenda, I may want to see all e-mails that mention that date (they could be about other events with time conflicts) but not all e-mails sent on that date. In a different context, the latter may be exactly what I want to see. A dominantly spatial organization of data would not support such distinctions.
n
Your object network is similar to Alexander Obenauer's work on OLLOS - https://alexanderobenauer.com/ollos/
Would be interesting to think about how a timeline-based frame of reference would work in VR
d
@Konrad Hinsen yes, you'd need ways of managing sets and sequences of lists. Lists can be viewed spatially - walking around a gallery is a good visual example. How lists are created is the "internal animation" aspect: a list would manifest itself based on the objects around it - such as the emails.

@Naveen Michaud-Agrawal yes, I noticed OLLOS and watched the video when Obenauer posted it; that kind of thing would, as in the cases above, be done as features and functionality on top of what I'm building - as "internal animations" of lists - so you'd have a time-ordered list. It would indeed be interesting to explore 3D renders of that - again, the gallery visualisation comes to mind, where you could walk back in time past objects on the wall, in time order. I am also hoping to implement intelligent type deduction: if you type a date, contact details, or a location, it's automatically detected and a typed object is created.

This is all future work, of course; I'm hoping to find the time to get to a first simple demo, based on the code I have so far, within the next six months.
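To make the type deduction idea concrete, here's a minimal sketch of what it might look like - the patterns and object shapes are invented for illustration, not from my actual codebase:

```javascript
// Hypothetical type deduction: scan free text for recognisable values
// and wrap each match in a typed object. Patterns are illustrative only.
const detectors = [
  { type: "date",  pattern: /\b\d{4}-\d{2}-\d{2}\b/g },
  { type: "email", pattern: /\b[\w.+-]+@[\w-]+\.[\w.-]+\b/g },
];

function deduceTypes(text) {
  const objects = [];
  for (const { type, pattern } of detectors) {
    for (const match of text.matchAll(pattern)) {
      objects.push({ type, value: match[0] });
    }
  }
  return objects;
}
```

So typing "Meet alice@example.com on 2024-05-01" would yield a date object and a contact object that other parts of the system (lists, links) could then pick up.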
k
@Duncan Cragg I see the challenge not in displaying links and providing interaction, but in searching and filtering large collections, in particular of links.
d
OK, good point. I guess that's essentially going to be the same challenge as a MongoDB query. Making that accessible to non-techies is the issue.
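For instance, Konrad's two contexts could be two different filters over the same objects - a sketch with invented field names ("sentOn", "body"):

```javascript
// Two contexts, two filters over the same hypothetical email objects.
const emails = [
  { subject: "Team offsite", sentOn: "2024-05-01", body: "Offsite is on 2024-06-15" },
  { subject: "Re: budget",   sentOn: "2024-06-15", body: "Numbers attached" },
];

// "All e-mails that mention that date" (possible time conflicts):
const mentioning = date => emails.filter(e => e.body.includes(date));

// "All e-mails sent on that date":
const sentOn = date => emails.filter(e => e.sentOn === date);

// The rough MongoDB equivalents would be:
//   db.emails.find({ body: { $regex: "2024-06-15" } })
//   db.emails.find({ sentOn: "2024-06-15" })
```

The hard part, as you say, isn't expressing these queries - it's giving non-techies a way to switch between them without writing anything like the above.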
I wrote a shorter version of the article I posted, so if you've not read it yet, you will find this one an easier read: https://duncancragg.substack.com/p/spatial-computing-with-links-not-apps?r=1sq2dz&utm_campaign=post&utm_medium=web
g
FYI: meta-comment - I've had interesting successes pasting URLs into the Kagi Universal Summarizer and full text into ChatGPT. At worst, this often helps me see how others perceive my writing and prompts me to refactor and re-word. @Duncan Cragg
k
Thanks! I don't know much about this subject, so my only reaction is: what do you think of transclusion?

The hyperlink works in a standardized context: the web browser's notion of a DOM tree. And classic hyperlinks implicitly replace the entire DOM when you click on a link. However this is a poor fit for VR; you have 10x more space, and probably 1/10x the situations where you want to replace the entire tree. On the other extreme, consider (ha) a native app. What can happen when you click on some UI element? Anything is possible. The possibility space is a vast superset of links, at the cost of the standardization and interoperability that characterizes hyperlinks.

One way I tend to think about the space is in terms of the target. Hyperlinks have a reference to a payload and a target the payload is placed in. HTML defaults the target to the whole tree, and you can also set the target to _blank to get a whole new tab/tree without losing the current one. And that's it.

One way to think about progressive web apps is as an app-specific map of possible targets. It's an interesting question to ask what a generic map might look like that covers 95% of possible use cases. Might VR make it easier or harder to come up with such a generic map? I have no idea.
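A toy model of those two built-in target behaviours, just to pin down how small HTML's "map of targets" really is ("trees" stands in for tabs; the node shapes are invented):

```javascript
// Toy model of classic hyperlink target semantics.
let trees = [{ id: "main", content: "page A" }];

function follow(link) {
  if (link.target === "_blank") {
    // _blank: open a whole new tree without losing the current one.
    trees.push({ id: "tab" + trees.length, content: link.payload });
  } else {
    // Default: the payload replaces the entire current tree.
    trees[0] = { id: "main", content: link.payload };
  }
}
```

A generic VR target map would presumably need many more cases than these two - replacing or populating arbitrary sub-regions of the scene rather than whole trees.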
d
> The hyperlink works in a standardized context: the web browser's notion of a DOM tree. And classic hyperlinks implicitly replace the entire DOM when you click on a link. However this is a poor fit for VR; you have 10x more space, and probably 1/10x the situations where you want to replace the entire tree.
I've been working on exactly this in my notes, comparing the DOM with a scenegraph and thinking about transclusion, or what I'm calling "seamlessness". My approach boils down to a single global shared DOM with URLs between chunks of it! You wouldn't do that on the Web, but it's exactly what you need for the "Metaverse": a single global scenegraph.
Everything is in smaller chunks, and the whole world is built from transclusion.
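A minimal sketch of that single shared scenegraph, where children can be either inline nodes or URLs to other chunks, transcluded on demand - the "obj://" scheme and node shapes here are invented for illustration:

```javascript
// Chunks living elsewhere in the shared graph, addressed by URL.
const remoteChunks = {
  "obj://gallery/painting-1": { type: "image", src: "sunset.png" },
};

const room = {
  type: "room",
  children: [
    { type: "wall", children: ["obj://gallery/painting-1"] }, // a link, not a copy
  ],
};

// Transclusion: swap URL children for the chunks they point to,
// e.g. when the viewer walks close enough to need them.
function transclude(node) {
  node.children = node.children.map(c =>
    typeof c === "string" ? remoteChunks[c] : c
  );
  return node;
}
```

So nothing is ever copied into the scene; the wall just holds a link, and the painting appears seamlessly when resolved.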
@guitarvydas Wow, thanks once again for a great reply. I didn't know about Kagi:
> Spatial Computing, as presented by Apple, still relies on the traditional desktop and app model when operating in a virtual 3D space. The author argues we would be better off without apps and instead have all digital content like documents, photos, and media scattered freely in the virtual space. Links could then be used to connect different pieces of content together and allow them to be shared between users in an open shared virtual world. This would create a parallel digital reality where users could explore shared 3D environments and interact with each other's digital content. The author envisions a more open and collaborative version of Spatial Computing where users build upon each others' contributions in a shared virtual space without the restrictions of separate apps.