# thinking-together
n
Question for anyone in the know - was there ever a formal system description of the RealTalk system powering DynamicLand? My current understanding has mostly been pieced together from tweets and a few of @osnr's more detailed blogposts. As I currently understand it, the system functions mostly as a federated, Linda-like tuple space where wishes and claims are evaluated 60 times a second and acted upon, with a base set of verbs and nouns (mostly related to the hardware) implemented in the system.
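To check that mental model, here's roughly how I picture the core loop in plain Lua. This is entirely my own sketch, not anything from the actual implementation, and `claim`/`when` are just placeholder names:
```lua
-- Purely a sketch of my mental model in plain Lua, not RealTalk code.
-- A shared space of claims, re-derived and re-matched every frame (~60x/sec).

local claims = {}    -- the "tuple space" for this frame
local handlers = {}  -- "when ..." subscriptions

local function claim(subject, fact)
  table.insert(claims, { subject = subject, fact = fact })
end

local function when(fact, fn)
  table.insert(handlers, { fact = fact, fn = fn })
end

-- One page asserts a fact about itself:
claim("page 12", "is a clock")

-- Another page reacts to any matching claim, without ever naming page 12:
when("is a clock", function(subject)
  print(subject .. " should be highlighted")
end)

-- The system would run this matching step every frame:
for _, h in ipairs(handlers) do
  for _, c in ipairs(claims) do
    if c.fact == h.fact then h.fn(c.subject) end
  end
end
```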
c
Not to my knowledge - there isn't anything online as a specification, because the system was constantly changing and being experimented with.
I understand that it's implemented as a superset of Lua, which was probably getting desugared to plain Lua. IIRC from my convos with Joshua Horowitz, the whole system (a couple years ago at least) has one system input, "time", which advances everything else forward.
v
There is a photograph of the source of the RealTalk system in one of Bret Victor's tweets. I believe it used either Ohm or OMeta as the parsing system.
n
My understanding is mostly from the "Background of Realtalk" section of Omar's "Notes from DynamicLand: Geokit". Looks like, while they've been fairly quiet recently, they are still continuing with research.
s
I took some photos and videos when I visited a couple of years ago (with the Bay Area FOC group) that I need to share
Specifically, I have some RealTalk photos.
Didn't get much but here are a couple
I believe this is a lot of the code that parses/desugars the natural-language Datalog query language used in "say" and "when" statements; there is probably a lot missing though.
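To give a rough sense of what that desugaring might look like, here's a totally made-up toy in plain Lua (the real code was far richer and, per above, used Ohm or OMeta): it compiles a pattern containing /variables/ into a Lua string pattern and extracts the bindings.
```lua
-- Made-up illustration only, not RealTalk's parser: turn a "when"-style
-- pattern with /variables/ into a Lua string pattern and pull out bindings.
local function compile(pattern)
  local names = {}
  -- replace each /var/ wildcard with a capture group
  local lua_pat = "^" .. pattern:gsub("/(%w+)/", function(name)
    table.insert(names, name)
    return "(.+)"
  end) .. "$"
  return function(claim)
    local caps = { claim:match(lua_pat) }
    if #caps == 0 then return nil end
    local bindings = {}
    for i, name in ipairs(names) do bindings[name] = caps[i] end
    return bindings
  end
end

local match = compile("/someone/ wishes /page/ is highlighted /color/")
local b = match("keyboard wishes page 3 is highlighted red")
if b then print(b.someone, b.page, b.color) end  --> keyboard  page 3  red
```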
I'm also not certain that Dynamicland has made it. Walking by their old space in Oakland it seems vacated, but maybe they just moved
c
It has been a wild year with COVID-19.
n
@Scott Anderson thanks, so most of those pages look like the pure Lua bits.
❤️ 1
s
I thought COVID might have done them in because of the physical computing aspect
good thing investors (or whoever is supporting the project now) aren't thinking so near term on this
w
Keep in mind that RealTalk isn't really documented because it was always provisional. Beyond two takeaways, the rest of the details (all the syntax, the use of Lua) are irrelevant.
1. Loose coupling between components. They don't directly reference each other. Often physical proximity was the way of doing dependency injection. No need for explicit patching when valid wirings are mostly unambiguous.
2. A distinction between "wishes" and "claims", where a "claim" tells you something about the world and a "wish" requests that the world change to have some quality.
Can't think of anything else. I mean, there's the whole spatial/collaborative aspect of Dynamicland, but that aspect is pretty obvious.
z
Recurse Center has a project inspired by RealTalk https://www.recurse.com/blog/132-living-room-making-rc-programmable and the code is online https://github.com/living-room/turing
n
@wtaysom thanks, I think you've nailed the underlying idea. Everything else (projection and computer vision, use of Lua) is just an implementation detail of the current manifestation. I expect that if touch-reactive eInk and arduino/microbits were cheap and easily programmable, the whole RealTalk system could have been built using that hardware.
c
@Scott Anderson What did you make of DL? I want to like it because of how much I love BV's work, but... it just doesn't click for me, from what I've seen...
v
I visited Dynamicland and tried to give Bret some ideas for fundraising. This was when they had lost the funding from SAP. My advice to him was that he should make a version that could be run on a single desktop (mini projector and cam mount of sorts) so that people could build these as kits for the home. I also suggested abstracting out the events similar to what TUIO has done. Not sure that that was part of his plan, but oh well....
w
@Vijay Chakravarthy "TUIO?" To me it seems Bret worries a lot about people misinterpreting his intentions, and so an important part of Dynamicland was to maintain a certain clarity by making a hard break from conventional computational systems. Working under the same roof as the IHMC Robotics Lab, I was always jealous of the visceral immediacy of the robots. You can simply see people working on them, whereas practically all screen activities have a certain disengaged, lazy posture. Since then, I've been keen to make the abstract more embodied. It is somewhat unfortunate that reality developed a strong bias against the kind of close in-person collaboration that Dynamicland envisions.
👍 1
v
I actually think we are at the beginning of what will be a big shift in user interactions for computing. Given the prevalence of multiple devices and visual and auditory understanding of intent (via AI) I think that there is a big shift possible in how people interact with computers in both single and group environments. For the group environment just the fact that everyone brings their own touch surface (mobile device) is a pretty interesting model for collaboration.
☝️ 1
n
I wonder if Bret is worried about people 'missing the point' if they work too much in the open, and only the surface ideas spreading memetically, similar to what happened when Jobs visited Xerox PARC and only saw/internalized one of the three things (the GUI) that he was shown at Kay's Learning Research Group.
☝️ 1
j
I have been greatly inspired by Dynamicland and have spent the last couple years playing with these ideas (https://programmable.space). My thoughts on the Dynamicland system: the focus is on the objects and how they can interact and be understood, rather than the current implementation of code or the "papers and dots" Augmented Reality system. Important points not related to the code or RealTalk implementation:
• Always on. Key to making the system more casual.
• 1-to-1 relation between seeing an object in the world and it having influence on the world. This is key to understandability.
• Use the affordances of physical objects, rather than trying to solve those problems in code. The tangibility is important here, but using physical objects can also reduce the scope of what code has to do. For example, your code becomes simpler if you don't have to fit modals, tabs, and substates into a single screen and can instead separate them across many papers/programs. This also includes using the spatial relation of programs to influence control flow.
❤️ 5
That being said, there are some interesting things on the technical side that support the physical side:
• Shared tuplespace is a good fit for ideas that are "the size of a room". Data is pretty small and doesn't need to run "at production scale", so it can be kept simpler and more understandable.
• Pub/sub-like system using "claim"/"wish" + "when X" is a nice and simple way for programs to communicate.
• The system bootstrapping itself is also interesting. There are a lot of AR-desk demos, but being able to edit the system without having to switch back to a traditional computer GUI is cool.
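As a toy illustration of that claim/wish/when flavor, here's some plain Lua I made up for this message (nothing like the actual Realtalk internals): two "pages" that never reference each other and coordinate only through the shared space.
```lua
-- Toy tuplespace pub/sub in plain Lua, invented for illustration only.
local space = { claims = {}, wishes = {} }

local function claim(text) table.insert(space.claims, text) end
local function wish(text)  table.insert(space.wishes, text) end

-- "Page A": states a fact and asks for something, naming no other page.
local function pageA()
  claim("page A is a todo list")
  wish("page A is outlined blue")
end

-- A "system page": fulfils outline wishes for whoever made them.
local function outliner()
  for _, w in ipairs(space.wishes) do
    local who, color = w:match("^(.+) is outlined (%w+)$")
    if who then print("drawing a " .. color .. " outline around " .. who) end
  end
end

-- Each frame the space is cleared and every page re-runs:
space.claims, space.wishes = {}, {}
pageA()
outliner()  --> drawing a blue outline around page A
```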
c
@jhaip programmable.space has a lot of really cool ideas in it. Thanks for sharing!
s
I was less excited about it after visiting, but I still like the ideas behind it. Most of my disappointment was with the implementation, which is maybe the wrong thing to critique in a research environment, but it's still something that matters. Specifically, the overall latency of the system was disappointing; it was challenging to code or interact with any programs in a way that felt immediate. When I was there they said latency would be addressed, but it's also a hard problem given the implementation of the system.

The other thing I didn't like so much was that the claim/wish/when system wasn't well utilized on most of the tables they had in the space. That is, usually a single page or two felt like a self-contained application, and the dream of a loosely coupled dynamic play space that required little coding to compose interesting and useful behaviors wasn't realized. There's no reason I could see that they couldn't support it, and some of the debugging tools gave a glimpse of that, but it was really hard to pick up a random page from one table, bring it across the space, and have it do something interesting (or anything at all) with other pages. Designing a semi-shared set of common claims (outside of system-level ones that access lower-level drawing, audio, etc. functionality) would have made it more interesting, I think.

My favorite demos were things that worked around these limitations and also showed something that would be tedious to do outside of the environment. Like the demo where you could draw multiple sketches and have them automatically animated on another page. Latency didn't matter, because you got much more immediate feedback than if you did this with a camera or a scanner in a traditional computing environment. Another one I liked was a simple book (like the equivalent of a CD album booklet) where you could turn the page and play different songs, with text showing the song and artist name. In some ways the projector system might be a good visual equivalent to an audio home assistant (Alexa, Google Assistant, etc.).

I really wanted to see more of the "here are a bunch of building-block pages, compose them however you like" approach. I think back to Nicky Case's concept for visual programming (below) in Dynamicland, and although I don't think I just want that, the actual act of programming in Dynamicland was writing Lua scripts in an extremely laggy text editor, with poor error/compile feedback, and I would have liked to build things with the same ease and in the same natural way as interacting with some of the demos.
❤️ 3
👍 2
n
@Scott Anderson Thanks for your impressions. Just to situate them, would you be able to say how long ago you visited? From what I understand the space is constantly undergoing change, so it could be that some of the physical implementation issues were fixed in later iterations.
The only detailed description I've seen online is Omar's geokit post, and that seems closer to the ideal of independent, loosely coupled play space (especially the ability to leverage other physical objects like the dial to interact with the maps)
s
It was in March of 2019, with a bunch of other current and former members of this Slack. I thought it was longer ago than that, but it was just 2020 warping my sense of time :)
😢 1
Geokit is still one of the few detailed descriptions online
There are a few others I believe
But nothing as comprehensive
n
and his follow-up post on using Raspberry Pis as the I/O for DynamicLand
s
there's this, but it's not very technical, more about the history of Dynamicland and its creation https://tashian.com/articles/dynamicland/
There is some info about pieces that preceded Dynamicland, like the natural-language Datalog and La Tabla
n
Yep, I've seen those.
s
@Glen Chiacchieri talking about Laser Socks got me excited about the space early on; this was years before my visit http://glench.com/LaserSocks/
n
The tashian article is interesting when talking about the funding aspect. I've often wondered how much funding CDG would need per year - Alan's talked about how he would 'buy' the future to be able to build/research it in the present.
s
but the Geokit blog is the only comprehensive public write-up about how the system actually works, specifically what it's like to make a program for Dynamicland
g
@Naveen Michaud-Agrawal IIRC they were looking for $60MM to be around indefinitely, or something on the order of $500k-1MM/yr
v
The funding model is unsustainable. My advice to Bret was to use the creativity of the team to fund small Kickstarter-like projects. There are immediate applications of the technology to education, especially children's education. I even had a few venture capitalist friends (I know, that's an oxymoron :)) who suggested that funding for such initiatives could be arranged. I think it's better to be in control of your own destiny than to be beholden to large corporate grants, where a change in the political climate can cause those grants to vaporize in a heartbeat.
🤔 1
@Garth Goldwater when I spoke to them they were burning $150k a month, I think. The facility itself must be about $30k rent, and salaries for about 10 researchers would be about $2MM a year fully loaded. So ballpark I assume around $2.5MM burn per year.
g
that sounds about right to me
that would be around 4% of 60mm
v
Operating cost per facility is likely to be $500k-1MM per year. I think they wanted to replicate this model for a large number of facilities. IMHO it would be better to miniaturize the model (Raspberry Pi + some tiny projector), hook it up to the cloud (so you can do realtime collaboration), and sell it for a bunch of use cases. They could even then fund the library model, plus allow educational institutions to copy it for free.
g
@Vijay Chakravarthy I think I can tell you with some certainty that your ideas about funding are very off the mark for how Bret wants to run research
💯 3
I would bet that in Bret’s thinking “products/markets are where research goes to die”
s
They could have moved to a less expensive location, but yes researcher costs aren't going to be cheap no matter what
v
@Glen Chiacchieri I’m well aware of that. I’ve done both research and entrepreneurship - IMHO I feel that Dynamicland had potential not just to be a research facility but also to influence real product development.
“products/markets are where research goes to die” - and Bret would be wrong. I see a full reflection of the naivety that came along with Xerox labs and other places where people treated the commercialization of technology as something beneath them rather than a problem to be solved.
g
@Vijay Chakravarthy Have you considered that perhaps their goals are different than yours? I don't think it's a naivety problem so much as them having different values than you. They are not interested in commercializing the technology because their goals are different: making a new medium of expression. Commercializing has a ton of drawbacks in terms of exploratory research, which is obvious to anyone who's done this work. The plan with Dynamicland was always to tear down the system and build anew every few years to learn more about the new medium
👍 1
and do that for at least a decade
n
@jhaip really interesting work! I love how your recent explorations have moved away from projectors/computer vision to try out different models of imbuing computability into objects. I think a lot of people get stuck on that aspect of DynamicLand, whereas my feeling is that the use of projectors/computer vision was the quickest way to test what it feels like to have physical objects with malleable computability, instead of the difficult approach of custom embedded chips (and dealing with powering them, etc.). I imagine a future incarnation of DynamicLand might use room-size inductive charging and a bunch of 32-bit embedded processors with sufficient power to run a small VM, along with other sensor modalities.
💯 1
I expect the end goal is to have the actual objects in the room host the computation instead of the current simulacrum.
🍿 3
v
@Glen Chiacchieri That wasn't my point, and has nothing to do with values. We have belabored this long enough - my only point was that some level of commercialization of technology prevents the dependency on corporate sponsorships, which are finicky by their very nature. Anyways, we can agree to disagree 🙂 and put this to bed.
n
@Vijay Chakravarthy although commercialization just changes who you are dependent on for funding (i.e., fickle consumers or businesses/enterprises)
but agreed, no point belaboring. I'm more interested in the systems aspects of DynamicLand
w
@Vijay Chakravarthy to be sure, commercialization has merits, especially if one can maintain a nice, sustainable product independent of public trading and VC distortions. The thing with Dynamicland is that, rightly or wrongly, commercialization doesn't appear to be a goal, nor even a commitment to any particular tech. What, then, is their commitment? To a certain kind of computing: embodied, collaborative, flexible, playful.
v
Agreed on all points. The problem is that they in turn seek a commitment, say from one or more patrons, around funding. And funding to the tune of $2MM-plus a year with no external commitments is difficult to achieve. My advice to Bret was that the only way to achieve that would be to figure out some way to commercialize some parts of the technology (the more mature parts, say tangible maps or kids' education). Start with a hybrid patron-plus-commercial model and hopefully transition to a largely commercial model where the R&D spend can be much higher due to the non-profit nature of the organization.
s
This kind of research is exactly what public taxes are for. The government (or multiple governments) should be funding this research, not private patrons. The E.U., USA, China, South Korea, and other nations (or the U.N.) should fund this research as an investment in a better computing future for all of their constituents and nations.