# thinking-together
**Niluka:**
Hi @Joshua Horowitz: I was asking @stevekrouse about how Dynamicland handles accessibility, and he suggested that I ask you directly. I'm curious: I've spent some time thinking about approaches I'd take, but I'm wondering what you (Dynamicland) have done, as I haven't seen any information about it.
**Carl:**
Hi Niluka. I'm a volunteer at Dynamicland; maybe I can answer. Josh may know more about the vision for it, but at the moment RealTalk doesn't have any explicit accessibility features that I've seen. Given that it's a tangible environment, and given what a huge area accessibility covers, I think there are a lot of possibilities to explore there.
**Niluka:**
Hey @carl, OK. I'm assuming that since you're primarily focused on the research side, it's not a priority then? I was thinking that, since the camera/projection system should have a good representation of the environment, there are some pretty good options for taking a "view" of what the system sees and projecting it onto a screen of some kind, which could be manipulated with more traditional windowing methods as well as be readable by things like screen readers. That should let users with various accessibility needs interact with other users with minimal problems. It's not a magic solution: things that are directly tied to physical objects, like dynamic objects that take their cues from paper or the location of other items, won't be easily manipulable, but it should help a lot, I think.
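To make the idea concrete, here is a minimal sketch with entirely hypothetical names (this is not RealTalk's API): linearize the camera system's model of recognized objects into plain text, the form conventional screen readers handle best.

```python
# A minimal sketch of the "accessible view" idea above. All names here
# are hypothetical; this is not RealTalk code, just an illustration of
# flattening the camera system's object model into screen-reader text.

from dataclasses import dataclass

@dataclass
class RecognizedObject:
    name: str                  # e.g. "geokit map"
    kind: str                  # e.g. "page", "token"
    position: tuple[int, int]  # (x, y) on the table, in centimeters
    state: str                 # human-readable summary of current output

def accessible_view(objects: list[RecognizedObject]) -> str:
    """Render the scene as linear text, reading roughly top-to-bottom."""
    lines = [f"{len(objects)} objects on the table:"]
    for obj in sorted(objects, key=lambda o: (o.position[1], o.position[0])):
        lines.append(f"- {obj.kind} '{obj.name}' at {obj.position}: {obj.state}")
    return "\n".join(lines)

# Example scene: two objects the camera currently sees.
scene = [
    RecognizedObject("geokit map", "page", (30, 12), "showing downtown Oakland"),
    RecognizedObject("population overlay", "token", (45, 12), "2010 census data"),
]
print(accessible_view(scene))
```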
**Carl:**
Certainly some things could be read by screen readers, although it would be more like a “page reader” because inside RealTalk there is no “screen.” (Although there are displays in the space that, for transparency reasons, show the table or wall of pages that the camera sees.)
One of the big design values of the project is the direct manipulation of physical objects (pages, felt tokens, etc.) that are given computational properties. So, I think there’s a lot that could be done there for accessibility. RealTalk doesn’t have speech synthesis features that I know of, so adding those would probably be a good starting point.
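As a rough illustration of that starting point, the sketch below uses the pyttsx3 text-to-speech library; that choice is an assumption, and nothing here is RealTalk code. The idea is simply to announce a page's state whenever it changes.

```python
# A rough sketch of the speech-synthesis starting point mentioned above,
# using the pyttsx3 text-to-speech library (an assumption; RealTalk has
# no such feature that we know of). It announces a page's state aloud.

import pyttsx3

engine = pyttsx3.init()

def announce(page_name: str, new_state: str) -> None:
    """Speak a page's new state for users who can't see the projection."""
    engine.say(f"{page_name} now shows {new_state}.")
    engine.runAndWait()

# Example: a hook like this could fire whenever the camera sees a page change.
announce("geokit map", "downtown Oakland")
```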
**Niluka:**
Sorry, Slack didn't give me a notification that you'd replied >_<… @carl, I'm aware that RealTalk doesn't have screens per se, but the system does need a virtual representation of the objects it projects into physical space, right? I was thinking that representation could be provided to people whose accessibility needs limit their ability to interact with the "physical" part of the system. For example, take someone with mobility issues or difficulty seeing objects: you could still give them a modifiable view into the system's representation. They too could be part of a group collaborating with geokit, following along as people point things out or present, and changing the values on their copy of the map, which others can then see and interact with. You'd have an environment where people with differing capabilities can naturally collaborate as first-class citizens in a physical medium, which I think is in line with Dynamicland's ethos of a shared, collaborative computing experience :)…
Although I can completely understand if this isn't really a priority, as the focus is still on exploring this new computing medium…
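Here is a sketch of what that two-way participation might look like, with all names hypothetical rather than RealTalk's: one shared value that both the projector and an assistive on-screen view observe, so a change from either side reaches every collaborator.

```python
# A sketch of the two-way collaboration idea above (all names are
# hypothetical): a shared value that both the physical table and an
# assistive on-screen view observe, so a change from either side
# reaches every participant.

from typing import Callable

class SharedValue:
    """One piece of collaborative state, e.g. the region a geokit map shows."""
    def __init__(self, value: str):
        self._value = value
        self._observers: list[Callable[[str], None]] = []

    def watch(self, callback: Callable[[str], None]) -> None:
        self._observers.append(callback)
        callback(self._value)  # push current state immediately

    def set(self, value: str) -> None:
        self._value = value
        for cb in self._observers:
            cb(value)

region = SharedValue("downtown Oakland")
region.watch(lambda v: print(f"[projector] redraw map of {v}"))
region.watch(lambda v: print(f"[accessible view] map now shows {v}"))

# A participant at the table moves the map page; a participant using the
# accessible view could call region.set(...) just the same.
region.set("Lake Merritt")
```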
**Carl:**
@Niluka Satharasinghe Yeah, I love that example. I think exploring a new medium is the perfect place for accessibility, and one of the three big design values or principles for Dynamicland is something like "a humane medium accessible to all." It will be interesting to see where the researchers go with it next.
**Niluka:**
Definitely excited about that =)…
@carl Not sure if you're still around and connected to Dynamicland, but I'm visiting on the 24th of July and the 7th of August, so if you're around and available, I'd love to have a quick meet and chat about it!