How accessibility relates to FoC
# thinking-together
f
One of the things I am curious about is how a11y relates to the future of coding. So far it doesn't seem to come up in the discussions on the podcast, but I think there is something to be said about the advancement of a11y in computing over the last 25 years, and a large part of this is due to a movement away from some of the more creative interface designs (à la Magic Ink). Is there a future of code that is more accessible to people with different abilities?
c
Super important part of FoC, see some recent discussions below. Also check out Amy's explorations linked in both! https://futureofcoding.slack.com/archives/C5U3SEW6A/p1695860749288399 https://futureofcoding.slack.com/archives/C5U3SEW6A/p1696137050571599
f
Thank you!
c
Also relevant are the explorations around non-English computation https://futureofcoding.slack.com/archives/C5U3SEW6A/p1694840398848199
d
I am pretty conflicted in this area, as one of the main aspects of my grand plan is a UX or HX (human experience) built around a shared space, which inevitably means 3D for me. But of course, people with reduced abilities around vision won't be able to access that as well as most. Of course, they still grasp the concept of a space, and there will be ways to allow them to enter, "see" what's there, and interact. But I spend a LOT of time planning the 3D stuff, and there's a lot of code around it.
It heads naturally towards AR and VR, which are of course immersively 3D. Now, those with partial or no sight still belong in the real 3D world, but this means they have to have I/O that works for them: probably sound, maybe touch and motion.
e
Nothing profound to say here, but wanted to raise my hand and let ya know I spend literally all day thinking about this every day 😂 Always glad to chat about this aspect of the future of coding.
When I think about this, I try to approach accessibility as something that can be baked into the design of a thing, and not made to be just about I/O and UI -- while those are specific facets of accessibility, I think there is work to be done around creating pits of success that are foundational to a thing and that lead to more accessible experiences for all.
d
@Eli Mellen need examples!
g
Here's a random idea: how about using LLMs to describe a UI to visually impaired readers? I've seen LLMs used for summarization in other contexts. Could it be a useful alternative to screen readers?
e
> Could it be a useful alternative to screen readers?
In short, and in sort of blunt terms: not really.
In slightly longer terms, yes-ish. What an LLM describing an interface made exclusively for sighted users would accomplish is inserting an extra layer of mediation between user and interface.
screen readers don't reproduce a UI 1:1; instead they decompose it into a navigable tree, sort of like a DOM, but broken out by semantic (and, ideally, interactive) elements
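to make that concrete, here's a rough sketch of the idea (purely illustrative, not how any actual screen reader is implemented) that collapses a DOM subtree into that kind of navigable tree, keeping only nodes with a semantic role:

```typescript
// Illustrative sketch only: real screen readers consume the browser's
// accessibility tree via platform APIs rather than walking the DOM.
interface AccessibleNode {
  role: string;              // e.g. "button", "heading", "link"
  name: string;              // the accessible name that gets announced
  children: AccessibleNode[];
}

function toAccessibleTree(el: Element): AccessibleNode[] {
  const nodes: AccessibleNode[] = [];
  for (const child of Array.from(el.children)) {
    const role = child.getAttribute("role") ?? implicitRole(child.tagName);
    if (role) {
      nodes.push({
        role,
        name: child.getAttribute("aria-label") ?? child.textContent?.trim() ?? "",
        children: toAccessibleTree(child),
      });
    } else {
      // Non-semantic wrappers (bare <div>s etc.) vanish from the tree.
      nodes.push(...toAccessibleTree(child));
    }
  }
  return nodes;
}

// A tiny, deliberately incomplete map of tags to implicit ARIA roles.
function implicitRole(tag: string): string | null {
  const roles: Record<string, string> = {
    BUTTON: "button", A: "link", H1: "heading", NAV: "navigation",
  };
  return roles[tag] ?? null;
}
```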
I think an LLM could describe an interface, but it would be tricky not to sort of re-create a screen reader inside the LLM if you want it to be usable, if that makes sense?
@Duncan Cragg I think one of the only IRL examples I've seen in the future of coding space is @Amy Ko's work on wordplay.dev where, at least judging by the demo, it assumes various accessible outputs by default; it isn't something the dev has to layer on
g
I've used screen readers on an accessibility project in the education domain. They are maddening, but I suppose a visually impaired reader may get used to them. I suggest that screen readers are less than ideal for certain information displays: graphs, tabular data, others? An LLM in certain contexts may provide a more accessible description: "Here is a graph with time on the x axis and population on the y axis. The y axis ranges from zero to 1 million in increments of one hundred thousand..."
One could also imagine an interaction with the LLM: "Tell me what the population value is for the year 1910?"...etc.
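A sketch of what I mean, with a hypothetical `askLLM` helper standing in for whatever model API you'd actually call; the key point is that the model sees the chart's underlying data, not its pixels:

```typescript
// Hypothetical helper standing in for a real LLM API call.
declare function askLLM(prompt: string): Promise<string>;

interface Series {
  label: string;
  points: Array<[year: number, value: number]>;
}

// Generate an orienting overview of the chart.
async function describeChart(s: Series): Promise<string> {
  return askLLM(
    `In two sentences, describe this time series for a screen reader user: ` +
      JSON.stringify(s),
  );
}

// Answer follow-ups, e.g. "What is the population value for the year 1910?"
async function queryChart(s: Series, question: string): Promise<string> {
  return askLLM(`Data: ${JSON.stringify(s)}\nQuestion: ${question}`);
}
```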
e
100%. As a caveat, this is what I do for work as an accessibility specialist for the US gov. What I've seen in the research is that folks who rely on screen readers as their primary interface often aren't that interested in interacting with a graph. People frequently propose audio solutions to graphs when what most folks end up wanting is just a sentence describing it, so an LLM that could do that would be pretty slick. Maybe not even describe the graph, but sort of summarize it: "32% of respondents agree with the question", and then maybe an interface to inquire deeper
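for a lot of charts you don't even need a model for that first sentence; it can be computed straight from the data (toy survey shape invented just for the example):

```typescript
// Toy sketch: a one-sentence summary computed directly from the data.
interface SurveyResult {
  agree: number;
  total: number;
}

function summarize(r: SurveyResult): string {
  const pct = Math.round((r.agree / r.total) * 100);
  return `${pct}% of respondents agree with the question.`;
}

console.log(summarize({ agree: 160, total: 500 }));
// -> "32% of respondents agree with the question."
```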
g
Yes, exactly what I am getting at. Something to provide an overview to orient the user.
e
@Greg Bylenok I came across this blog post that is interestingly connected to this discussion!
f
A related question I have: I can imagine using a screen reader to write code being mind-numbingly awful. Chris Krycho talks in his "New Rustacean" podcast about how one of the hardest things was explaining code with just his voice; for example, "then type let x equals some struct colon colon new open paren close paren" (i.e. `let x = SomeStruct::new()`) is not very helpful or engaging. Does that mean the future of code is something more like literate programming?
e
> Does that mean the future of code is something more like literate programming?
I dig this question so hard
f
> Also relevant are the explorations around non-English computation
I always appreciated that Swift went with pure UTF-8 identifiers, though that doesn't solve the keyword problem or other language-context concerns (RTL/LTR, or the existence and meaning of sigils).
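For what it's worth, the same holds in TypeScript, if I'm reading the spec right: identifiers can be non-ASCII while the keywords stay English, which is exactly the gap I mean:

```typescript
// Identifiers can be non-ASCII, but the keywords (const, function,
// return) remain English.
const π = Math.PI;
const 半径 = 2; // "radius" in Japanese

function 面積(r: number): number { // "area"
  return π * r * r;
}

console.log(面積(半径)); // 12.566...
```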
I didn't really see a lot of benefit to Unison before, but I wonder if there is a place for its "this is not text" approach to definitive source code?
a
We release Wordplay beta next Tuesday. It’s really only a glimpse of what’s needed for accessible, language-inclusive programming, but I’m looking forward to everyone’s thoughts!
f
I finally got a chance to watch the webinar and I am very excited about the work you are doing, Amy. Thank you