# thinking-together
e
A question for y’all doing thinking on the future of coding things: when/if you think about the accessibility (read here as “a11y”) of the future of coding, do you consider accessibility as an attribute of the folks using your thing (e.g. a need) or an attribute of your design (e.g. a feature)?
follow-up question: does that distinction matter?
s
I think there's actually a trichotomy on that term that is worth exploring:
1) There's the industry standard, "a11y", which I would say is basically always great to have correctly implemented on any production app; generally, if the tool can be consistently navigated/manipulated keyboard-only and read by assistive technology, it is a good sign the UI architecture is sound.
2) There's the interpretation you mentioned, designing with a particular user need in mind: for instance, one can literally blindfold themselves and optimize the tool for that particular usage context; that would quite surely make the tool "more accessible". Clearly, great powers lie in optimizing for all human senses.
3) There's a third, quite literal, interpretation of "accessibility": simply "having access at all to something". I personally have used that in the context of "live/visual programming can make technology more accessible", even simply because of ease of use, mobile support, etc.
e
I like that breakdown a lot! Sort of three facets of one thing, but all different. I’m imagining a radar chart mapping them now!
k
This seems like one of those many places where a noun obscures where an adjective would clarify. Tools are for folks, folks have abilities. How accessible a tool is for someone depends on fit. Not very satisfying, but at least phrasing it this way is in principle answerable in a concrete case. But "this tool has accessibility"? Or "this tool is for accessibility people"? Neither seems like a meaningful statement.
j
Kinda off to the side: I’d say that when making production stuff I take the kind of accessibility you’re talking about quite seriously, but when doing more research-y stuff I leave it aside during the initial phases because it’s so much harder to try new ideas while carrying the full weight of past standards.
e
> but when doing more research-y stuff I leave it aside during the initial phases because it’s so much harder to try new ideas while carrying the full weight of past standards.
Do you think this is always true? (Showing my cards: my whole thing and job revolve around accessibility and advocating for it to be a core part of the design/considerations for stuff...not just cream cheese applied at the end, or blueberries shoved into an already-baked muffin.) As an example of how it could contribute to the design of a thing:
• say you need some metric to track the impact of enhancing accessibility
• an easy(ish) way to track this impact is to track completions: define some set of tasks that can be completed within a system, then track what % of folks start the thing vs. finish it
• leveraging accessibility-first design when approaching that situation empowers you to start thinking about "escape hatches" and ways to optimize for completion. Whereas if you just try to make it accessible at the end, you'd remediate any keyboard traps you accidentally made; with this approach you would ask, "if someone were to get trapped at this step, what would they do? What way out and through can we provide even if they're trapped?"
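The completion metric described above can be sketched in a few lines. This is a toy illustration, not a real analytics API; the class and task names are hypothetical:

```python
from collections import defaultdict

class CompletionTracker:
    """Toy sketch of the completion metric described above:
    count how many users start vs. finish each task."""

    def __init__(self):
        self.started = defaultdict(set)   # task -> user ids who started it
        self.finished = defaultdict(set)  # task -> user ids who finished it

    def start(self, task, user):
        self.started[task].add(user)

    def finish(self, task, user):
        self.finished[task].add(user)

    def completion_rate(self, task):
        """Fraction of users who started the task and also finished it."""
        starts = self.started[task]
        if not starts:
            return None
        return len(self.finished[task] & starts) / len(starts)

tracker = CompletionTracker()
tracker.start("signup", "u1"); tracker.finish("signup", "u1")
tracker.start("signup", "u2")  # u2 got stuck — a candidate for an escape hatch
print(tracker.completion_rate("signup"))  # 0.5
```

A low rate for a given task is exactly the signal that an escape hatch, or a keyboard-trap audit, is needed at that step.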
j
How I mean this is that if I want to experiment with, say, AR goggles, I’m not going to start from the position of “how will blind people use this display device?” Or if I want to build a two-handed input device, I’m not going to start from “how would a one-handed person use this?”
That said, there are many situations in more established input/output modalities (touchscreens, for example) where taking design cues from accessibility research improves the interface for everyone, which I think is broadly under-appreciated.
e
Word. That makes sense. Again, sort of showing my bias: I think across tech “accessibility” is often framed as a matter of input methodology when it could be framed as a core design principle, akin to color theory. I’ve found folks are more readily able to accept this idea when you don’t frame it as accessibility, but instead approach it like service design.
I am excited to see what Apple does with AR/VR because I think desktop computing mostly ignored the human body (for both good and ill), phones and mobile devices started to force consideration of the body, and I think AR/VR could do some interesting stuff to ground computation in the body.
That said…all the talking and demos and reading I’ve done has me thinking that’s still a ways off
a
metadata-rich systems are well positioned for the kinds of introspection that make a11y easier… e.g. this widget isn’t just a text box with a label, but a projection into UX space of a field object in a data dictionary somewhere. But metadata-richness usually leads to very poor legibility: every use case is an encoding in some lower-level language
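A minimal sketch of that projection idea, assuming a hypothetical `Field` record in a data dictionary: the same metadata that defines the field can also emit the label binding and `aria-describedby` wiring for free.

```python
from dataclasses import dataclass

@dataclass
class Field:
    """Hypothetical entry in a data dictionary. A widget is a
    projection of this metadata, not a bare text box."""
    name: str
    label: str
    description: str
    required: bool = False

def to_html_input(field: Field) -> str:
    # The metadata that drives business logic also yields the
    # accessibility attributes: label binding and a described-by link.
    return (
        f'<label for="{field.name}">{field.label}</label>'
        f'<input id="{field.name}" name="{field.name}" '
        f'aria-describedby="{field.name}-desc" '
        f'{"required " if field.required else ""}/>'
        f'<span id="{field.name}-desc">{field.description}</span>'
    )

email = Field("email", "Email address", "We only use this to sign you in.", required=True)
print(to_html_input(email))
```

The legibility cost mentioned above shows up here too: the actual UI is now an encoding, one level removed from the HTML a reader would inspect.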
s
I think we need a name for the more general design principle @Eli Mellen is trying to articulate. I particularly like the tie to input (methods and contexts), because it makes it very tangible, and the emergent benefits can often be surprising (mostly thinking about programming interfaces here):
1) If my interface can be completely controlled/read by voice, then any person (not only people with impaired vision, for any reason) could use it in many arbitrary situations; I can program while cooking.
2) Similarly, if my system can be controlled completely by hand gestures, then I don't have to speak, or even touch the keyboard/mouse/touchpad; I can then build something very casually, like a game, using muscle memory alone.
3) Interesting research can be done on the ergonomics of interfaces under arbitrary constraints: "Single-Handed Programming", "Single-Finger Coding", "Software Bending" (programming using body gestures, which doesn't even exist yet), and the particularities of their many design challenges.
4) The fact that one can program on a phone at all is a very sophisticated accessibility feat already, but (at least) that front gets talked about as "Mobile First" design; it is still a very rare feat in the current software ecosystem, especially for coding.
5) Repeat that for many devices (what about programming only with a joystick?), or body/mental states (I have very little/high energy right now...), or environments (would this tool be more helpful for coding underwater?)... etc.
These are a few examples of this unnamed phenomenon. I will try to compress it: "if you design with limitations in mind, the overall solution might improve". What is the name?
e
I’ve run into this kind of thing and seen it called stuff like
• accessibility beyond compliance
• inclusive design (adjacent to participatory design)
• designing for resiliency
• …
But I don’t think they really capture what you are doing a great job of describing, @Samuel Timbó. I wonder about reclaiming Interaction Design, but with a broader scope of what interaction encompasses?