teaser: Losing the use of your hands, or even one of them, gives you the opportunity to think declaratively - what, not how.
What is the point of programming? To control a machine. To accurately break an action down into steps so small that even a machine can perform them. Current electronic machines provide us with a bag of potential steps called “opcodes”.
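As a toy illustration of that “bag of steps” idea - the machine, opcodes, and program below are made up for this sketch, not any real instruction set - here is a tiny stack machine that only knows a handful of blind, mechanical steps:

```python
# Toy "machine" with a small bag of opcodes (hypothetical, for illustration).
# Each opcode is a step so small the machine can perform it without understanding.

def run(program):
    stack = []
    for op, *args in program:
        if op == "PUSH":      # put a number on the stack
            stack.append(args[0])
        elif op == "ADD":     # replace the top two numbers with their sum
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        elif op == "MUL":     # replace the top two numbers with their product
            b, a = stack.pop(), stack.pop()
            stack.append(a * b)
    return stack.pop()

# The action "(2 + 3) * 4" broken down into steps small enough for the machine:
program = [("PUSH", 2), ("PUSH", 3), ("ADD",), ("PUSH", 4), ("MUL",)]
print(run(program))  # 20
```

Programming, in this view, is the act of translating the *what* - “multiply the sum of 2 and 3 by 4” - into the *how* - that exact sequence of tiny steps.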
QWERTY keyboards are favoured by programmers because they allow programmers to supply very fine detail using 10 fingers, each of which contains a lot of nerve endings and fine motor control. I’d argue that QWERTY keyboards are better than piano keyboards, because you don’t have to move your arms. The drawback is that current QWERTY keyboards are neither velocity-sensitive nor “wiggle”-sensitive. aside: A mouse wastes 5 fingers plus an arm to operate.
I don’t know how to solve the HCI problem of entering and editing details in a computer, but I did have my hand in a cast for one week and began exploring speech-to-text. Speech can contain a lot of detail, and speech-to-text can convert that detail into editable strings of characters. The subsequent editing can be done in broad strokes (up, down, highlight, delete, etc.). Maybe a solution lies in transforming detailed speech into some other, less-detailed domain, i.e. using more than one domain instead of a single text editor. Maybe the finger-based editor can be split into two parts - entering details and editing in broad strokes.

ATM, I’m using Descript to replace Logic and iMovie to “write” papers and books (i.e. to document experience). Descript uses AI to suck detail out of speech, then converts it into a form that can be edited in broad strokes (a finger-and-mouse based word processor UI). It ain’t perfect, but it is less painful to use than iMovie, Logic, OBS, etc. for what I want to do. Descript expects a user to use a QWERTY keyboard and a mouse, so it doesn’t directly solve your problem, yet it is an example of a UX that treats data entry and editing as two separate technologies.