# thinking-together
j
Do computers make a qualitative difference compared to paper and a concentrated human mind? They certainly improve the speed at which we can perform certain mechanical tasks by automating them (e.g. calculations, simulations and the rendering of their results; even some basic logical inference can be done automatically). So even though all of this can in principle be performed manually, ain’t nobody got time for that :))

This is similar to how paper dramatically expands working and long-term memory, thereby augmenting mental capabilities. However, paper also expands qualitatively over mere speech by adding a second dimension for expression. This allows not only planar but also spatial objects to be represented trivially (they’re sometimes easier to represent than text). It also makes possible maps and graphs, which are impossible to imitate in speech alone.

Computers add another dimension for expression, namely time. One can create objects that change in time, and also dynamically change them while they’re changing. But this is where I’m not sure it makes that much of a difference. In my opinion, there’s always a certain limit beyond which changing a dynamic object gets unbelievably difficult (the “walled-garden” or “programmer-didn’t-think-of-that” phenomenon). On paper, you might need to work with a static representation of a process, but you can manipulate it without restraints. The same phenomenon also gives us limited compositionality/mobility of dynamic objects. This is not a problem with paper, where the mind is free to let different mental and symbolic objects interact.

The last point might have something to do with the fact that we know a threshold for literacy (at least limited to using paper and writing for personal thinking), and that is:

1. learn to read and write letters
2. learn to organize thoughts on paper
3. done (there’s always room for improvement, but this baseline still covers the qualitative advantages of paper)

Compare that with “computing literacy”, which not only doesn’t exist, but if it did, would be something like:

1. learn to use some basic programs and how programs work in general
2. learn to write basic procedural programs in some “nice” programming environment (= not complex systems, not parallel, but not necessarily in a procedural language; what the environment should look like is the concern of e.g. Jonathan Edwards)

… so far so good, but when you hit the limit …

3. either wait for some programmer/company to expand the stuff you’re using, or …
4. learn to program in universal languages, on general-purpose platforms
5. rewrite the stuff you’re using, but somewhat better
6. done? (not really: your program is buggy / your program can’t interoperate with all the other stuff out there that the old program could / …)

We know how unrealistic a threshold numbers 4 and 5 are for the general public. If we instead restrict the threshold to 1 and 2, we don’t get the qualitative advantage of the computer. So the question is: if viewed as means to augment human intellect, do computers provide qualitative (not just quantitative) advantages over pen, paper and the human mind?
p
You may find this article from the original wiki interesting: https://wiki.c2.com/?IsAnythingBetterThanPaper

One of the things Richard Stallman discusses in the original Emacs paper is the gradual progression that a good programmable program can provide for users. At first they can just use the program. Then they can alter settings. Then they can add tiny one-line hooks to customize specific behaviors. Then they can move on to actually implementing larger pieces of functionality. At some point a user may realize that they are now programming, but it is difficult to identify the moment when they transitioned from being a user to being a programmer. https://dspace.mit.edu/handle/1721.1/5736

A similar progression can happen in many command-line-based systems, such as Unix. At first the user simply uses the commands the system provides. Then they create a few aliases for convenience. Then they create some very simple shell scripts that just run a couple of commands. Then they create slightly more sophisticated shell scripts that process multiple files or contain a conditional. Then those scripts get longer, and at some point they are obviously programming, but again it is difficult to spot the moment when they shifted from just using the software to programming, because the on-ramp is so gradual.

For many things, such as calculations or database management, computers "just" allow us to perform the same actions at a greater scale, but at some point, when one is looking at multiple orders of magnitude difference in a quantitative measurement, it creates a qualitative difference in the types of activities that are possible.

From a communications perspective, the amazing thing a computer can do that paper cannot is respond to the reader. If I create a spreadsheet modeling my projections for income and expenses next year and email it to you, you can then adjust the spreadsheet numbers and watch what happens within my model. With just paper, you need to understand my model well enough to perform the calculations yourself before you can experiment with alternative inputs.
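To make that last point concrete, here is a minimal sketch of such a shared model as a small Python program instead of a spreadsheet (all names and figures here are invented for illustration); the recipient can edit the inputs and re-run it to probe the model:

```python
# A toy stand-in for the income/expense spreadsheet described above.
# All figures are invented; edit the inputs and re-run to explore.

def project_savings(monthly_income, monthly_expenses, months=12):
    """Return cumulative savings at the end of each month."""
    savings = 0.0
    history = []
    for _ in range(months):
        savings += monthly_income - monthly_expenses
        history.append(round(savings, 2))
    return history

if __name__ == "__main__":
    # The "reader" can change these two numbers and watch the model respond.
    print(project_savings(monthly_income=4000, monthly_expenses=3250))
```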
a
"Augment human intellect" is rather specific compared to the space of things you might compare to "paper and a concentrated human mind", but still not specific enough IMO to support a definite answer. It will depend on the problem to be solved. And the problem-solving style of the person involved. There are some problems where being able to rapidly run variations on a model (I'm thinking of my budget expressed, honestly, currently expressed as a Python program), without context switching between arithmetic and conceptual thought, is handy. But doing the arithmetic by hand can also be helpful. Similar for the general case of solving program-shaped problems, there are insights to be had both in writing the program/model, running it and examining the results, and doing the work manually. Practically IMO, the most useful thing about computers for what you might call "augmentation" is their memory, not so much computation per se. Hence all the myriad notes/personal database apps, digital whiteboards, etc.
j
I re-invented calculus from scratch when I was eight years old so that things would "fall right" in the games/simulations I was writing on an 8-bit microcomputer. If a computer can allow a child to do something that required a lifetime's work from Newton, it's probably safe to say they are able to augment human intellect.
j
@Jack Rusher How modest… What exactly did you discover?
r
Alan Kay wrote a great essay precisely about this somewhat recently (http://www.vpri.org/pdf/future_of_reading.pdf). It's especially relevant, I feel, with the arrival of LLMs - his example of "Socrates in a computer" seems much closer now. Hopefully they can be put to use to make computer literacy more attainable, much as phonetic alphabets did for traditional literacy.

That's not to say that AI will solve all of our problems, or even most of them. End-user programming still requires a basic model of computation, even if just to direct an agent. The fragmented ecosystem is tiresome to navigate and plumb together even for professionals. Actually running applications in the era of Web software requires too much effort, especially once security is taken into account. Yet AI provides an invaluable tool for helping to solve all of these issues, especially if its cost falls dramatically.
p
@Jan Ruzicka I'm guessing he discovered the concepts of constant acceleration, linear velocity, and quadratic position. In the video Squeakers, Alan Kay comments on students experimenting with these concepts using virtual cars and falling objects, saying something to the effect of "your students understand second-order differential equations."
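For the curious, the kind of program being described might look something like this sketch (hypothetical; Euler integration in Python rather than on an 8-bit micro):

```python
# What a "fall right" loop might look like: update velocity by a
# constant each step, then position by the velocity. This is Euler
# integration of y'' = -g (a second-order difference equation);
# position comes out quadratic in time.

g = 9.81     # gravitational acceleration, m/s^2
dt = 0.1     # time step, s

y, v, t = 100.0, 0.0, 0.0   # initial height (m), velocity (m/s), time (s)
while y > 0:
    v -= g * dt              # constant acceleration -> linear velocity
    y += v * dt              # linear velocity -> quadratic position
    t += dt
    print(f"t={t:.1f}s  y={y:.2f}m  v={v:.2f}m/s")
```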
w
And I imagine students come to understand the importance of some kind of damping pretty quickly, which you aren't going to get from an introductory mechanics class.
j
Indeed, it was the same set of things Kay writes about and @Personal Dynamic Media enumerated. The feedback loop from experiment to observation to understanding is so much faster in a computational medium that you can learn (and thus do) things that would otherwise be extremely difficult/impossible. One of the reasons I champion interactive programming to anyone who will listen is that I've found this to be true throughout my life in a wide variety of situations.
j
@Jack Rusher @Personal Dynamic Media I suspected that too, but was curious to see whether this was indeed the perfect example of Dunning-Kruger. A second-order difference equation is nowhere near "re-invented calculus from scratch". (Worth noting that calculus didn't take Newton a lifetime; maybe read up on it?) Also, simulating free fall in a homogeneous field does not make you "understand differential equations".

@wtaysom Actually, you do learn about damping in an intro class (even in an intro class for pure mathematicians). You also learn about resonance (the damped, driven oscillator), which I don't imagine you can discover by playing around, because it depends on a critical value of a combination of the input parameters (a certain relation between the driving frequency, damping rate and intrinsic frequency).

Despite what e.g. Bret Victor says, I'm not convinced that much understanding can come from simulations alone, without actually studying the models (equations, which are much nicer to study than computer code).
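For reference, the "critical combination of parameters" claim can be made concrete with the standard closed-form steady-state solution of the driven, damped oscillator (a sketch; the parameter values below are arbitrary):

```python
import math

# The driven, damped oscillator under discussion:
#   x'' + 2*zeta*w0*x' + w0**2 * x = cos(w*t)
# Its steady-state amplitude, from the standard closed-form solution:
#   A(w) = 1 / sqrt((w0**2 - w**2)**2 + (2*zeta*w0*w)**2)
# The resonance peak depends on a combination of w0 and zeta -- the
# "critical value of a combination of the input parameters" above.

def amplitude(w, w0=1.0, zeta=0.1):
    return 1.0 / math.sqrt((w0**2 - w**2)**2 + (2 * zeta * w0 * w)**2)

for w in (0.5, 0.8, 0.9, 1.0, 1.1, 1.2, 1.5):
    print(f"driving frequency {w:.1f} -> steady-state amplitude {amplitude(w):.2f}")
```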
j
Well, we at least agree that there's some Dunning-Kruger going on in this conversation 😉
g
Computers can be bicycles for the mind, but this is currently discouraged by programming notation. Notation and language affect the way you allow yourself to think. I would say that computers are a new medium for expression in 4 dimensions: x/y/z/t. IMO, this medium has yet to be explored in depth.

Text-based functional notation, e.g. `f(x)` or `f(x,y,z)`, addresses only one use-case for computers - computers as sophisticated calculators - but cannot easily express other uses of computers, e.g. sequencers (time, history), IoT, robotics, internet, blockchain, gaming, animation, etc. Obviously, we CAN express these other concepts in the current notation, but programmers are encouraged - by exclusive use of this single notation - to think in 2D and to create calculators.

When all you’ve got is a single notation for describing calculators, everything looks like a calculator. A calculator takes one input (which might look like several inputs, thanks to the miracle of destructuring) and produces one output. The calculator model is so insufficient for expressing programs that a bag has been added onto the side of the model, called ‘exceptions’.

If you want to build YAC (Yet Another Calculator), the current notation is appropriate. If you want to build a sequencer, switch to another notation(s). When starting a new project, it is unnecessary to jump to the premature conclusion that the project is YAC until you’ve savoured all of the project’s details.
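One way to read the calculator/sequencer distinction is the following sketch (hypothetical; the running average is just a stand-in for any history-dependent behaviour):

```python
# A calculator: one input in, one output out, no memory of the past.
def f(x):
    return x * x

# A sequencer: output depends on history, not just the current input --
# the kind of behaviour the f(x) notation doesn't capture well.
def sequencer(inputs):
    history = []
    for x in inputs:
        history.append(x)
        yield sum(history) / len(history)   # running average over time

print(f(3))                            # 9, always, regardless of context
print(list(sequencer([3, 3, 3, 9])))   # [3.0, 3.0, 3.0, 4.5] -- history matters
```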
j
Maps were likely passed down orally before paper was available; songlines are an example of oral maps: https://arxiv.org/pdf/1404.2361.pdf Is paper then a qualitative or a quantitative difference compared to speech and oral tradition? At what point do orders of magnitude turn quantitative improvements into qualitative ones?
g
Observation:
• paper notation serves two purposes:
  1. communicate (e.g. maps, prose)
  2. save thoughts for later introspection and manipulation (e.g. mathematics)
• speech serves only one purpose:
  1. communicate
  (thoughts cannot be easily saved and tend to dissolve over time)

Observation:
• AI speech-to-text replaces graphite+rubber for prose
• AI speech-to-text saves thoughts for later introspection
• Computer graphics+animation saves thoughts and enables rapid introspection in greater-than-2D form - a “new medium” for expression?
a
Note: whether speech-to-text is suitable for any purpose at all, much less for replacing handwriting, varies enormously from person to person. I'm not at all comfortable drafting out loud; I anticipate keeping my pencils and keyboards my whole life, no matter how good STT gets. This matches what I've heard from some of my writer friends when the topic comes up.
g
Aside: speech-to-text is causing me to augment (not replace) my text writing with things I would not have considered doing before, e.g. making YouTube videos.

I am saddled with a MacBook, but I don’t bother to use Apple Dictation. ATM, I use Descript in place of a video editor. Descript transcribes the audio to text and then lets me edit the video (and audio) using a word processor, i.e. with my fingers and a keyboard instead of with voice, i.e. with word-processing commands instead of timelines and scrubbing.

I use Just Press Record on my iPhone to occasionally record one-liner reminders/content on a device which otherwise has a hostile HCI (from a developer’s perspective). The recording is transcribed to text and then, magically, shows up on my MacBook, ready for cutting and pasting into a text document, editable with fingers and a keyboard.

I tried OBS and iMovie and DaVinci Resolve and did not find them friendly. I used Logic for editing song demos and am glad to abandon it.
a
Interesting idea re editing video by moving around the corresponding sections of text, if I understood that right (or even if I didn’t, I guess ;P).