# thinking-together
m
What are alternative solutions to variables and scopes? Is there a proven abstraction that end users easily understand?
p
Brian Harvey used to say that dynamic scope is what you get if you don't think about scope, so that's why it is easier for beginners to understand. But it still involves variables and a type of scoping. Maybe I'm suffering from a lack of imagination here, but I'm not sure how to easily perform abstraction without giving things names. Names are what help us humans remember the meaning and usage of a thing or a behavior. If one tried to create some form of graphical language where things were displayed but could not be named, I feel like the first thing people would ask for is the ability to use names so they're not stuck thinking about this thing and that thing and the other thing. It did not take long for early programmers to invent so-called floating labels, allowing them to name pieces of code and data in memory. Even in spreadsheets, the ability to name cells and ranges makes formulas substantially easier to read. https://www.cambridge.org/core/journals/mathematical-proceedings-of-the-cambridge-philosophical-society/article/abs/the-use-of-a-floating-address-system-for-orders-in-an-automatic-digital-computer/66DB2A4ACA578BB871B1B4A75352A6ED Outside of computers, imagine trying to tell someone how to make a sandwich without using any names.
j
I think we're bounded by human biology in what kind of scopes we can reason about. We're spatial creatures (2D/3D plus time). I'm not aware of alternative solutions, but there are variations with important differences in their relationship to end users. All scopes are essentially a set of nested spatial containers, but the spatial borders in traditional programming are functions and classes, which is where the problems start for non-coders. The best example of spatial scopes that make sense for end users is spreadsheet rows and columns, which are much more natural. The variables "need a place to hang on the wall" in the end user's mind, and a function doesn't tick that box (it's a position in a text file, but essentially non-spatial). From here I guess the remaining directions are 3D scopes or graph scopes, which is essentially the input/output model seen with node/flow-based programming.
k
The alternative I am trying out myself in my Leibniz project (https://github.com/khinsen/leibniz-pharo) is no scopes, or if you prefer a single scope. To make this practical, code units must be kept very small, which actually helps to keep them understandable. That means: no "standard libraries" with tons of definitions that might one day be useful. Small bits of functionality must be explicitly included. The inspiration for this is mathematical notation in textbooks and research articles. They don't have scopes. Every bit of notation, once introduced, is valid for the whole text.
j
I've thought about creating a single-scope logic/"relational" language where "functions" are sets of rules about how the variables relate to one another. I guess it's not too terribly different from a database, per se, where variables are rows and rules are constraints. It's more inspired by Prolog, except that Prolog rules take explicit arguments. I honestly have no idea if this is a good idea; in all probability it's a bad one.
c
Wikipedia titles are globally unique; they just put the scope in brackets afterwards, e.g. "Franz Ferdinand (band)".
n
The PL I’m designing doesn’t have nested scopes. It’s a relational programming language (Datalog-inspired) — it’s the only paradigm I know of where such a thing is possible (with some hard work!). As a program gets large, the absence of a syntactic boundary (e.g. a file, or a code block) for limiting the places a definition can be accessed from becomes a problem. But I think it’s an easily solvable one. Variables, on the other hand, will remain essential for as long as humans use natural language.
k
@Chris Knott That looks more like an ad-hoc namespace than a scope to me.
t
The old approach was to declare all your variables in advance and/or have only a single global scope, which is extremely easy to understand, with the drawback of not scaling to large programs and not handling temporary internal control-flow variables very well (internal loop variables have to go at the top). Still, pretty good IMHO if you want fast understanding of a snippet of code.
c
@Konrad Hinsen yeah you are right. I think scope in the sense of actively restricting the ability to talk about something from another context is not user friendly. Chris Granger talks about this in one of his talks where he demoed Eve at a local event (at Dynamicland I think). There were lots of non-programmers there. They couldn't understand why you could point to a deeply nested variable on the screen, but not just pull that value out and use it where you want. I think the lack of scope in Excel (and autonaming of variables) is one of the reasons it is user friendly. It still has namespaces but you can refer to anything you can see (even across different files if you use a fully qualified path reference).
k
Smalltalk doesn't have scopes either. Namespaces, yes: a global one (class names etc.), one per class for instance variables, and one per method for local variables, which are not allowed to shadow instance variables. I can't remember anyone complaining about the lack of scopes in Smalltalk.
t
I find scopes to be a useful abstraction, not necessarily for the initial creation of a program. I believe they resolve two issues:
1. Single-user error on text entry: you may not intend to use a particular variable in certain contexts. Scopes are a useful way to make sure that a typo doesn't result in unintentional usage.
2. Multi-user idea communication: when designing large systems it is useful to hide certain details of the system, especially if a particular use would largely result in errors. For example, if a variable `i` is used multiple times in a single method to iterate through multiple lists, it is useful that the different `i`s in different scopes are associated with different lists. It communicates to other developers that the "mental load" introduced by the variable need only relate to the matter at hand and can be ignored outside of that context.
In a similar way to dynamically vs. statically typed languages, you can get away without scopes with a little discipline. Encoding the restrictions seems a useful way to communicate the intentions of the code, though. In traditional implementations it really doesn't put much burden on the author, as types can in some cases.
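A minimal Python sketch of that idea (the function and values are invented for illustration; Python 3 comprehensions stand in for block scopes, since each comprehension gets its own scope):

```python
def totals(prices, counts):
    i = "outer"                         # an enclosing-scope `i`
    doubled = [i * 2 for i in prices]   # this `i` ranges over prices only
    squared = [i * i for i in counts]   # this `i` ranges over counts only
    assert i == "outer"                 # neither inner `i` leaked out
    return doubled, squared

print(totals([1, 2, 3], [4, 5]))  # ([2, 4, 6], [16, 25])
```

Each `i` carries meaning only within its own small context, which is exactly the "mental load" point above.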
j
By default, definitions are unbounded, or bounded only by document. Redefinitions are bounded, and only explicitly. Redefinitions can be referred to outside the boundary of the definition but only explicitly. "1. Minister means the Minister of Health." "2. In this section, Minister means the Minister of Revenue." "3. The Minister, as that term is defined in section 2." Is that "scope" or "namespace"? I'm thinking namespace?
I would say that approach is proven.
c
The distinction to me is that something does not exist at all outside of its scope, whereas outside of its namespace it just goes by a different name. Scope is inherently confusing for an author who has an omniscient view of the program.
It can be useful when you are debugging in your head ("playing computer") because it reduces the amount of possible factors affecting the program, but this is fool's gold, the actual solution is to make the computer help with debugging, so people don't have to play computer in their head at all
p
@Konrad Hinsen local variables and arguments in Smalltalk blocks are lexically scoped. That's what makes it possible to implement conditionals and iteration by passing a block to a method.
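A rough Python analogue of that technique (Python stands in for Smalltalk here; `to_do` and the values are invented for illustration): iteration implemented by passing a closure whose body, thanks to lexical scoping, can read and write a variable in the enclosing function, as a Smalltalk block would.

```python
# Analogue of Smalltalk's to:do: iteration, implemented as an ordinary
# function that repeatedly calls the passed-in "block".
def to_do(start, stop, block):
    n = start
    while n <= stop:
        block(n)
        n += 1

def sum_to(stop):
    total = 0
    def block(n):
        nonlocal total  # captured from the enclosing lexical scope
        total += n
    to_do(1, stop, block)
    return total

print(sum_to(5))  # 15
```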
t
> It can be useful when you are debugging in your head ("playing computer") because it reduces the amount of possible factors affecting the program, but this is fool's gold, the actual solution is to make the computer help with debugging, so people don't have to play computer in their head at all
I disagree with this, specifically the bold part. In almost every scenario the goal should be to get feedback as early as possible. Ideally you can look at a program and know what it does, just like you can look at text in a book and know what it says. In many large programs it is difficult to run all of the code through a debugger; it can take tens of minutes. For example, major games take minutes to compile, run, and load into maps. There are certainly use cases where you can lean more on a debugger, like scripting. Even in these scenarios, most developers prefer to be able to look at code and know what it does rather than have to run it through a debugger.
c
I think your concerns are about the current-of-programming, aren't they? Yeah, I mean forget minutes - when I last worked in the games industry a full compile had to be done overnight. This is bad. I would be wary of basing philosophical positions on that though. "If we adopt this language feature, compile times will be faster" is exactly the sort of tradeoff I'd classify as fool's gold.
p
@Konrad Hinsen thank you, I was unaware of that limitation/feature. I think I see your point now about how if you just forbid variables from ever being shadowed, programmers don't need to think about scope.
t
> I think your concerns are about the current-of-programming, aren't they?
> Yeah, I mean forget minutes - when I last worked in the games industry a full compile had to be done overnight. This is bad. I would be wary of basing philosophical positions on that though.
> "If we adopt this language feature, compile times will be faster" is exactly the sort of tradeoff I'd classify as fool's gold.
I feel like this is putting words in my mouth. I'm making this argument for past, present and future: it was true, it is true, and it will continue to be true. Looking at something and knowing it works is better than having to take extra steps to find out if it works. Games are only used as an example. I've also done OS development where the same is true. I provided scripting as a counterexample where maybe your argument is stronger: it's easier to run and debug scripts. I'd be curious if you have any realistic examples where people would prefer "[making] the computer help with debugging" over being able to "[debug] in your head" (I changed the gerunds in your quotes). I can't think of any. Seems like you always want to look at a program and know it works where possible, and debugging only needs to come into the picture when that fails.
It's interesting if you take each argument to its extreme. I don't claim you are making one of these arguments, but they are interesting to think about:
• A language which is easy to "run in your head" but has no debugger.
• A language which is hard to "run in your head" but has a great debugger.
I think it's clear people would prefer the first bullet in most contexts. Though obviously a powerful debugger is an incredible tool for building better programs. I don't mean to disparage debuggers or claim they aren't useful. Rather, I think it's worth aspiring to improve what can be done in the compiler/interpreter input before considering improvements provided by a debugger. Truthfully, many of the tradeoffs may simply come down to difficulty of implementation. If it takes weeks to implement a compiler feature vs. days to implement a debugger feature that prevents a similar error, it's probably better to focus on the debugger. All else being equal, though, I believe it is better to "verify things by looking at them", as I put it, even if human brains are lossy. The debugger comes in when the human brain fails... that doesn't mean the human brain should be replaced by it entirely, though. The brain is what you are thinking with. Anything else, like a debugger, requires us to use our much slower physical appendages to interact with it.
j
Hard disagree. Programs are complicated. Even if you can understand small parts well by looking at them, you have no possibility of seeing the implications of how they interact once they are beyond toy size. I'll take your second option hands down.
c
I agree that it would be better to be able to do it in your head, but I think it's impossible. Even the simplest things are already way beyond unaided human brain processing power. Consider the Mario example from Inventing on Principle

https://youtu.be/PUv66718DII

(from ~13 min; the specific feature is demoed from 13:55). It's basically just solving a quadratic equation, but pretty much impossible (for me at least!) to do in your head. @Jason Morris's project is a "debugger" of sorts for laws, which are generally less complicated than computer programs.
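To make the "just a quadratic" bit concrete, here's a sketch in Python with invented numbers (not the ones from the talk) of the kind of computation the Mario demo automates:

```python
import math

# Invented values: a jump with initial upward velocity v (units/s)
# under gravity g (units/s^2). Height over time:
#   y(t) = v*t - 0.5*g*t**2
# When does the character land back at y = 0? Solve the quadratic.
v, g = 12.0, 9.8
a, b, c = -0.5 * g, v, 0.0
disc = b * b - 4 * a * c
roots = sorted([(-b + math.sqrt(disc)) / (2 * a),
                (-b - math.sqrt(disc)) / (2 * a)])
landing = roots[1]  # the non-zero root; the other is takeoff at t = 0
print(round(landing, 3))  # ~2.449 seconds
```

Trivial for the computer, yet already more arithmetic than most of us can reliably "play computer" with in our heads.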
t
> Hard disagree. Programs are complicated. Even if you can understand small parts well by looking at them, you have no possibility of seeing the implications of how they interact once they are beyond toy size. I'll take your second option hands down.
How would anyone even know how to write the program in the second bullet? A language could be so hard to use that it's infeasible to get a program which is even debuggable. At least in the extreme case.
> I agree that it would be better to be able to do it in your head but I think it's impossible.
I agree that it's impossible in many situations. I even agree that small programs can be difficult to get right. You can't know it's right until you run it. But ideally you can get it as close to right as possible before running it, so that debugging time is minimized. Again, I'm not against debuggers, and all code written should be run and tested so you can verify it is correct. It's just that the previous claim is too extreme for me to agree with. It certainly isn't "fool's gold" to construct better models that people can "debug in [their] head":
> It can be useful when you are debugging in your head ("playing computer") because it reduces the amount of possible factors affecting the program, but this is fool's gold.
j
There is no extreme case. People create languages that are harder to use, on purpose, for fun. Humans are weird like that.
t
There is, for the sake of argument 😛 but I agree it's a weak argument. The reason it's interesting to think about, though, is that it becomes clear there is some limit on program understandability that is important. It is impossible to ignore the brain. There is no such limit on debuggers, though. You don't need one. You could get by with printf and just running the whole program, even if you'd rather not.
c
To restate my point: people have been trying to create languages that are easier to write correctly for a long time, with comparatively little success, whereas less effort has been put into omniscient/time-travel debugging, program visualisations, etc. Perhaps there is a theoretical programming language possible that brings the power of the computer to 99% of people, but I can't even conceive of what that would be like, whereas I can conceive of theoretical (but impossible-at-the-moment) tools which make programming more like building with your hands. Bret Victor's work has (faked) examples of these sorts of tools
j
I think we disagree about which of these two options is "ignoring the brain". Brains are very good at using language, and very bad at internally modelling the behaviour of complex systems they can't observe.
t
> To restate my point: people have been trying to create languages that are easier to write correctly for a long time, with comparatively little success, whereas less effort has been put into omniscient/time-travel debugging, program visualisations, etc.
I agree much more with this framing of the point. I really take issue with calling the effort "fool's gold" though. Separately, I'm not so sure folks have had "little success" in the context of "a long time". I think folks had a lot of success at first, but it slowed down considerably over time. I made a previous point about a tradeoff between verifying by "looking at a program" vs. verifying by "debugging", and I think there is a valid argument that we've gotten all the low-hanging fruit from the first and underinvested in the second.
As a specific example, I think structured programming had a pretty considerable impact on understandability, in ways that are more significant than similar debugging improvements made at the time... It's been a while since we've gotten anything as impactful as structured programming, though.
j
In looking vs. debugging, which is type safety?
t
Yeah, I was thinking that was missing from this discussion. It's interesting. Somewhere in between. There are almost three levels you want to consider things at:
1. How easy is it to understand "just looking" (human only)
2. How easy is it to understand with automated verification (machine only)
3. How easy is it to understand with a debugger (human and machine)
I'd even argue that some of the verification methods impose complications in program text that makes 1 harder. Complicated type systems can sometimes place a burden on the programmer.
j
I divide it primarily among things that seek to make errors impossible, things that seek to make errors easier to discover, things that make errors easier to diagnose, and things that make them easier to repair.
E.g. type safety, fuzzing, debugging, and clear syntax.
I find "impossible" and "easy to repair" to be usually mutually incompatible.
t
Yeah, I feel like this is true in many contexts. There is definitely a balance between them.