# thinking-together
h
#language-design-philosophy A data structure that’s orthogonal at its core? Why is that so hard to think of? Why is there such a strong tradition in CS/PLD of thinking in composition by nesting, but not in orthogonal composition? What if we (language designers, at least) are missing something here? Why do I post such an underspecified topic? I think there is little doubt that perceiving orthogonally is a fundamental feature of human cognition, and that current language design fails to support this. And, yes, I don’t even know how to argue for this. This may be frustrating to try to think about, I must admit; I’ve learned to appreciate the feeling that there does exist something, that it must exist, but that I can only just almost grasp it. What do you do in similar mental configurations? Grab a whiteboard? Begin writing an essay? Write some code that assumes the `thing` exists in order to reach your intuition for it (“air coding”)? Your thoughts on this experience, and your ideas for an attack on this problem, are welcome! 🙂 ❤️
t
If by orthogonality you mean cross-product kinds of things, I think Concurrent Hierarchical State Machines / UML statecharts can represent orthogonality well, with a closed-form PARALLEL operator for two state machines. But this is orthogonal in the sense that the two things are independent; the state of the system is the cross product. In the simplest form they do not interact, though the formalism does have mechanisms for communicating. https://link.springer.com/content/pdf/10.1007/3-540-44929-9_24.pdf Is this along the right lines of thinking?
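To make that concrete, here is a minimal sketch (my own, not from the linked paper) of what a closed-form PARALLEL operator could look like: the composite state is the cross product of the two machines’ states, and each machine steps independently on the shared event stream.

```python
# Minimal sketch of a closed-form PARALLEL operator on two state machines.
# The composite state is the pair (cross product); each machine steps on
# its own terms and they never interact.

def parallel(step_a, step_b):
    """Compose two transition functions into one over paired states."""
    def step(state, event):
        a, b = state
        return (step_a(a, event), step_b(b, event))
    return step

# Two trivially independent machines: a toggle and a mod-3 counter.
toggle  = lambda s, e: (not s) if e == "flip" else s
counter = lambda s, e: (s + 1) % 3 if e == "tick" else s

step = parallel(toggle, counter)
s = (False, 0)
s = step(s, "flip")   # -> (True, 0)
s = step(s, "tick")   # -> (True, 1)
assert s == (True, 1)
```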
h
@Tom Larkworthy You are reading my thoughts!
t
cool, so the root of these machines is the regular-language thingies. So maybe a regex can be considered a representation with orthogonality at its core too (you can concatenate and `|` regex expressions). I like HSM formalisms, though, because they play well with formal verification and somewhat resemble normal programming if you squint really hard.
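e.g. with plain Python `re` (a small illustration of my own), both composition operators are closed over patterns:

```python
# Regexes compose via two closed operators: concatenation and alternation (|).
import re

digits = r"[0-9]+"
word   = r"[a-z]+"

concatenated = digits + "-" + word     # sequence: e.g. "42-foo"
alternated   = f"(?:{digits}|{word})"  # choice:   "42" or "foo"

assert re.fullmatch(concatenated, "42-foo")
assert re.fullmatch(alternated, "42")
assert re.fullmatch(alternated, "foo")
```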
❤️ 1
h
@Tom Larkworthy Yes, I think HSMs take state machines in a great direction! I am pondering the following way to perceive orthogonality in HSMs. Traditionally we would look at it as the typical concrete representation invites: `{ a: { b: { c: ... } }, d: ..., e: ... }`. So I consider the idea that this is more concrete than it philosophically needs to be. Here is an attack: let’s define two operators, `&` for combining values into and-states, and `:` for combining values into hierarchies. Then we write `a:b:c & d & e`. Traditionally, the CS practitioner and programmer would ask for precedence rules or brackets to disambiguate the expression: `{(a:b:c), d, e}` or `{a:(b:c), d, e}`. Here is the point where an alternative orthogonal model might be possible, and might be valuable from a language-design point of view: instead of nesting, we lay out the values in a two-dimensional grid, e.g. drawing the `&` relation horizontally and the `:` relation vertically upwards. Then a noteworthy thing can be observed: the `a` value is now both an element in the and-state (horizontal relation) and the first element in the hierarchy (vertical relation).
This is my criterion for detecting orthogonality, as I meant it in the OP. Wow, thanks! It was kind of cool to actually write it down. Follow-up question: while this evidently is drawable, does it map well to cognition and implementation? Is the viewpoint in the eye of the beholder, or does it open up the HSM model for e.g. generalization/deeper understanding?
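Here is a minimal sketch of that grid reading in Python (my own encoding; the coordinate scheme is hypothetical):

```python
# Laying out `a:b:c & d & e` on a 2D grid instead of nesting:
# the `&` relation runs horizontally (x axis), `:` vertically (y axis).
grid = {
    (0, 0): "a", (1, 0): "d", (2, 0): "e",  # row 0: the and-state
    (0, 1): "b",                            # column 0: the hierarchy over `a`
    (0, 2): "c",
}

# `a` at (0, 0) is read along both axes: it is one element of the and-state
# (row 0) and the root of the hierarchy a:b:c (column 0) -- no brackets or
# precedence rules are needed to say which grouping "owns" it.
and_state = [grid[(x, 0)] for x in range(3)]
hierarchy = [grid[(0, y)] for y in range(3)]
assert and_state == ["a", "d", "e"]
assert hierarchy == ["a", "b", "c"]
```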
t
My own experience is that HSMs are hard to program, but they are amenable to machine verification because their state space is enumerable. So that’s useful, and in practice the Paxos protocol runs replicated state machines, so they are used in technical domains for real (not sure about the flavour of state machines, though; maybe not HSMs). I also see them in realtime systems (https://www.state-machine.com/), again because they are complex enough to be useful, but simple enough to really figure out and trace symbolically. The QP framework people make a strong argument that HSMs are necessary to avoid the state-space explosion problem that basic state machines suffer from. This is directly because of the orthogonality: when you can express things as two orthogonal basis vectors, you avoid having to flatten them into their non-orthogonal space (the cross product). So yeah, I would say orthogonality has made state machines scale to the realtime-systems domain. Of course, HSMs are still less expressive than Turing machines, so they clearly are not used much when we have access to higher-level languages and want to move fast and break things.
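The cross-product arithmetic is easy to see in a toy example (a sketch of my own, not from the QP docs):

```python
# Two independent concerns, each with three states:
playback = ["stopped", "playing", "paused"]
volume   = ["muted", "low", "high"]

# Orthogonal regions: track each concern separately -> 3 + 3 states to
# define, and each transition is written once.
state = {"playback": "stopped", "volume": "low"}

# Flattened (non-orthogonal) machine: the cross product -> 3 * 3 = 9 states,
# and every playback transition must be duplicated for each volume state.
flattened = [(p, v) for p in playback for v in volume]
assert len(flattened) == 9
```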
❤️ 1
This page expresses it much better, and I think it is about orthogonality: https://www.state-machine.com/fsm
❤️ 1
(yes, I agree with your orthogonality picture, except that the H itself is an orthogonal axis, so we don’t really need the concurrency bit for HSMs to add orthogonality to FSMs). I think in this diagram the edges are the dimensions in your orthogonal-space diagram
d
two, maybe three axes of orthogonality are nice for cognition. nesting is one way to manage more dimensions than that. seems to me you want to make it easy to navigate a variety of representations without losing context. https://www.quantamagazine.org/the-brain-maps-out-ideas-and-memories-like-spaces-20190114/
❤️ 2
h
So many interesting ideas! There may be many features that are prevalent in perception but under-supported in the PL tradition for mundane reasons, e.g. regular expressions matching on pure text. I see at least two perceptual features that could be nice to support: context/refinement, and orthogonality of the horizontal/vertical directions (2D). Refinement of selections is described e.g. in Structural Regular Expressions by Rob Pike [ pdf ]. But I’m not aware of regexps that do 2D matching. It’s pretty clear programmers make use of the vertical dimension, e.g. by the tradition of aligning similar lines of code (pervasively done in Haskell and similar languages). A toy approximation is sketched below.
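There is no standard 2D regexp, but as a toy approximation (my own sketch, not an existing library feature), one can run an ordinary horizontal regex on every line and then require the matches to align vertically, i.e. start at the same column:

```python
import re

code = """\
x     = 1
yy    = 22
zzz   = 333
"""

# Horizontal pass per line, then a vertical constraint across lines.
cols = [m.start() for line in code.splitlines()
        if (m := re.search(r"=", line)) is not None]
vertically_aligned = len(set(cols)) == 1  # every '=' sits in the same column
assert vertically_aligned
```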
a
The `a` value is now both an element in the and-state (horizontal relation) and the first element in the hierarchy (vertical relation). This is my criterion for detecting orthogonality...
Is this not satisfied any time the stuff nested under `a` is fully contained inside it? I’m having trouble seeing this as anything other than a syntactic change. Nesting is still going to crop up in the semantics, or at least it will cause a lot more trouble to get rid of it than you could possibly benefit from. Syntax should clearly express that. I do think things that don’t explicitly depend on each other should be allowed to run independently, for instance finding all the natural concurrency in a program. I don’t know if this is exactly what you’re saying, but I think it’s related. Somewhat aside: I’ve come to think of nesting and sequencing as different sides of the same concept, namely dependency. I think this is supported by the way nesting-based encodings of sequences, like linked lists and fixed points, keep cropping up.
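For instance, the classic nesting-based encoding of a sequence (a standard construction, sketched here for illustration):

```python
# A linked list is pairs nested to the right, so "sequencing" is
# literally "nesting".
nil = None
cons = lambda head, tail: (head, tail)

xs = cons("a", cons("b", cons("c", nil)))  # the sequence a, b, c

def to_list(node):
    out = []
    while node is not None:
        head, node = node
        out.append(head)
    return out

assert to_list(xs) == ["a", "b", "c"]
```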