n
Just to declare my own work: I'm working on a general-purpose logic programming system. My fundamental goal (life purpose?) is to remove as many unnecessary concerns on the path from ideas to implementation as I can identify, and I think logic programming offers hope for that, specifically because it's the simplest (only?) programming paradigm that doesn't immediately introduce unnecessary concerns like "call stacks" and hierarchical "data structures". Why am I using those innocent-looking concepts as examples? Because they induce a plethora of ancillary concepts and architectural patterns that have nothing to do with the problem domain itself. Call stacks demand value-shuffling, and data structures demand traversal code. There's a lot more to say, but that's a glimpse of my headspace.
❤️ 1
👌 4
c
Wow, cool Nick.
w
True that. Unnecessary concerns! I feel this points to Prolog's greatest weakness: if the order of clauses matters, you're probably in for a bad time.
d
Suppose that you are doing computer graphics, and you need to use linear algebra, vectors and matrices. Because that's an essential part of the problem domain. I have similar concerns about machine learning. How do you do vectors and matrices and N-dimensional arrays without data structures? Does this kind of logic programming only work in domains for which you don't need data structures?
n
Data structures are possible under my programming model, just not pointer-based ones (which I was alluding to by the term “hierarchical”). The problem with pointer-based data structures is that they demand certain traversal schemes: walking down the hierarchy from the “top-level pointer”. There’s no way to jump immediately to a node of interest, unless you hold a pointer to an interior node, but by relying on such a pointer you lose the ability to backtrack to the “earlier” nodes in the structure, and interior pointers are problematic in the presence of mutation.

You can abstract over the boilerplate of certain kinds of traversals with a “map” or a “fold” function, but those functions are themselves boilerplate for every new data structure you introduce. You might start proposing generic, auto-derived “recursion schemes” for traversing recursively-defined data structures, but following this idea to its conclusion essentially gets you to logic programming with fixed-point semantics.
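To make the boilerplate point concrete, here's a small illustration (my own, not from Nick's system): two pointer-based structures, each needing its own near-identical fold, and each reachable only by walking down from the top-level pointer.

```python
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class TreeNode:
    value: int
    left: "Optional[TreeNode]" = None
    right: "Optional[TreeNode]" = None

def fold_tree(node: Optional[TreeNode], f: Callable, acc):
    """Traversal boilerplate: we must walk down from the top-level pointer."""
    if node is None:
        return acc
    acc = fold_tree(node.left, f, acc)
    acc = f(acc, node.value)
    return fold_tree(node.right, f, acc)

@dataclass
class ListNode:
    value: int
    next: "Optional[ListNode]" = None

def fold_list(node: Optional[ListNode], f: Callable, acc):
    """A second, structurally similar fold -- boilerplate repeated per structure."""
    while node is not None:
        acc = f(acc, node.value)
        node = node.next
    return acc
```

Every new structure repeats this pattern, which is exactly the kind of ancillary code that has nothing to do with the problem domain.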
A “compound value” in my programming model is a tuple where some of its elements have been designated as representing its identity. The machine representation of those elements is not visible to the user, but conceptually they are “connections” to other tuples (which in turn, will have their own identity). Connections do not act like pointers, because a connection cannot “own” the tuples on either side. Every tuple is globally accessible via queries upon the “relations” (tuple sets) to which it belongs. Thus you don’t “walk along” connections, but you can ask for the set of tuples which appear on either side of a certain class of connection.
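Here's a toy sketch of how I read that model (my own illustration with made-up relation names, not the actual design): a relation is just a set of tuples, and instead of walking pointers you query for whatever appears on either side of a connection.

```python
# Hypothetical relation: parent(child_id, parent_id).
# Every tuple is globally accessible; nothing "owns" anything.
parent = {("bob", "alice"), ("carol", "alice"), ("dave", "bob")}

def children_of(pid: str) -> set:
    """Jump straight to the tuples of interest via a query over the
    relation, rather than walking down from a root pointer."""
    return {c for (c, p) in parent if p == pid}

def parents_of(cid: str) -> set:
    """Connections can be queried from either side symmetrically."""
    return {p for (c, p) in parent if c == cid}
```

There's no "earlier node" to lose track of here: any tuple can be reached from any query, so backtracking through the structure is free.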
d
The way I understand it, in order to represent an NxM matrix (of numbers) in a relational language, I would need to create NxM 4-tuples. Each 4-tuple would contain a globally unique ID representing that particular matrix, a row coordinate, a column coordinate, and a number. That's 4x the memory requirement of the usual matrix representation (where the numbers would be stored in contiguous memory). Matrix multiplication would be very slow. With a normal matrix data structure, you can retrieve a row from a 4x4 matrix by loading a single cache line, but to retrieve the 4 numbers in a row from the tuple store would require significant computation and memory bandwidth. This problem becomes much worse for image manipulation, where each pixel in a photograph is represented by a 6-tuple {guid, x, y, r, g, b}. The naive representation that I'm describing would be too slow and memory inefficient to be practical. So my impression has been that pure relational languages must be a lot less general purpose than conventional languages, which can support efficient multi-dimensional arrays quite easily. To fix the problem, it appears that you would need to extend a relational language with support for data structures.
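The representation d describes can be sketched directly (a deliberately naive illustration of the overhead, not a proposal): matrices as sets of {matrix_id, row, col, value} 4-tuples, with every access paying a search over the tuple store.

```python
# Two hypothetical 2x2 matrices "M" and "N" stored as 4-tuples.
cells = {
    ("M", 0, 0, 1.0), ("M", 0, 1, 2.0),
    ("M", 1, 0, 3.0), ("M", 1, 1, 4.0),
    ("N", 0, 0, 5.0), ("N", 0, 1, 6.0),
    ("N", 1, 0, 7.0), ("N", 1, 1, 8.0),
}

def cell(mid: str, i: int, j: int) -> float:
    """Retrieving one entry is a scan over the tuple store,
    not a single indexed load from contiguous memory."""
    for (m, r, c, v) in cells:
        if m == mid and r == i and c == j:
            return v
    return 0.0

def matmul(a: str, b: str, n: int, out: str) -> set:
    """Naive relational matrix multiply: every access pays the lookup cost."""
    return {(out, i, j, sum(cell(a, i, k) * cell(b, k, j) for k in range(n)))
            for i in range(n) for j in range(n)}
```

Run naively like this, each multiply entry performs O(n) scans of the whole store, which makes the cache-line comparison in the comment above vivid.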
n
There's nothing about relational programming that demands a particular machine representation of information. In my explorations, I find it helpful to completely separate language features from possible machine representations. So there are two questions here:

1. What could the programming interface for matrices look like in my language? What language constructs could be used to define them?
2. How are these particular language constructs compiled to machine code?

I'm actually still working out the semantics of sequential and grid-like data, so I don't have a good answer yet. However, I aim for the language constructs to ultimately compile to the same machine code that conventional languages compile to.
❤️ 1
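One way to picture that separation (a sketch of the general idea, not of any committed design): keep a relational-looking interface where you query cells by coordinates, while the storage underneath is the same flat contiguous block a conventional language would use.

```python
import array

class Matrix:
    """Logically a relation of (row, col, value) tuples;
    physically a contiguous array of doubles."""

    def __init__(self, n: int, m: int, values):
        self.n, self.m = n, m
        self.data = array.array("d", values)  # contiguous, cache-friendly

    def cell(self, i: int, j: int) -> float:
        """Logically a query over the relation;
        physically a single indexed load."""
        return self.data[i * self.m + j]

    def cells(self):
        """Enumerate the relation's tuples on demand -- the per-tuple
        overhead exists only at the interface, never in memory."""
        for i in range(self.n):
            for j in range(self.m):
                yield (i, j, self.data[i * self.m + j])
```

The point is only that the logical model (tuples in relations) and the compiled representation (contiguous memory) are independent choices, which is what a compiler for such a language would exploit.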