# thinking-together
d
GraphQL seems to be a more expressive/high-level query language than SQL (things like relationships, sorting, and filtering are left up to the implementation), so I'd imagine something like that might be more likely. I think with the right type representations, program synthesis can at least stumble upon valid wirings, and the problem gets shifted back to the specification of desired outcomes.
i
I'd point to Datalog and Datomic as interesting touchstones. CSS is also very interesting as a query language — it's declarative, high-level, and at an interesting place in terms of abstraction vs concretion. There's a lot to be learned from comparing and contrasting all of these different approaches to query. What follows will be some off-the-cuff conjecture. Caveat emptor.

The thing to look at, between SQL and GraphQL/Falcor and Datomic/Datalog, is what underlying principles inform or support the design of the language model. SQL is strongly rooted in the relational algebra, so you get all the fantastic properties of set theory. Datomic/Datalog are designed around the mechanics of predicate logic, so your queries are logical expressions fed to a constraint solver.

GraphQL/Falcor/etc. are not rooted in set theory. They're not really rooted in any underlying logic, one could argue. They were designed to suit the structure of React, not to surface the power of set theory (or any other algebra) in a programmable way. The fact that GraphQL leaves relationships, sorting, filtering, etc. up to the implementation is a symptom of a missing underlying theory.

What would be nice to see is someone taking the lessons of GraphQL, like the power of allowing for partial evaluation of a query, and applying them to a query engine with a richer set of underlying semantics. There was a tiny bit of this attempted by David Nolen with Om.next, but it never got off the runway.
🤔 4
👍 1
💯 1
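To make the contrast above concrete, here is a toy Python sketch. It is not Datomic, SQL, or any real engine; the dataset, attribute names, and query shape are all invented for illustration. It answers the same question twice: once in the relational-algebra style, where relations are sets of tuples and the query is a join plus selection plus projection, and once in the Datalog style, where facts are entity-attribute-value triples and the query is a conjunction of logical patterns solved by unification.

```python
# Toy sketch only: a made-up dataset and two evaluation styles, to illustrate
# the "underlying theory" point. Not how SQL engines or Datomic actually work.

# --- Relational-algebra flavor: relations are sets of tuples -----------------
users = {(1, "alice"), (2, "bob")}            # (id, name)
orders = {(10, 1, 250), (11, 2, 40)}          # (id, user_id, amount)

# "Which users placed an order over 100?" as join + selection + projection.
big_spenders_sql_style = {
    name
    for (uid, name) in users
    for (_oid, ouid, amount) in orders
    if ouid == uid and amount > 100
}

# --- Datalog flavor: facts are (entity, attribute, value) triples ------------
facts = {
    ("u1", "user/name", "alice"),
    ("u2", "user/name", "bob"),
    ("o1", "order/user", "u1"),
    ("o1", "order/amount", 250),
    ("o2", "order/user", "u2"),
    ("o2", "order/amount", 40),
}

def match(pattern, fact, env):
    """Unify one (e, a, v) pattern with one fact; return the extended
    variable bindings, or None if they conflict."""
    env = dict(env)
    for p, f in zip(pattern, fact):
        if isinstance(p, str) and p.startswith("?"):   # ?x is a logic variable
            if p in env and env[p] != f:
                return None
            env[p] = f
        elif p != f:
            return None
    return env

def query(patterns, facts):
    """Solve a conjunction of patterns: every pattern must hold at once,
    with shared variables doing the 'joining'."""
    envs = [{}]
    for pattern in patterns:
        envs = [new for env in envs for fact in facts
                if (new := match(pattern, fact, env)) is not None]
    return envs

# The same question, written as a logical expression over the facts.
answers = query([("?o", "order/user", "?u"),
                 ("?u", "user/name", "?name"),
                 ("?o", "order/amount", "?amt")], facts)
big_spenders_datalog_style = {a["?name"] for a in answers if a["?amt"] > 100}

assert big_spenders_sql_style == big_spenders_datalog_style == {"alice"}
```

Note how the Datalog-flavored version never mentions a join: the shared variables ?u and ?o do the joining, which is one concrete reading of "your queries are logical expressions fed to a constraint solver."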
p
Thanks for the pointer to Om.next. I’m also inspired by Datomic, and conceive of an application framework in some similar ways. Do you know why the project stalled?
i
At the time, David Nolen was also the main maintainer of ClojureScript during a phase where it was improving rapidly, and he was working as part of the Datomic team. So I think Om.next just wasn't enough of a priority for him. He gave a few excellent talks describing the design goals, and then never ended up realizing them. Following that, re-frame burst onto the scene and (fairly, I'd say) stole the spotlight, and enthusiasm around Om.next faded away.
👍 2
s
Some aspects that are key to 'get right' here, from my perspective, are:

- Eliminating the database idea. One reason is getting rid of the impedance mismatch between 'database schema' and 'application programming language'. The db has its own schema language and is considered distinct from the other processes in the system. Middle tiers and web/phone clients map exactly the same data into their own little in-memory schemas (classes, dicts, ...) with different schema languages: a whole bunch of repeated definitions with slightly different shapes and slices of the 'greater schema'. Doing this also makes syncing and cache consistency a problem to be solved separately. We can eliminate the notion of a database itself and replace it with the idea of a global conceptual data model, stored in a meta model and available system-wide to all processes.
- Clear separation of the conceptual model from the implementation details. The in-memory data and messages on the wire may at any time be only different, small slices of the conceptual model. Where some computation happens should be independent of what computation happens.
- Deep versioning, designed for incrementality and change (Peter mentioned this already). No schema has only one version forever, and any system that doesn't support versions as a first-class concept just means we have to solve the problem outside the system, which doesn't work too well. With versions as first-class entities in the meta model, all instances of data or objects belong not just to a class but to a class@version. When you have class@version2, the in-memory or persisted objects remain members of the older version until upgraded (see the sketch after this message).
- Persistence is orthogonal. Once the db is gone, do we only have in-memory objects? Of course not; we want to attach persistence annotations to our conceptual model that define which objects are persisted, with what durability and reliability, and with what kind of system-wide consistency.

While neither Datomic nor SQL does all of the above, I do like (some) ideas in them. The relational set theory seems powerful and rich (though SQL with exposed join keys seems yucky, because that's an implementation detail). And the 'append only' idea in Datomic seems right. Many of the ideas above are very nicely elaborated in a book I am reading and really enjoying (perhaps because it resonates so well with my thoughts): Vertically Integrated Architectures by Jos Jong. I haven't finished the book yet, but I would recommend it to anyone interested in this space; it argues well for a change to some deeply entrenched ideas.
👀 1
👍 4
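As one possible reading of the 'class@version' and 'persistence is orthogonal' points above, here is a minimal Python sketch. The names (ClassVersion, Instance, upgrade) are invented for illustration and are not an API from Datomic, SQL, or the Jong book; the idea is only that instances belong to a specific class@version, stay on the old version until explicitly upgraded, and carry persistence as an annotation on the conceptual model rather than as a property of a separate database.

```python
# Minimal sketch of "class@version" membership and orthogonal persistence.
# All names here are made up for illustration.
from dataclasses import dataclass

@dataclass(frozen=True)
class ClassVersion:
    """One version of one conceptual class in the meta model."""
    name: str
    version: int
    fields: tuple             # field names defined by this version
    persistent: bool = False  # persistence as an annotation, not a database

@dataclass
class Instance:
    of: ClassVersion          # membership is in class@version, not just class
    data: dict

# Two versions of the same conceptual class coexist in the meta model.
user_v1 = ClassVersion("User", 1, ("name",), persistent=True)
user_v2 = ClassVersion("User", 2, ("name", "email"), persistent=True)

def upgrade(inst: Instance, to: ClassVersion, defaults: dict) -> Instance:
    """Move an instance to a newer version of its class; until this runs,
    the instance remains a member of the older class@version."""
    assert inst.of.name == to.name and inst.of.version < to.version
    data = {f: inst.data.get(f, defaults.get(f)) for f in to.fields}
    return Instance(of=to, data=data)

alice = Instance(of=user_v1, data={"name": "alice"})          # a User@1
alice_v2 = upgrade(alice, user_v2, defaults={"email": None})  # now a User@2
```

A fuller meta model would presumably also carry the durability, reliability, and consistency annotations mentioned above; this only shows the versioning mechanics.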
j
Thanks for the pointer to the Jong book, @shalabh. I'm about 1/3 of the way through it and I'm finding it provocative. Certainly at my day job we keep coming back to the frustration that so much of our labor is essentially data-model plumbing, and to debates about raw entity queries vs. "rendering" APIs, and this book seems to offer a different vantage point on that. Also, any book with a section called "How and Why JSON Conquered" was bound to charm me.
👍 1