# thinking-together
d
Can someone familiar with the concept of stratification in logic programming explain it to me, or point me to a good explanation? My textbook's explanation was a little dense for a noob like me (@rntz?). I see it's tied up with the idea of a level mapping, but I don't have a good intuition of what a level mapping is.
i
The simple explanation is that it breaks one fixed point into multiple. All the logic in stratum 1 must converge before you move on to stratum 2, which then converges, then stratum 3...
this gives well-defined meaning to constructs (like negation) that would otherwise be logical fallacies
so you split all the rules up into strata and only apply those rules in their given stratum
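(Not from the thread, but to make the "one fixed point per stratum" idea concrete, here's a minimal Python sketch of a hypothetical two-stratum program, reach/unreach over a toy graph; all the names and facts are invented.)

```python
# Hypothetical two-stratum program (all names and facts invented):
#   stratum 1:  reach(X,Y) :- edge(X,Y).
#               reach(X,Y) :- edge(X,Z), reach(Z,Y).
#   stratum 2:  unreach(X,Y) :- node(X), node(Y), not reach(X,Y).
# The negation only refers to reach, which lives in a lower stratum, so
# stratum 1 is run to a fixed point before stratum 2 ever looks at it.

edges = {("a", "b"), ("b", "c")}
nodes = {n for pair in edges for n in pair} | {"d"}

# Stratum 1: iterate the positive rules until nothing new is derived.
reach = set(edges)
changed = True
while changed:
    changed = False
    for (x, z) in edges:
        for (z2, y) in list(reach):
            if z == z2 and (x, y) not in reach:
                reach.add((x, y))
                changed = True

# Stratum 2: negation is now safe, because reach is fully computed.
unreach = {(x, y) for x in nodes for y in nodes if (x, y) not in reach}

print(sorted(reach))   # [('a', 'b'), ('a', 'c'), ('b', 'c')]
print(len(unreach))    # 13 of the 16 ordered pairs are not reachable
```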
d
That makes a certain amount of sense. Can you help me map that to this explanation?
I think I'm starting to get it... "The barber is the man who shaves all men who do not shave themselves" isn't stratified because it has a negative reference on the same stratum, but negative references are only allowed to lower strata. Is that about right?
💯 2
i
That's how I understand it, yeah 🙂
d
Thanks!
Out of curiosity, where did you learn this stuff? Just reading papers? Or was there a book, etc. that you found helpful?
w
Probably from trying to get Eve to behave sensibly. 😉
i
haha yep 🙂 We read lots of papers, but working this stuff out with real programs is what actually taught us I think.
There's a lot that the math doesn't tell you if you're trying to implement one of these systems
e
Dear Daniel, i wouldn't spend too much time on pure logic. One of the famous math books of the 20th century was Whitehead & Russell's Principia Mathematica, and it used a custom symbology, which drove the printers crazy because this was the early 1900s. If i recall correctly it takes them over 100 pages of complete gibberish to prove that 1 + 1 = 2. If you need to sleep, that book is infallible. We all know that 1 + 1 = 2, but a proof of this is surprisingly difficult. And this is where the risk lies in studying problems that are invented by academics. Most of the great breakthroughs in science and math came from people trying to solve a real problem in the world, like the motions of the planets, which inspired Newton and Lagrange, or cracking the Enigma code, which drove Turing's computers, and so many others. When people invent their own problems to solve you head dangerously into "how many angels can dance on the head of a pin" territory, which so plagued medieval thinkers who never looked out into the world. I remember seeing the first category theory book decades ago, and was intrigued by it, but that branch of mathematics is littered with problems that don't matter to anyone but the people getting degrees. I can't think of a language or a tool that i use on a regular basis in computing that was developed at a university since Modula-2/Oberon in the 1980s, from my idol Prof. Wirth of ETH, or Icon from Griswold at the Univ. of Arizona, also around 1980.
🤨 1
Here is a nice article by one of the few people who can understand Principia Mathematica (and i would estimate fewer than 100 alive do): https://blog.stephenwolfram.com/2010/11/100-years-since-principia-mathematica/
k
@Edward de Jong / Beads Project A more recent example of a language/tool from academia that is actually useful in practice is Racket.
w
(Failed to hit return earlier. I blame Jony Ive.) Nothing gets an appendix written for a paper faster than going to the authors with questions while trying to implement what they've written.
r
@Daniel Hines "Negative references are allowed only to lower strata" is exactly right. The "level mapping" idea (which I haven't run across before) seems like it's just a formalization of this, where you assign numbers to predicates according to their stratum. You can also think in terms of dependency graphs between predicates instead of numbering strata explicitly, which I prefer. Let every predicate be a node, and make an edge from A to B if there's a rule with A in the head & B in the body. Call the edge "negative" if B appears negated. Then the program is stratified if every cycle has only positive edges; put another way, you're not allowed to recurse through negation.
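(A small sketch, not from the thread, of that dependency-graph test in Python: a negative edge A -> B lies on a cycle exactly when A is reachable back from B. The rule encoding and the stratifiable() helper are made up for illustration; the barber rule from earlier is the failing case.)

```python
# Made-up encoding: each rule is (head, [(body_predicate, is_negated), ...]).
def stratifiable(rules):
    preds, edges = set(), set()
    for head, body in rules:
        preds.add(head)
        for pred, negated in body:
            preds.add(pred)
            edges.add((head, pred, negated))   # edge head -> pred

    adj = {p: set() for p in preds}
    for a, b, _ in edges:
        adj[a].add(b)

    def reachable(start):
        seen, stack = set(), [start]
        while stack:
            for q in adj[stack.pop()]:
                if q not in seen:
                    seen.add(q)
                    stack.append(q)
        return seen

    # A negative edge A -> B sits on a cycle exactly when A is reachable
    # back from B; that is the "recursion through negation" we forbid.
    return not any(negated and a in reachable(b) for a, b, negated in edges)

# reach/unreach: the only negation points to a lower stratum -> stratifiable.
ok = [("reach",   [("edge", False), ("reach", False)]),
      ("unreach", [("node", False), ("reach", True)])]

# Barber: shaves depends negatively on itself, a negative self-cycle.
barber = [("shaves", [("man", False), ("shaves", True)])]

print(stratifiable(ok))      # True
print(stratifiable(barber))  # False
```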
k
For the umpteenth time I share the kernel of Edward de Jong's concern -- and find the broad opinions extrapolated from it wildly inappropriate. Evolution is robust because it proceeds along many directions at once. I am constantly thankful that it takes all sorts to make a world, and that there are others blazing trails in directions I don't care to go. There are many well-known examples of ideas that seemed incredibly abstract and divorced from reality -- until they suddenly turned out to be useful in understanding our world. To name just one, I believe Gauss's remarkable theorem (https://en.wikipedia.org/wiki/Theorema_Egregium) was, for almost a hundred years, considered a classic case of how many angels can dance on the head of a pin -- until Einstein used it in General Relativity. (It's true that the massive increase in funding for the sciences over the last 100 years has created perverse incentives for bullshit academic activity. But I want to be careful not to throw the baby out with the bathwater. And I say this as a disgruntled ex-academic.) --- It is a common mis-phrasing to say that Principia "takes over 100 pages to prove that `1+1 = 2`". It takes 100 pages to show that a specific system of logic actually connects up with the numbers as others know and recognize them.
d
Regardless of the debate, I promise I asked on purely practical grounds. Moreover, the text I'm working with, *Knowledge Representation, Reasoning, and the Design of Intelligent Agents: The Answer Set Approach*, has very practical ends in mind.
👍 1
The Clingo solver (https://github.com/potassco/clingo) seems like another exception to @Edward de Jong / Beads Project's rule: the Potassco group out of the University of Potsdam is an academic group, but their solver is scheduling Europe's trains and moving Amazon's warehouse robots.
💡 1
e
It isn't a rule; there is nothing stopping academia from producing breakthroughs. It is just an observation that although the funding and number of computer science depts. have exploded since 1980, we have little to show for it. We have reached a point where many of the professors only talk to each other, and don't interact with industry. They communicate in journals that might cost $1000 a year to subscribe to, which means no regular person will ever see their work. There are a lot of things wrong with academia. It's gotten pretty decadent, with administrators getting seven-figure salaries and flying in the school's private jet. Smaller schools are actually suffering, but the top ones are rolling in money. My good friend Paul was at UC Berkeley in the '70s, and the students protested and went on strike because the tuition was being raised $50 a quarter or something like that. Now many students are leaving with six-figure debt, and here in the States we have a crisis brewing with over a trillion in student loans outstanding. Unfortunately, there seems to be a very weak connection at present between industry and academic computing.
👍 1
k
That is all true. But there's a difference between "I wouldn't spend time on academia" and "I wouldn't spend time on pure logic". If 90% of everything is crap, the number goes up to 99% or more for scifi and academia.