# thinking-together

Jason Morris

06/08/2022, 6:07 PM
Is this a known result in PL theory? "The set of stable minimum models returned in response to a query in a goal-directed stable model constraint answer set programming language entail, taken together, a stable model constraint answer set program for answering only that query."
I may be phrasing it wrong. Not a theorist. But this just occurred to me, and it seems intuitive. If true, it means I could let my users compile their encodings into lightning-fast code without sacrificing any reasoning capabilities for expert system applications. Which would be huge.

Václav Blažej

06/08/2022, 7:43 PM
For not being a theorist, you've managed to make the question successfully convoluted. Can you pose the proposition more clearly?
👍 1

Jason Morris

06/08/2022, 7:54 PM
Sure. If you ask your software "what are all the fact patterns (inputs) that could support this output?" and it tells you, can you then pattern-match other inputs against that list of valid fact patterns and get the same result as if you had asked the original program whether those inputs were valid?
Most systems cannot answer that first question, but the one I'm using can, and I'm wondering whether that is effectively a way to compile a logic program and a query into more efficient code for answering only that specific query.
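
A minimal sketch of that "compiled" second option, assuming the abstract answers can be flattened into variable bindings with unconstrained variables simply absent (everything here is hypothetical; a real system would return richer constraint terms):

```python
# Sketch: answer a query by scanning precomputed abstract models
# instead of re-running the reasoner. Assumes each abstract model
# is a dict mapping variables to concrete values; unconstrained
# variables are left out. All names are hypothetical.

def consistent(model, facts):
    """A model survives if no known fact contradicts it."""
    return all(model.get(k, v) == v for k, v in facts.items())

def compiled_query(abstract_models, facts):
    """The 'compiled' program: a linear scan over the models."""
    return [m for m in abstract_models if consistent(m, facts)]
```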

Alex Cruise

06/09/2022, 4:06 PM
Stumbled on this, somewhat relevant https://arxiv.org/pdf/2003.07438.pdf

wtaysom

06/10/2022, 3:32 AM
My parser failed on your original question and on the rephrasing, but there's probably something interesting here. On the rephrasing, it's the second half that loses me. How can there be "other inputs" not included in the list of "all inputs"? (There is more than one way to interpret "other" here.) And the clause "the same result as if you had asked the original program whether the inputs were valid" doesn't help, because "the inputs" is ambiguous: we're talking both about all the inputs that lead to a given output and about other inputs besides.

Jason Morris

06/10/2022, 3:51 AM
Fair. I neglected to mention that the answers to the first question are abstract, expressed as constraints. For example, if your code is the rules of Rock Paper Scissors, your first abstract query might be "how can someone win a two-player game?", and the abstract answers take the form "the first player throws rock, and the second player throws scissors, and the first player wins". That would be one of the three possible abstract models that answer the hypothetical query.

Now suppose I know William and Jason played a game, and I know William threw rock. If what I want to know is who won, I have two options. I can run the same query again against the rules, with the concrete facts I have, asking it to presume only that Jason threw something. Or I can attempt to unify the concrete facts against the abstract models returned by the first query. Either way, the same two results come back: the one where Jason threw scissors, and the one where Jason threw paper.

The complexity of the first query is exponential in the complexity of the rules and the number of hypothetical inputs. The complexity of the second query is linear in the number of models returned by the first one, and exponential only in the number of hypothesized inputs. All the complexity that comes from the rules is avoided. I think. Maybe. 😂
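
To make the unification option concrete, a toy sketch in Python, with the three abstract models hand-written rather than produced by a reasoner (the flat representation, the role abstraction, and all names are assumptions):

```python
from itertools import permutations

# The three abstract models for "someone wins a two-player game",
# abstracted over which player fills which role (hence three, not six).
ABSTRACT_MODELS = [
    {"winner_throw": "rock",     "loser_throw": "scissors"},
    {"winner_throw": "paper",    "loser_throw": "rock"},
    {"winner_throw": "scissors", "loser_throw": "paper"},
]

def unify(model, throws):
    """Try both role assignments; yield completed concrete answers."""
    for winner, loser in permutations(throws):
        w, l = throws[winner], throws[loser]
        if (w is None or w == model["winner_throw"]) and \
           (l is None or l == model["loser_throw"]):
            yield {winner: model["winner_throw"],
                   loser: model["loser_throw"], "winner": winner}

# Known fact: William threw rock; Jason's throw is hypothesized (None).
throws = {"william": "rock", "jason": None}
for m in ABSTRACT_MODELS:
    for result in unify(m, throws):
        print(result)
# Two survivors, as described above:
#   {'william': 'rock', 'jason': 'scissors', 'winner': 'william'}
#   {'jason': 'paper', 'william': 'rock', 'winner': 'jason'}
```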

Chris Knott

06/10/2022, 8:35 AM
I think the complexity is just pushed into the number of models, so while the second query is linear with respect to that, it is still exponential with respect to the original parameters (the number of unknown inputs).
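
A back-of-the-envelope check of that point, assuming a worst case in which every combination of n unknown yes/no inputs yields its own abstract model:

```python
# If every combination of n unknown boolean inputs produces a distinct
# abstract model, then a scan that is "linear in the number of models"
# is still 2**n work in the original parameter n.

for n in range(1, 6):
    print(f"{n} unknown inputs -> up to {2**n} models to scan")
```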
☝️ 1

wtaysom

06/10/2022, 9:54 AM
In general there’s no free lunch, but in general you aren’t in the general case. The advantage of talking about the “models” like you have here is that matching against a given one is easy. The disadvantage is that most programs, even not very interesting ones, are associated with a very large number of models. So you’re better off using a more compact formalism; probabilistic graphical models come to mind. Often there’s a sweet spot where the rules in the code are relatively simple and the data structures encode irregularity without being too big.

I shared a good example the other day. Suppose you have a grid of squares and each square can be part of a path, so seven labels: ┗, ━, ┓, ┛, ┃, ┏, and blank. You want to enumerate all closed loops. You can simplify the problem by enlarging the grid so that horizontal edges, vertical edges, and intersections each get their own cells.

Lastly, the Yoneda Lemma formalizes the idea that you can characterize a thing by its relationships to similar things. In practice you can use it to help figure out the right category of similar things, or the right features to characterize, or both simultaneously.
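
For the grid example, a minimal sketch of why checking a given configuration is easy, assuming each label is described by the sides it opens toward (this representation is an assumption, not from the thread):

```python
# Local-consistency check for a grid of path tiles. Each tile opens
# toward some sides; a grid is a set of closed loops exactly when
# every open side meets a matching open side on its neighbour
# (every non-blank tile has degree 2, so matched ends close into loops).

OPENS = {  # sides each tile connects: N, E, S, W
    "┗": {"N", "E"}, "━": {"E", "W"}, "┓": {"W", "S"},
    "┛": {"N", "W"}, "┃": {"N", "S"}, "┏": {"E", "S"},
    " ": set(),
}

def consistent(grid):
    rows, cols = len(grid), len(grid[0])

    def open_towards(r, c, side):
        if not (0 <= r < rows and 0 <= c < cols):
            return False  # edges off the board are closed
        return side in OPENS[grid[r][c]]

    for r in range(rows):
        for c in range(cols):
            sides = OPENS[grid[r][c]]
            if ("N" in sides) != open_towards(r - 1, c, "S"): return False
            if ("S" in sides) != open_towards(r + 1, c, "N"): return False
            if ("E" in sides) != open_towards(r, c + 1, "W"): return False
            if ("W" in sides) != open_towards(r, c - 1, "E"): return False
    return True

# One closed loop:
print(consistent([["┏", "┓"],
                  ["┗", "┛"]]))  # -> True
```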

Jason Morris

06/13/2022, 6:06 PM
So what I hear you saying is that it might work, but may not be computationally advantageous. I will test that. For my particular sort of problem, the search trees tend to be deep and wide, and the current method makes both very expensive; the alternative would have less width and near-zero depth. I'm all but certain it would be faster. I'm just not certain it is semantically identical for all queries... but I suppose I will find out eventually. Thanks, both.