# thinking-together
s
Are there any techniques or metrics to define or measure how specific or generic something is? That something could be a function or class in a programming language, an algorithm, or even something less formally defined, like a component in a UI framework. I'm familiar with some measures of complexity, like counting dependencies or cyclomatic complexity, which seems somewhat related. Appreciate any pointers to papers or articles about this, even if they "just" define how something is more or less specific or generic than something else to support a hypothesis that is totally unrelated.
šŸ° 1
d
I'm not sure what "how specific or general" would actually mean for a class. If you get a really precise definition then you have probably defined how to measure it too!
It reminds me a bit of https://en.wikipedia.org/wiki/Chomsky_hierarchy - the main technique for e.g. proving something is Turing-powerful is to show an isomorphism to a known system with proven properties.
👍 1
s
In BV's ladder of abstraction, the number of parameters of a functional abstraction is the measure of abstraction
d
Interesting. The types involved feel like a huge factor for measuring generalisation/specialisation e.g.
add(a : ushort, b : ushort)
add(a : int, b : int)
add(a : double, b : double)
add(a : any, b : any)
add<T>(a : T, b : T)
add<T>(... list : T)
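[Editor's note: a minimal TypeScript sketch of the ladder of signatures above. TypeScript has no ushort/int/double distinction, so `number` stands in for the concrete numeric types, and the function names are hypothetical; each step accepts a strictly larger set of inputs than the last.]

```typescript
// Most specific: only numbers.
function addNumbers(a: number, b: number): number {
  return a + b;
}

// More general: any two values of the same type T, given a way to combine them.
function addPair<T>(a: T, b: T, combine: (x: T, y: T) => T): T {
  return combine(a, b);
}

// Most general: any number of values of type T, folded with a combiner
// and an identity element (i.e. a monoid over T).
function addAll<T>(combine: (x: T, y: T) => T, identity: T, ...items: T[]): T {
  return items.reduce(combine, identity);
}

// Each step up the ladder trades specificity for applicability:
const n = addNumbers(1, 2);                        // 3
const s = addPair("foo", "bar", (x, y) => x + y);  // "foobar"
const total = addAll((x: number, y: number) => x + y, 0, 1, 2, 3); // 6
```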
I am reading "An Introduction to General Systems Thinking" right now and it discusses how to measure the relative generality of scientific laws. Weinberg claims that laws should properly be stated as conditionals: "if... so...". He says that the fewer "if... so..." clauses, the more general the law. Applying this to code: instead of looking inside the code for its generality, look outside at the context in which it can be used, i.e. how many caveats there are about when it can or cannot be used.
👍 1
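[Editor's note: a hypothetical TypeScript illustration of Weinberg's measure as applied to code — generality counted by caveats in the contract, not by anything inside the body. Both function names are invented for this sketch.]

```typescript
// Two caveats ("if... so..." clauses): the list must be non-empty,
// AND every value must be finite. Otherwise the result is undefined behavior.
function meanStrict(xs: number[]): number {
  if (xs.length === 0) throw new Error("if the list is empty, the mean is undefined");
  if (!xs.every(Number.isFinite)) throw new Error("if any value is non-finite, the mean is undefined");
  return xs.reduce((a, b) => a + b, 0) / xs.length;
}

// Zero caveats: defined for every input. By Weinberg's measure it is the
// more general function, even though its body is no simpler.
function meanTotal(xs: number[]): number | undefined {
  const finite = xs.filter(Number.isFinite);
  if (finite.length === 0) return undefined;
  return finite.reduce((a, b) => a + b, 0) / finite.length;
}
```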
s
Variables are also known as "degrees of freedom"
⚔ 1
👍 1
d
Number of variables/parameters doesn't seem like a good measure when we can just encode any number of variables into a Gödel number.
👍 1
s
Good point
I feel like you're onto something with the types thread...
(Can't we measure the length of the Gödel number?)
t
The first thing that comes to my mind is https://en.wikipedia.org/wiki/Kolmogorov_complexity but you kind of want the inverse. Given a parameterized definition, what is the size of the space it covers.
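[Editor's note: a toy TypeScript sketch of the "size of the space a definition covers" idea — approximate each parameter type by the cardinality of its value set and take the product over parameters. The type table and function names are assumptions for illustration, not an established metric.]

```typescript
// Approximate cardinalities of some parameter types; Infinity stands in
// for unbounded types so that "any" dominates every finite type.
const cardinality: Record<string, number> = {
  bool: 2,
  byte: 256,
  ushort: 65536,
  any: Infinity,
};

// coverage(["ushort", "ushort"]) approximates the size of the input space
// of add(a : ushort, b : ushort); unknown types count as unbounded.
function coverage(paramTypes: string[]): number {
  return paramTypes.reduce((acc, t) => acc * (cardinality[t] ?? Infinity), 1);
}
```

On this crude measure, add(a : any, b : any) covers an (infinitely) larger space than add(a : ushort, b : ushort), matching the intuition in the types thread above.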
j
y'all getting silly with this complexity theory fuss
FWIW, it seems like the best way to think about "specific vs general" isn't as a quantitative measure, but as a comparative relation. X is more general than Y if there's a nice mapping from situations where you'd use Y to situations where you could use X instead. (This is analogous to the concept of "reducing" problems in complexity theory; egg on my face for talking smack about CT 5 minutes ago.)
There's a lot of wiggle room in that definition, and rightfully so. For instance: is a Turing machine more general than ANY computational artifact? Well, yeah, kind of. Although the mapping from your function to the Turing machine will basically involve programming the function into the Turing machine, so that's not that "nice" of a mapping.
Two examples of particularly nice mappings:
• Passing the exact same parameters along, typecast into a more general type! (So add(a : double, b : double) is more general than add(a : int, b : int).)
• Passing the exact same parameters along, with a few extra parameters! (So join(list, delimiter) is more general than joinWithCommas(list).)
But as the mapping becomes more and more complex (writing wrapper code to adapt a library to a different style of API, etc.), it becomes less and less "nice", and it becomes less and less clear that one artifact is more general than the other.
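[Editor's note: the second "nice mapping" above, made concrete in TypeScript. `join` and `joinWithCommas` are the hypothetical names from the discussion; the point is that the mapping witnessing "X is more general than Y" is a one-line adapter.]

```typescript
// The more general artifact: delimiter is a parameter.
function join(list: string[], delimiter: string): string {
  return list.join(delimiter);
}

// The more specific artifact reduces to the general one: every call site of
// joinWithCommas can be rewritten as a call to join with one extra constant.
function joinWithCommas(list: string[]): string {
  return join(list, ",");
}
```

The mapping here is "nice" because it is a trivial adapter; a mapping that amounts to reprogramming the callee (the Turing machine case above) is not, which is why the comparative relation gets murky as adapters grow.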