

Mariano Guerra

12/23/2019, 3:12 PM
It’s important because it’s a way to conceptualize why neural networks are, in a way, better than classical ML algorithms. This lack of leaks means that anyone can play around with them without breaking the whole thing and being thrown one level down. Want to change the shape? Sure. Want to change the activation function? Sure. Want to add a state function to certain elements? Go ahead. Want to add random connections between various elements? Don’t see why not… etc.

I truly think that they are among the first of a "new" type of mathematical abstraction. They allow people who don't have the dozen-plus years of background in applied mathematics to do applied mathematics.
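A minimal numpy sketch of the point above — the network, the layer shapes, and the function names here are illustrative, not from the article. The idea is that each piece (shape, activation) can be swapped independently without anything else breaking:

```python
import numpy as np

def relu(x):
    return np.maximum(0.0, x)

def tanh(x):
    return np.tanh(x)

def forward(x, layers, activation=relu):
    # Each layer is a (weights, bias) pair. "Changing the shape" is just
    # editing the list of layers; "changing the activation function" is
    # just passing a different callable. No other piece needs to know.
    for w, b in layers:
        x = activation(x @ w + b)
    return x

rng = np.random.default_rng(0)
layers = [(rng.normal(size=(4, 8)), np.zeros(8)),
          (rng.normal(size=(8, 2)), np.zeros(2))]
x = rng.normal(size=(1, 4))

out_relu = forward(x, layers, activation=relu)
out_tanh = forward(x, layers, activation=tanh)
print(out_relu.shape, out_tanh.shape)  # same (1, 2) shape either way
```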


Alex Ellis

12/23/2019, 3:19 PM
interesting article! I think I disagree with the author's point that mathematical abstractions are more leaky than CS ones. traditional "rigorous" math education (I'm thinking bachelor's through research level) encourages folks to understand the entire stack of abstractions, but that's not terribly different from a CS student learning about transistors and basic circuit design

there are many fields in math in which nearly no one understands every piece like this. e.g. you can use the "big machines" of algebraic topology without understanding all the proofs behind them (and in many cases, if you were not primarily a topologist, fully understanding them would be a waste of your time). known theorems are the "API". and like many APIs, you need to understand the domain concepts well (the terms used in the statement of the theorem) in order to use them correctly

regarding neural networks, here's a famous blog post arguing the opposite of the above article's claim: https://medium.com/@karpathy/yes-you-should-understand-backprop-e2f06eab496b
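The post linked above is about how the backprop abstraction leaks in practice. A small sketch of one of its examples — a saturated sigmoid killing the gradient — assuming numpy and a single scalar unit (not code from the post itself):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def sigmoid_grad(x):
    # d/dx sigmoid(x) = sigmoid(x) * (1 - sigmoid(x))
    s = sigmoid(x)
    return s * (1.0 - s)

# Near zero the gradient flows; deep in the saturated tail it all but
# vanishes, so weights feeding this unit barely learn — a leak that only
# shows up if you know what backprop is doing under the hood.
healthy = sigmoid_grad(0.0)     # exactly 0.25
saturated = sigmoid_grad(10.0)  # on the order of 1e-5
print(healthy, saturated)
```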

another interesting paper if you want to explore the degree to which components of NNs can be treated as abstractions: https://arxiv.org/abs/1711.10455


Wouter

12/24/2019, 2:38 AM
Sadly their "robustness" as a mathematical abstraction also makes them very black-box-like


Konrad Hinsen

12/24/2019, 3:06 PM
I wonder what the author considers an abstraction. Sums and integrals are not abstractions for me. They are important concepts. To do anything useful with integrals, you need to understand that concept. An example of a mathematical abstraction is the function, which you can use in practice without knowing much about its definition in terms of sets.


Alex Ellis

12/24/2019, 6:57 PM
those are closer to abstractions in the software sense — they have APIs that are much simpler than their internals, they have known semantics, etc.