Dear Daniel,

I wouldn't spend too much time on pure logic. One of the famous math books of the 20th century was Whitehead & Russell's Principia Mathematica, and it used a custom symbology, which drove the printers crazy because this was the early 1900s. If I recall correctly, it took them several hundred pages of complete gibberish to prove that 1 + 1 = 2. If you need to sleep, that book is infallible. We all know that 1 + 1 = 2, but a proof of it from first principles is surprisingly difficult (see the sketch at the end of this note). And this is where the risk lies in studying problems that are invented by academics.

Most of the great breakthroughs in science and math came from people trying to solve a real problem in the world: the motions of the planets inspired Newton and Lagrange, cracking the Enigma code drove Turing's machines, and so on. When people invent their own problems to solve, they head dangerously into "how many angels can dance on the head of a pin" territory, which so plagued the medieval thinkers who never looked out into the world. I remember seeing my first category theory book decades ago and being intrigued by it, but that branch of mathematics is littered with problems that don't matter to anyone but the people getting degrees.

I can't think of a language or tool that I use on a regular basis in computing that was developed at a university since the 1980s. The last ones were Modula-2/Oberon from my idol Prof. Wirth at ETH, and Icon from Griswold at the University of Arizona, also around 1980.
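P.S. An aside on the 1 + 1 = 2 business: those hundreds of pages in Principia went into building logic and arithmetic from nothing; once the machinery exists, the proof itself is nearly a one-liner. Here is a minimal sketch in Lean 4 with toy Peano naturals of my own (the names N, add, one, and two are just ones I picked for illustration), where the proof reduces to checking that both sides compute to the same term:

    -- Peano naturals built from scratch: zero and successor.
    inductive N where
      | zero : N
      | succ : N → N

    -- Addition by recursion on the second argument.
    def add : N → N → N
      | n, .zero   => n
      | n, .succ m => .succ (add n m)

    -- 1 and 2 spelled out as successor terms.
    def one : N := .succ .zero
    def two : N := .succ (.succ .zero)

    -- Both sides reduce to succ (succ zero), so reflexivity closes it.
    theorem one_add_one : add one one = two := rfl

The "rfl" at the end just says both sides are equal by computation; all the real work is in the definitions above it, which is the same place Whitehead & Russell spent their pages.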