The author makes two superficially contradictory claims: redundancy generally leads to inefficiency, and DB administrators introduce redundancy to increase efficiency. I think in the first case they're referring to developer/user efficiency, and in the second to computer (DB server) efficiency. Other trade-offs between those two kinds of efficiency spring to mind: low-level vs. high-level languages, simple vs. complex UIs. This is pretty unrelated to the content of the article, but I thought it was a little interesting.
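That trade-off is basically denormalization. As a rough in-memory sketch (hypothetical data, not anything from the article): a normalized layout stores each fact once, so reads need an extra lookup (the analogue of a JOIN), while a denormalized layout copies the fact into every row, making reads cheap but writes costlier and inconsistency possible.

```python
# Normalized: customer names stored exactly once; reading an order's
# customer name costs an extra lookup (in-memory analogue of a JOIN).
customers = {1: "Ada", 2: "Grace"}
orders_normalized = [
    {"order_id": 100, "customer_id": 1},
    {"order_id": 101, "customer_id": 2},
]

def customer_for(order):
    # One extra dictionary lookup per read -- the cost of no redundancy.
    return customers[order["customer_id"]]

# Denormalized: the name is copied into every order row. Reads are a
# single access, but every redundant copy must be kept in sync on update.
orders_denormalized = [
    {"order_id": 100, "customer_id": 1, "customer_name": "Ada"},
    {"order_id": 101, "customer_id": 2, "customer_name": "Grace"},
]

def rename_customer(cid, new_name):
    customers[cid] = new_name
    # Redundancy makes writes costlier: touch every copy or risk
    # the anomaly where old and new names coexist.
    for row in orders_denormalized:
        if row["customer_id"] == cid:
            row["customer_name"] = new_name
```

So "efficiency" really does mean two different things: the denormalized shape is faster for the server to read, while the normalized shape is easier for the developer to keep correct.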