# thinking-together
m
https://twobithistory.org/2018/05/27/semantic-web.html
The web we have today is slowly becoming a glorified app store, just the easiest way among many to download software that communicates with distant servers using closed protocols and schemas, making it functionally identical to the software ecosystem that existed before the web. How did we get here? If the effort to build a Semantic Web had succeeded, would the web have looked different today? Or have there been so many forces working against a decentralized web for so long that the Semantic Web was always going to be stillborn?
😄 1
Cory Doctorow, a blogger and digital rights activist, published an influential essay in 2001 that pointed out the many problems with depending on voluntarily supplied metadata. A world of "exhaustive, reliable" metadata would be wonderful, he argued, but such a world was "a pipe-dream, founded on self-delusion, nerd hubris, and hysterically inflated market opportunities."3 Doctorow had found himself in a series of debates over the Semantic Web at tech conferences and wanted to catalog the serious issues that the Semantic Web enthusiasts (Doctorow calls them "semweb hucksters") were overlooking.4 The essay, titled "Metacrap," identifies seven problems, among them the obvious fact that most web users were likely to provide either no metadata at all or else lots of misleading metadata meant to draw clicks.
In forums like the World Wide Web Consortium (W3C), a huge amount of effort and discussion went into creating standards before there were any applications out there to standardize. And the standards that emerged from these "Talmudic debates" were so abstract that few of them ever saw widespread adoption. The few that did, like XML, were "uniformly scourges on the planet, offenses against hardworking programmers that have pushed out sensible formats (like JSON) in favor of overly-complicated hairballs with no basis in reality." The Semantic Web might have thrived if, like the original web, its standards were eagerly adopted by everyone. But that never happened because—as has been discussed on this blog before—the putative benefits of something like XML are not easy to sell to a programmer when the alternatives are both entirely sufficient and much easier to understand.
k
The Semantic Web has found some uses in specific domains, such as scientific data annotation, e.g. in bioinformatics. But even in those domains, where there is good motivation for semantic markup, getting people to agree on ontologies is a hard problem.
d
I believe schema.org is still going...
m
it's mentioned in the article; that's the JSON/pragmatic approach that came after the original W3C effort didn't work
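for anyone who hasn't looked at it, the schema.org approach is basically just embedding a small JSON-LD object in the page for crawlers to read. A minimal sketch (the vocabulary terms like `@context`, `@type`, and `headline` are schema.org's; the Python wrapper and the field values are only for illustration):

```python
import json

# Minimal sketch of the "pragmatic" schema.org / JSON-LD approach:
# metadata is a small JSON object rather than an RDF/XML ontology.
article_metadata = {
    "@context": "https://schema.org",
    "@type": "Article",
    "headline": "Whatever Happened to the Semantic Web?",
    "datePublished": "2018-05-27",
    "url": "https://twobithistory.org/2018/05/27/semantic-web.html",
}

# A publisher would drop this into the page inside a
# <script type="application/ld+json"> tag for search engines to pick up.
print(json.dumps(article_metadata, indent=2))
```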
d
I should just read the article in future!
😉 1
w
I say no, as one who liked the Semantic Web, OWL, and all. Still, I don't think the Semantic Web was ever a contender for maintaining decentralization. Instead, I refer you to Doctorow's recent writings on adversarial interoperability. https://www.eff.org/deeplinks/2019/10/adversarial-interoperability
👍🏽 2
❤️ 3
z
Wow fantastic article!
y
Good to see a history on this, I've wanted to read one for a while! I see parallels here to other human psycho-technological constructs that haven't panned out well, like logical conlanguages and classical AI. In the case of logical conlanguages, there's the social problem of people not agreeing on semantics and ontologies, plus a tension between the creators' specific aesthetic notions of elegance and simplicity and the practical, realistic needs of a language speaker and the psychology they already have. Seems like there may have been similar problems with the Semantic Web movement.
e
Ted Nelson has many insightful talks about how the internet went down the wrong path. He focuses on the way hyperlinks were made to be unidirectional, which doomed the web to its current, very sub-optimal state.
b
Seems like Ben Thompson’s aggregation theory is one force at play here. When there were many search engines, each one had an incentive to outcompete the others by serving users well — both protecting users from fraud/spam/viruses/porn, and helping users find what they needed most usefully. But eventually, one of these aggregators would emerge as more effective and popular than the others, and take over as the runaway leader, and then the incentives change… so now I can’t just browse the web; I get these obnoxious AMP URLs that I need to manually de-googlify if I want to use the actual source URL. Better monopoly-breaking federal policies are one possible solution to this, though of course those are a clumsy cudgel and often wielded as just another extension of the plutocracy 😞 https://stratechery.com/aggregation-theory/