# of-ai
k
An interesting take on AI risk: https://metamoderna.org/what-can-stop-the-ai-apocalypse-grammar-yes-only-grammar/ The vision presented in this article is that humans with their institutional superstructures (bureaucracies, markets, corporations, ...) and AIs (plural) will/should form an ecosystem in which all players coevolve, competing and collaborating at the same time. I don't think any of the players are ready for this, but in the long run, this is where we could be heading.
j
We’ll need some research that could plausibly generate AIs first, I suppose.
k
What did I just read?! Can someone post a summary that doesn't require understanding Lacan/Derrida/et al.?
k
@Jack Rusher What's nice about this analysis is that it doesn't really rely on any specific notion of what AI is. "Large-scale information processing system" is all it takes.

@Kartik Agaram Well, the author is a sociologist, and that shows in some places. I just skip the references and try to make sense of what's left, which works pretty well for me.