# thinking-together
g
Branching out from the discussions about the current state of AI agents vs. how they were imagined in the text from the recent podcast episode... I was glad to hear our hosts were on a similar thought-track when discussing the "send this draft to the rest of the group and let me know when they've read it" example: programmers tend to parse the last bit as "let me know when the metrics indicate they scrolled to the end" (etc.), whereas a human agent would parse it as "follow up with them after a reasonable time and ask whether they've read it or not."

One thing that struck me was the number of times I've told a white lie in that kind of scenario. Yes, I read it! (No, I haven't, but I just pulled it up when you asked and I'm skimming it now before the meeting starts.) Dishonesty with the metrics-mindset takes a different form. In simple cases, it's like terms-of-service pages that gate the "agree" button behind a scroll value: you scroll to the end to lie. But if the designers of those metrics were somehow able to perfect them (eye tracking? a quiz at the end?), then an AI agent could force a more accurate/truthful answer from you.

All that to get to my point... Do we have a right to lie about this stuff? How important is it for us to be able to present less-than-truthful representations to other people, even if the medium is an AI agent? If we don't include this concern in our designs, are we facilitating a world of micro-surveillance of coworkers and friends?
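For illustration, a minimal sketch of that scroll-gated "agree" pattern, assuming a plain page with a `#terms` scroll container and an `#agree` button (both ids are made up):

```typescript
// Minimal sketch of a scroll-gated "agree" button (element ids are assumptions).
// The button only unlocks once the terms have been scrolled to the bottom,
// i.e. the "read" signal is really just a scroll offset.
const terms = document.querySelector<HTMLDivElement>("#terms")!;
const agree = document.querySelector<HTMLButtonElement>("#agree")!;

terms.addEventListener("scroll", () => {
  const scrolledToEnd =
    terms.scrollTop + terms.clientHeight >= terms.scrollHeight - 1;
  if (scrolledToEnd) {
    agree.disabled = false; // "they read it" == "they scrolled to the end"
  }
});
```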
a
Absolutely. An app that automatically sends read receipts is often wrong about whether I’ve read something. That’s a lie. If my AI is going to lie, I’d rather it be on my behalf. I don’t think I can even consider it my AI unless its lies are in my interest.
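A rough sketch of why those automatic receipts are often wrong (the `sendReadReceipt` function and the conversation id are hypothetical): a common pattern fires the receipt as soon as the conversation window gains focus, not when anyone has actually read anything.

```typescript
// Sketch of an automatic read-receipt trigger (sendReadReceipt and the
// conversation id are hypothetical).
declare function sendReadReceipt(conversationId: string): void;

window.addEventListener("focus", () => {
  // Merely switching back to the window counts as "read".
  sendReadReceipt("thinking-together");
});
```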
i
Totally agree with Tom — we have the right to lie. Feels relevant: https://jeffreymoro.com/blog/2020-02-13-against-cop-shit/
> “When pressed for specifics on how they managed, officers tended to dodge the issue with statements such as, ‘You gotta make priorities, we met the intent, or we got creative,’ ” the report said. “Eventually words and phrases such as ‘hand waving, fudging, massaging, or checking the box’ would surface to sugarcoat the hard reality that, in order to satisfy compliance with the surfeit of directed requirements from above, officers resort to evasion and deception.”
And speaking of agents: I don't think this is exactly what I was looking for, but I don't have a good Googleable query to get to a source, so I asked agents. Bard made up "The Culture of Fudging in the U.S. Armed Forces" by David Vine. (That was enough to get me to something real.) To follow up, I asked ChatGPT (GPT-3.5). It gave generic advice on how to approach looking up the essay. Also on Bard, there's a "View other drafts" dropdown that comes up with James Kitfield and James Mann as other people who might write this sort of thing. Not helpful. However, some have noticed that you can use variations like this to gauge an LLM's strength of belief.
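A rough sketch of that belief-gauging idea, assuming a hypothetical `queryModel(prompt)` call that returns one sampled answer: ask the same question several times (like flipping through drafts) and treat the level of agreement as a crude proxy for how strongly the model "believes" its answer.

```typescript
// Rough sketch: gauge an LLM's "strength of belief" by sampling the same
// question several times and measuring agreement. queryModel is hypothetical.
declare function queryModel(prompt: string): Promise<string>;

async function beliefStrength(prompt: string, samples = 5): Promise<number> {
  const answers = await Promise.all(
    Array.from({ length: samples }, () => queryModel(prompt))
  );
  // Count how often the most common (normalized) answer appears.
  const counts = new Map<string, number>();
  for (const a of answers) {
    const key = a.trim().toLowerCase();
    counts.set(key, (counts.get(key) ?? 0) + 1);
  }
  const top = Math.max(...Array.from(counts.values()));
  return top / samples; // 1.0 = every draft agrees; ~1/samples = pure guessing
}
```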
e
this sort of invites a (dangerous?) conversation about the divide and similarities between a *lie* and a *falsehood* — wherein one sort of denotes a moral imperative and the other one doesn’t
typically in programming we seem to be keen to make systems that cannot produce falsehoods (this isn’t a statement about if we do or don’t, mind you) — but when we communicate it is sort of against the grain of cultural norms (at least those I’ve been exposed to) to lie
…with maybe a caveat or digression now to ask about exaggeration?
is it that a system that reports read-status creates a scenario wherein folks are “forced” to lie — I marked the thing as read but didn’t actually read it — or is it that a system that reports read-status diminishes privacy, but couches that behind a false interaction?
the label may say “read” but it is all about time on device, baaaaaaybeeeeee
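A minimal sketch of that "the label says read, but it's really time on device" heuristic (the 3-second threshold and `markAsRead` are assumptions):

```typescript
// Time-on-device "read" heuristic (markAsRead and the threshold are assumptions).
// Nobody is ever asked whether they read anything; the label is inferred from
// how long the message stayed visible on screen.
const READ_THRESHOLD_MS = 3000;

function trackReadStatus(el: HTMLElement, messageId: string) {
  let visibleSince: number | null = null;
  const observer = new IntersectionObserver((entries) => {
    for (const entry of entries) {
      if (entry.isIntersecting) {
        visibleSince = performance.now();
      } else if (visibleSince !== null) {
        if (performance.now() - visibleSince >= READ_THRESHOLD_MS) {
          markAsRead(messageId); // the label says "read"; the data says "visible for 3s"
        }
        visibleSince = null;
      }
    }
  });
  observer.observe(el);
}

declare function markAsRead(messageId: string): void; // assumed backend call
```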
g
Read receipts are a good example but maybe a bit too empirical. I agree that programmers love "real data" and mostly intend to create systems that claim to operate on that level. Makes me think about all the fuss about tracking usage vs. asking the user if they liked the product.
Imagine if you asked the AI, "do my friends like me?" lol. And it went and asked their agents, and their agents read their group chats or correlated social post engagement and sentiment analysis and said "probably not." Versus asking your friends and giving them the chance to: 1) politely say yes, and 2) realize you're feeling insecure and potentially be more compassionate to you.
Maybe the whole point of my original post was I'm concerned that engineers are going to crunch these social problems into antisocial solutions.
Which feels like a particular danger now, because AI chatbots present a freer, more social interface: a whole different context from a UI, which feels tool-like.
e
Revisiting this!
> Do we have a right to lie about this stuff? How important is it for us to be able to present less-than-truthful representations to other people, even if the medium is an AI agent?
I've been thinking about systems that provide us the right to lie -- so far, the keenest example I think I've found is in games. Many games are built around at least some aspect of hidden information, and many games empower (invite?) players to lie about the current game state to other players. With a vague gesture, are there ways to construct systems that may lie where that lying is permissible, or made explicit?
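As one hedged sketch of "lying made explicit by the system" (all the names and rules below are invented, loosely in the spirit of bluffing games): the game state separates what a player actually holds from what they claim, and the rules make the claim, not the truth, the thing other players act on.

```typescript
// Toy sketch of a game where lying is an explicit, permitted move.
// All card names and rules here are invented for illustration.
type Card = "duke" | "assassin" | "contessa";

interface PlayerState {
  hand: Card[];        // hidden information: what the player actually holds
}

interface Claim {
  player: string;
  claimedCard: Card;   // what the player *says* they hold; may be false
}

// Other players only ever see the claim. The system permits the claim and the
// hand to disagree; challenging a claim is what reveals the truth.
function resolveChallenge(state: PlayerState, claim: Claim): "claim holds" | "caught lying" {
  return state.hand.includes(claim.claimedCard) ? "claim holds" : "caught lying";
}
```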
a
I lie in email, Slack, status messages, filenames, code comments, wishlists… It could be the way I’m primed by this thread, but I feel like honesty-enforcing systems are exceptional. I can edit a file’s mtime to be older than its actual modification time! In my generation, people get pretty emotionally invested in the mechanisms we use to enforce immutable truth, like cryptography and DRM and the little switch that made floppies read-only. I wonder if that’s different for younger people since those controls are so much more commonplace now? Except floppies.
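On the mtime point, a minimal Node sketch (the file path is an assumption): `fs.utimesSync` will happily set a file's access and modification times to whatever you like.

```typescript
// Minimal sketch: set a file's mtime to a date long before it was actually
// modified (the file path is an assumption).
import * as fs from "node:fs";

const path = "./draft.txt";
const fakeTime = new Date("2001-01-01T00:00:00Z");

// utimesSync(path, atime, mtime): both timestamps are just metadata we can rewrite.
fs.utimesSync(path, fakeTime, fakeTime);

console.log(fs.statSync(path).mtime); // reports 2001, regardless of reality
```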