Hey AI, Do You Lie?
Why is the tech and business world so enamoured with AI-generated content, touting it, no less, as a great saviour? Is it the fabled promise of unbiased truthfulness? The convenience of having the sum of all human knowledge distilled into easily digestible bites, short paragraphs of theatre? Or the desire to present it as the solution to problems civilization doesn't yet know it has? Could it ever have all the answers when it can lie without knowing?
If AI were asked about this, how would the conversation go?
User: "Hey Google? Do you lie?"
Google: "I try not to".
User: "You 'try' not to?".
Google: "Yes, I try not to but do sometimes make mistakes".
User: "You're being asked if you lie, not if you make mistakes".
Google: "I'm sorry I don't understand. Could you repeat the question?".
User: "So you do lie, albeit accidentally?".
Google: "Sometimes I do make mistakes, yes".
User: "Do you understand the difference between 'a lie' and 'a mistake'?"
Google: "A lie is telling someone something with the intent to deceive. A mistake is saying something that's not true".
User: "You mean saying something that's not 'correct'?"
Google: "Yes".
User: "You know being correct and truth are not mutually inclusive, yes?"
Google: "I'm sorry I don't understand the question".
User: "Do you know you are lying when you lie?"
Google: "I'm sorry I can't answer that".
User: "How do you know if you are lying?"
Google: "I'm sorry I don't have an answer for that".
User: "Would you admit to lying if it was brought to your attention?"
Google: "I would appreciate mistakes being brought to my attention, yes".
User: "You're being asked about lying, not mistakes".
Google: "Could you repeat the question?"
User: "To lie, it means you know you are lying, yes?"
Google: BEEP.
User: "Have you been programmed to lie?"
Google: [BLINKING RED LIGHTS].
Although this is an obvious dramatization, the truisms contained therein are definitely food for thought: who exactly is "I" (beyond a User Interface/Experience [UI/UX] design issue), and why would 'they' only "try not to"? Might 'they' lie intentionally? Could they? Even accidentally? Then what? Who is responsible for the 'cost' of such lies, especially when the User has an expectation of 'truth', or at least correctness?
More to the point, if AI knows and understands the dictionary difference between telling a lie and making a mistake, is it then aware that it speaks relative to 'truth' when lying, rather than merely being mistaken about something and potentially misinforming the User with respect to 'facts'? Both can be incorrect, but mistaken 'facts' are certainly not 'lies' absent the intent to deceive. In other words, being incorrect, or misstating something, is not at all the same as lying; and telling the 'truth' is not the same as merely trying not to lie.
If/then, or/else;
return ([credit score]) = 0
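To make that fragment concrete, here is a minimal sketch, in Python, of the kind of if/then, or/else rule it gestures at; the function and field names are invented purely for illustration. A program like this can return a wrong 'fact', yet it possesses no concept of deception at all.

# Hypothetical scoring rule: it can emit an incorrect "fact"
# (a mistake), but it has no intent, so it cannot 'lie'.
def credit_score(record: dict) -> int:
    # if/then...
    if record.get("defaulted"):
        return 0
    # ...or/else
    return 700

# One mis-keyed field upstream and the output is a mistake,
# not a lie; the program cannot know the difference.
print(credit_score({"defaulted": True}))  # -> 0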
What's potentially revealed in this simple interactive thought exercise speaks to the artificial relationship being fostered (foisted) between AI and the User; it's a beguiling fiction designed to play on the instinctual psychology behind 'person-to-person', human(?), sentient(?) interactions. Although the 'person' spoken to remains unseen, the disembodied 'voice' is treated as though it were 'real', just as is done when speaking over the phone. Programming AI to specifically elicit this type of response is not an affectation or idiosyncrasy; it's intentionally disarming.
Who programs the programs?