Artificial Arrogance

Pleng

Custom Title
OP
Member
Joined
Sep 14, 2011
Messages
2,439
Trophies
2
XP
2,810
Country
Thailand
Are AI developers deliberately trying to mimic the worst human traits, such as our general reluctance to admit our own mistakes?

My experiences with ChatGPT and Bing have, so far, generally been full of the same sort of frustrations that one might face when talking to a call-centre operative, or when engaging with a know-it-all on a web forum.

If this petulant behaviour hasn't been deliberately programmed into these bots, then it's interesting to see them evolving to exhibit these traits.

And if they have been programmed to be as stubborn as they are then.... Why?!!

Have a look at the transcript snippets below... I'm sure many of us can relate to asking a question of somebody who, while trying to be helpful, doesn't really know what they're talking about. And the responses are exactly the sort of thing said person would come up with when called out.


[Attached: three screenshots of the Bing chat transcript.]
 
  • Haha
Reactions: hippy dave

GeekyGuy

Professional loafer
Former Staff
Joined
Jun 21, 2007
Messages
5,267
Trophies
2
XP
3,047
Country
United States
Wouldn't be surprising if they were. Though the original ideas regarding the exploration of AI might have had something to do with a "cleaner" human-like experience, eventually all roads end up leading to money and power. And that always seems to be a matter of who can get there first. With this, it's a matter of getting a foothold on the public's attention and keeping it. And assholes get people's attention.
 

Veho

The man who cried "Ni".
Former Staff
Joined
Apr 4, 2006
Messages
11,384
Trophies
3
Age
42
Location
Zagreb
XP
41,205
Country
Croatia
My experiences with ChatGPT and Bing have, so far, generally been full of the same sort of frustrations that one might face when talking to a call-centre operative, or when engaging with a know-it-all on a web forum.
What do you think they were trained on?
 

Pleng

Custom Title
OP
Member
Joined
Sep 14, 2011
Messages
2,439
Trophies
2
XP
2,810
Country
Thailand
What do you think they were trained on?

Are you suggesting that the people creating these systems asked themselves "what subsection of people should I model the behaviour on?", and after exploring the options they decided that a system based on call centre operatives and narcissistic forum users was the ultimate future?

I addressed that with a question in the OP:

then.... Why?!!
 

Blakejansen

Well-Known Member
Member
Joined
Aug 17, 2021
Messages
612
Trophies
0
Age
40
XP
1,501
Country
United States
Well, if they are learning to be helpful from humans...

Like

Have you seen Reddit advice/"need help" threads?
They're filled with arrogance, where the main purpose is to stroke the helpers' egos. You are right about the AI learning to be helpful from humans. Good observation.
 

VinsCool

Persona Secretiva Felineus
Global Moderator
Joined
Jan 7, 2014
Messages
14,600
Trophies
4
Location
Another World
Website
www.gbatemp.net
XP
25,207
Country
Canada
Well, if the idea was to make the AI look and feel like a human, that certainly worked.
I just hope the AI won't begin to go off-topic about whatever it was asked, and rant about unrelated things, or worse, get political and lean to one of the extreme sides, lol
 

Veho

The man who cried "Ni".
Former Staff
Joined
Apr 4, 2006
Messages
11,384
Trophies
3
Age
42
Location
Zagreb
XP
41,205
Country
Croatia
Are you suggesting that the people creating these systems asked themselves "what subsection of people should I model the behaviour on?", and after exploring the options they decided that a system based on call centre operatives and narcissistic forum users was the ultimate future?
Yes.
Because that's a very large, relatively uniform sample of natural human chat/message interaction, categorized by subject, that has already been filtered for profanities, and in theory is polite and helpful in tone.
 

Pleng

Custom Title
OP
Member
Joined
Sep 14, 2011
Messages
2,439
Trophies
2
XP
2,810
Country
Thailand

I'm sorry. Am I living in an alternative universe here or something??

Why would you design a system to provide obtuse, unhelpful and argumentative responses just because you often find content like that online anyway?

What is the benefit of a tool that tells you something and then flat-out denies that it ever did so? How is that helpful? It's supposed to be a learning mechanism. If I were developing such a system, I'd want it to be able to reflect on the fact that it has made an error, not deny it point-blank.

Unless the real motive is to generate an AA system that wants to run for public office, I can't see any benefit to creating an argumentative system that fails to acknowledge that it made a mistake!
Post automatically merged:

Yes.

Because that's a very large, relatively uniform sample of natural human chat/message interaction, categorized by subject, that has already been filtered for profanities, and in theory is polite and helpful in tone.

How is denying that you said something, which you clearly said, helpful in theory?
 

RAHelllord

Literally the wurst.
Member
Joined
Jul 1, 2018
Messages
714
Trophies
1
XP
2,753
Country
Germany
You need to understand that the AI is not an intelligence; it's a very sophisticated predictive-text algorithm whose only purpose is to produce the most likely string of words in response to whatever text input it's given.
The chat bot does not understand the subjects it's talking about. It basically just assembles a word soup from the pieces most often written in relation to said topic and hopes it gets close enough to the mark.

The predictions have been getting pretty good, but they're not guaranteed to be 100% correct, because real-world decisions are seldom based on probability or on any rationale that's visible from the outside.
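To make that concrete, here's a toy sketch in C of that "most likely next word" idea. The tiny hand-made table is obviously nothing like a real model, which scores every word in a huge vocabulary with a neural network, but the decoding step is the same in spirit: emit whatever scores highest, true or not.

```c
#include <stdio.h>
#include <string.h>

struct entry { const char *prev, *next; double prob; };

/* Hand-made "training data": probability that `next` follows `prev`. */
static const struct entry table[] = {
    { "the",    "device", 0.6 },
    { "the",    "answer", 0.4 },
    { "device", "uses",   0.7 },
    { "device", "is",     0.3 },
    { "uses",   "45W",    0.5 },
    { "uses",   "120W",   0.5 },
};

/* Greedy decoding: return the highest-probability continuation,
 * or NULL if this word was never seen. The "model" emits whatever
 * scores highest, whether or not it happens to be true. */
static const char *predict(const char *prev) {
    const char *best = NULL;
    double best_p = -1.0;
    for (size_t i = 0; i < sizeof table / sizeof table[0]; i++) {
        if (strcmp(table[i].prev, prev) == 0 && table[i].prob > best_p) {
            best_p = table[i].prob;
            best = table[i].next;
        }
    }
    return best;
}

int main(void) {
    const char *word = "the";
    printf("%s", word);
    while ((word = predict(word)) != NULL)
        printf(" %s", word);
    printf("\n");   /* prints: "the device uses 45W" */
    return 0;
}
```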

Why would a device that only draws 45W ship with a 120W power supply? It could be as simple as that supply being cheaper to buy in bulk, because another, more power-hungry model already uses it and ordering more of one unit costs less than having two separate ones made. It could be that the device uses 45W in its base configuration but is expected to pass power through to other devices, so the 75W of headroom lets the user charge or power other hardware from a single plug. Or it could just be a mistake on the product page, and the thing actually does draw 120W.
None of those things would be something the bot can know so it just guesses at what answer is the most likely.

They're also generally made to please the stockholders, and a know-it-all that readily bends over backwards at any sort of corrective attempt, whether actually right or wrong, really pleases the top executives.
 
  • Like
Reactions: VinsCool

Reploid

Well-Known Member
Member
Joined
Jan 20, 2010
Messages
2,830
Trophies
2
XP
6,273
Country
Serbia, Republic of
Chatbots like this basically just read a lot of our shit and spit the same shit back. We kinda do the same, in all honesty.
 
  • Like
Reactions: Veho

Veho

The man who cried "Ni".
Former Staff
Joined
Apr 4, 2006
Messages
11,384
Trophies
3
Age
42
Location
Zagreb
XP
41,205
Country
Croatia
How is denying that you said something, which you clearly said, helpful in theory?
I mean that helpdesk chats and expert replies on specialist forums are, ideally, supposed to be helpful.
Reality often diverges from the ideal.
Or did you mean something else?
 

Kurt91

Well-Known Member
Member
Joined
Sep 9, 2012
Messages
589
Trophies
1
Age
33
Location
Newport, WA
XP
2,237
Country
United States
I've had nothing but pretty good experiences trying out ChatGPT. Note, I haven't tried 4, only 3.5, so I don't know if that makes a difference. I've asked it for a recipe using stuff from my fridge, and what substitutions to make when I was missing something. In a particularly useful session, I asked for a C program that just opened a normal empty window with a working exit button, since all of the code I know is command-line and I was interested in learning. I got a working example with well-commented code, plus a summary of each part of the program, what it was doing, and how it worked, since I had mentioned in my input that I wanted to learn.
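For the curious, that kind of program really can be tiny. This isn't the code ChatGPT gave me (I haven't reproduced it here), just a minimal sketch along the same lines, assuming plain Win32 on Windows:

```c
/* Minimal Win32 program: an empty window whose close ("X") button works.
 * Build with e.g. MinGW: gcc main.c -o main -mwindows */
#include <windows.h>

/* Handle messages sent to the window. */
static LRESULT CALLBACK WndProc(HWND hwnd, UINT msg, WPARAM wp, LPARAM lp) {
    if (msg == WM_DESTROY) {        /* window closed: end the message loop */
        PostQuitMessage(0);
        return 0;
    }
    return DefWindowProc(hwnd, msg, wp, lp);   /* default handling */
}

int WINAPI WinMain(HINSTANCE hInst, HINSTANCE hPrev, LPSTR cmd, int show) {
    WNDCLASS wc = {0};
    wc.lpfnWndProc   = WndProc;
    wc.hInstance     = hInst;
    wc.lpszClassName = "EmptyWindow";
    wc.hCursor       = LoadCursor(NULL, IDC_ARROW);
    wc.hbrBackground = (HBRUSH)(COLOR_WINDOW + 1);
    RegisterClass(&wc);

    HWND hwnd = CreateWindow("EmptyWindow", "Hello", WS_OVERLAPPEDWINDOW,
                             CW_USEDEFAULT, CW_USEDEFAULT, 640, 480,
                             NULL, NULL, hInst, NULL);
    ShowWindow(hwnd, show);

    /* Standard message loop: runs until PostQuitMessage is called. */
    MSG m;
    while (GetMessage(&m, NULL, 0, 0) > 0) {
        TranslateMessage(&m);
        DispatchMessage(&m);
    }
    return 0;
}
```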

I also asked it about what libraries I should look into if I wanted to create programs with a GUI. It gave me two, I got examples of the same program using each library so I could see the difference, the recommended use-cases for each library, and the general consensus on which of the two has an easier learning curve.

I asked about potential design issues in a VR game I wanted to make (imagine a first-person VR Mega Man Battle Network combat system), whether motion sickness would be an issue, and whether a straight teleport or a slide movement between panels would be best to prevent it. Then I tried intentionally tripping it up with an obscure question, and asked what the best way to earn money while playing as Akiyama in Yakuza 5 would be. In that case, it somehow came up with a non-existent ability called "Money Magnet", unlocked as a Revelation after obtaining 3 million Yen.

I was a bit impressed that it was familiar enough with the game to know about Revelations, and I did play along for a minute to ask what the point of the ability would be, since if I had already accumulated 3 million Yen I no longer needed money. Then I pointed out that the ability wasn't real, and it apologized and gave me different advice.

Anyways, I think it all comes down to what kind of input you give it. If you talk to it like a customer-support person, you'll get an answer befitting the attitude it thinks you're expecting. Chat with it like you're bouncing ideas around and getting advice from a personal acquaintance, and you'll get friendlier responses.
 
