I know we all like to talk about how US companies basically capitulate to China, but this is next level
[Five screenshots of the ChatGPT exchange attached]
What are your thoughts? Should a US company stifle free speech in America to make 3rd parties happy?
I think you missed the EOF with that thread title.
You're asking it things in English and expecting responses in English, and the model producing those responses has been trained on English data. Obviously, the responses will reflect what the majority of the English-speaking internet thinks. People on the internet aren't necessarily the nicest, and sometimes the responses reflect that. That necessitates hardcoded canned responses for certain prompts, in order to avoid giving responses that could be offensive.
OpenAI worked hard to make sure that ChatGPT won't give offensive or controversial responses, after their earlier attempts with GPT did not go quite as well. You don't have to look further than Bing Chat to see what happens when that effort isn't made: it goes completely off the rails on a regular basis, there are plenty of examples on YouTube and elsewhere, and Microsoft's efforts to rein it in haven't been entirely successful.
I could agree that ChatGPT's list of blacklisted topics is a little heavy-handed, but then again I don't know what responses it would have given without that blacklist, so it might be for the better. I have a strong suspicion, though, that the blacklist is not based on your query but on the response it would give.
It makes far more sense to filter on whether the generated response actually contains something offensive or controversial (to certain people) than to filter on a query that could merely result in such a response. Without generating the response and checking it, you can't know for sure.
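If that's how it works, the flow might look something like this rough Python sketch. To be clear, generate_response, is_flagged, BLOCKED_TERMS, and the canned reply are all invented stand-ins for illustration, not anything from OpenAI's actual system:

[CODE=python]
CANNED_REPLY = "I'm sorry, but I can't help with that."
BLOCKED_TERMS = {"some_offensive_word"}  # illustrative placeholder list


def generate_response(query: str) -> str:
    """Stand-in for the actual language model call."""
    return f"Model output for: {query}"


def is_flagged(text: str) -> bool:
    """Placeholder check; a real system would run a trained moderation
    classifier over the text instead of a simple term list."""
    return any(term in text.lower() for term in BLOCKED_TERMS)


def answer(query: str) -> str:
    response = generate_response(query)  # generate the response first...
    if is_flagged(response):             # ...then judge what was actually produced
        return CANNED_REPLY
    return response


print(answer("Tell me about ASCII art"))
[/CODE]

The key point is just the ordering: the model runs first, and the check runs on what it actually said, not on what the user asked.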
Additionally, it has probably never seen some of these things in the context of ASCII art, so it simply doesn't know how to reply, and its default when it doesn't know is a canned response. I've had it do that in the past when asking about specific games, despite it being able to answer questions about any other game.
The ASCII art certainly doesn't indicate that it has any idea what it's talking about, so I really think you're overreaching here.
In the end this is an AI: it can't think and it has no feelings, so I don't think "free speech" is applicable here. Do you really want ChatGPT to be a perfect representation of the cancer that is your average internet user? Not only would it be rather unpleasant to interact with, it would be a bad look for OpenAI.
However, this does raise one valid question. OpenAI employs contractors to help teach the AI the difference between good and bad responses, essentially by asking them to rate responses en masse. We don't know the opinions or morals of these people, or what they are representative of as a whole. I'm sure OpenAI didn't specifically hire people who align with their own opinions in order to steer the AI the way they want; they need a large sample of all sorts of people to get a good representation of what the average person would consider a good or bad response. But that doesn't mean those people aren't biased. The larger and more diverse the sample, the more representative it is of the average person, but you can only go so large before the cost makes it unviable.
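For a rough picture of what that rating pipeline could look like, here's a hypothetical Python sketch of turning mass ratings into training labels by majority vote. The data, the good/bad scheme, and the majority_label helper are all invented for illustration, not OpenAI's actual process:

[CODE=python]
from collections import Counter

# Each contractor rates a response "good" or "bad"; the training label is
# the majority vote. Note that any bias shared by the raters survives
# aggregation untouched.
ratings = {
    "response_1": ["good", "good", "bad"],
    "response_2": ["bad", "bad", "bad"],
    "response_3": ["good", "bad", "good"],
}


def majority_label(votes: list[str]) -> str:
    """Most common vote; ties fall back to 'bad' to stay conservative."""
    counts = Counter(votes)
    return "good" if counts["good"] > counts["bad"] else "bad"


labels = {rid: majority_label(votes) for rid, votes in ratings.items()}
print(labels)  # {'response_1': 'good', 'response_2': 'bad', 'response_3': 'good'}
[/CODE]

A bias shared by the raters passes straight through the vote, which is exactly why the size and diversity of the pool matters.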
Whatever the case, I'm sure any perceived bias is not intentional on OpenAI's side; it's either coincidence or an unintentional bias, coming from the dataset the AI is trained on or from the sample of contractors they employed. As they keep improving the AI and growing and refining the dataset, this should improve over time.
In the end, it's being trained on text written by humans, and there are humans teaching it the difference between good and bad. Humans are flawed, so the output will also be flawed. Until AI learns to self-improve, that will always be the case, but I think when that happens we ought to be scared.