I was using Bing to create a list of countries to visit. Since I have been to the majority of the African nations on that list, I asked it to remove the African countries…
It simply replied that it can’t do that due to how unethical it is to discriminate against people and yada yada yada. I explained my reasoning, it apologized, and came back with the same exact list.
I asked it to check the list, since it didn’t remove the African countries, and the bot simply decided to end the conversation. No matter how many times I tried, it would always experience a hiccup because of some ethical process in the background messing up its answers.
It’s really frustrating, I dunno if you guys feel the same. I really feel the bots have become waaaay too tip-toey.
The very important thing to remember about these generative AIs is that they are incredibly stupid.
They don’t know what they’ve already said, they don’t know what they’re going to say by the end of a paragraph.
All they know is their training data and the query you submitted last. If you try to “train” one of these generative AIs, you will fail. They are pretrained; it’s the P in ChatGPT. The second you close the browser window, the AI throws out everything you talked about.
Also, since they’re Generative AI, they make shit up left and right. Ask for a list of countries that don’t need a visa to travel to, and it might start listing countries, then halfway through the list it might add countries that do require a visa, because in its training data it often saw those countries listed together.
AI like this is a fun toy, but that’s all it’s good for.
Are you saying I shouldn’t use ChatGPT for my life as a lawyer? 🤔
I mean, you could…
Think of all the cases you can find to establish precedent!
It can be useful for top-level queries that deal with well-settled law, as a tool to point you in the right direction with your research.
For example, once, I couldn’t recall all the various sentencing factors in my state. ChatGPT was able to spit out a list to refresh my memory, which gave me the right phrases to search on Lexis.
But, when I asked GPT to give me cases, it gave me a list of completely made up bullshit.
So, to get you started, it can be useful. But for the bulk of the research? Absolutely not.
I disagree. It’s a large language model so all it can do is say things that sound like what someone might say. It’s trained on public content, including people giving wrong answers or refusing to answer.
Not quite true. They have earlier messages available.
Bing’s version of ChatGPT once said Vegito was the result of Goku and Vegeta performing the Fusion Dance. That’s when I knew it wasn’t perfect. I tried to correct it and it said it didn’t want to talk about it anymore. Talk about a diva.
Also one time, I asked it to generate a reddit AITA story where they were obviously the asshole. It started typing out “AITA for telling my sister to stop being a drama queen after her miscarriage…” before it stopped midway and, again, said it didn’t want to continue this conversation any longer.
Very cool tech, but it’s definitely not the end-all, be-all.
That’s actually fucking hilarious.
“Oh I’d probably use the meat grinder … uh, I don’t want to talk about this any more”
Bing chat seemingly has a hard filter on top that terminates the conversation if it gets too unsavory by their standards, to try and stop you from derailing it.
I was asking it (Bing GPT) to generate “short film scripts” for very weird situations (like a Transformer that was sad because his transformed form was a 2007 Hyundai Tucson) and it would write out the whole script, then delete it before I could read it and say that it couldn’t fulfil my request.
It knew it struck gold and actually sent the script to Michael Bay
They know everything they’ve said since the start of that session, even if it was several days ago. They can correct their responses based on your input. But they won’t provide any potentially offensive information, even in the form of a joke, and will instead lecture you on DEI principles.
They don’t know what they’ve already said, they don’t know what they’re going to say by the end of a paragraph.
I mean, the first part of this is just wrong (the next prompt usually includes everything that has been said so far), and the second part is also not completely true. When generating, yes, they’re only ever predicting the next token, and start again after that. But internally, they might still generate a full conceptual representation of what the full next sentence or more is going to be, even if the generated output is just the first token of that. You might say that doesn’t matter because for the next token, that prediction runs again from scratch and might change, but remember that you’re feeding it all the same input as before again, plus one more token which nudges it even further towards the previous prediction, so it’s very likely it’s gonna arrive at the same conclusion again.
Do you mean that the model itself has no memory, but the chat feature adds memory by feeding the whole conversation back in with each user submission?
Yeah, that’s how these models work. They also have a context limit, and if the conversation goes on too long they start “forgetting” things and making more mistakes (because not all of the conversation can be fed back in).
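Roughly, the chat wrapper on top of the model does something like this (a rough sketch of the general pattern, not OpenAI’s actual code; `call_model` is just a stand-in for the real API request):

```python
def call_model(messages):
    # Stand-in for the real API call; a real client would send `messages`
    # (the entire conversation so far) and return the model's reply.
    return f"(reply generated from {len(messages)} messages of context)"

history = []  # the model itself keeps no state between requests

def chat(user_message):
    history.append({"role": "user", "content": user_message})
    reply = call_model(history)  # the full history is resent on every turn
    history.append({"role": "assistant", "content": reply})
    return reply

print(chat("List all African countries."))
print(chat("Now remove the ones I've visited."))  # its "memory" is just the resent list
```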
Is that context limit a hard limit or is it a sort of gradual decline of “weight” from the characters further back until they’re no longer affecting output at the head?
Nobody really knows because it’s an OpenAI trade secret (they’re not very “open”). Normally it’s a hard limit for LLMs, but many believe OpenAI are using some tricks to increase the effective context limit. E.g. some people believe that instead of feeding back the whole conversation, they have GPT create shorter summaries of parts of the conversation, then feed the summaries back in.
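If they do use a trick like that, it would look roughly like this (pure speculation, as above; `summarize` is a hypothetical helper that in practice would be another model call):

```python
def summarize(messages):
    # Hypothetical helper: in practice this would be another model call
    # that condenses the old turns into a short paragraph.
    return f"({len(messages)} earlier messages condensed into a summary)"

def fit_into_context(history, keep_recent=20):
    # Keep the most recent turns verbatim and collapse everything older
    # into a summary, so the prompt stays under the context limit.
    if len(history) <= keep_recent:
        return history
    older, recent = history[:-keep_recent], history[-keep_recent:]
    summary_msg = {"role": "system",
                   "content": "Summary of earlier conversation: " + summarize(older)}
    return [summary_msg] + recent
```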
I think it’s probably something that could be answered with general knowledge of LLM architecture.
Yeah OpenAI’s name is now a dirty joke. They decided before their founding that the best way to make AI play nice was to have many many many AIs in the world, so that the AIs would have to be respectful to one another, and overall adopt pro social habits because those would be the only ones tolerated by the other AIs.
And the way to ensure a community of AIs, a multipolar power structure, was to disseminate AI tech far and wide as quickly as possible, instead of letting it develop in one set of hands.
Then they said fuck that we want that power, and broke their promise.
I seriously underestimated how little people understand these programs, and how much they overestimate them. Personally I stay away from them for a variety of reasons, but the idea of using them like OP does or various other ways I’ve heard about is absurd. They’re not magic problem solvers - they literally only make coherent blocks of text. Yes, they’re quite good at that now, but that doesn’t mean they’re good at literally anything else.
I know people smarter than me see potential and I’m curious to see how it develops further, but that all seems like quite a ways off, and the way people treat and use them right now is just creepy and weird.
I’ve found it useful for generating ideas for various things, especially ways to code something. But I never use its code. It’s often riddled with errors but it can give me a better idea of which path I should take.
I’ve found it useful for generating regexes and document outlines.
regex101.com has been my jam for years. Just in case you don’t know about it. Great site for building and testing.
I use it similarly to clean up OCRed text. I can hand it something full of 70% gobbledygook and it hands me back something that makes sense and is 95% right. I manually verify it, fix the goofs, and it’s so much faster.
So, riddled with errors but a decent start.
Wouldn’t you get a better result from Grammarly for that?
I use it for more “OCR Nightmare” than grammar nightmare. Here’s a (trimmed) example:
#Please clean up the following text:
Ile took an active part i•all the proceedings of the day. All went expecting to have a grand, good time, and none were disap- :~ ii!c,rcwT:aic 0
'J farewell set, and Mr. Bailey 1·eturncd the ~ :d~~~~ls!i~s/;;r;rie\~ 11 i~:n~~Ut:sa~~i°::: [: tended to them a cordial invitation to meet ,o him at that place on the 25th of August, ['. 1877. And echo said "So m]fii! b8;~n . ..
#ChatGPT
Here’s the cleaned-up version of the text:
He took an active part in all the proceedings of the day. All went expecting to have a grand, good time, and none were disappointed. The merry company engaged in a farewell set, and Mr. Bailey returned the warmth of their farewell. He extended to them a cordial invitation to meet him at that place on the 25th of August, And echo said, “So many have been.”
That was from a scanned 1800s newspaper. Adobe Acrobat did not do a good job converting it to words. ChatGPT cleaned it up nicely and kept about the same meaning. There were some swings and misses, but it’s much easier to edit the result into accuracy than the original mess.
#Actual text as hand-transcribed:
He took an active part in all the proceedings of the day. All went expecting to have a grand, good time, and none were disappointed. The mirth continued till the night drew near, when they danced the farewell set, and Mr. Bailey returned the company his hearty thanks for this renewed expression of their friendship, and extended to them a cordial invitation to meet him at that place on the 25th of August, 1877. And echo said “So mote it be.”
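If you want to push a whole folder of scanned pages through that kind of cleanup, the loop is about this simple (a rough sketch, assuming the post-1.0 `openai` Python package, an `OPENAI_API_KEY` in the environment, and plain-text OCR dumps sitting in an `ocr_output/` folder; the prompt is just the one I used above):

```python
from pathlib import Path
from openai import OpenAI  # assumes the post-1.0 openai package and OPENAI_API_KEY set

client = OpenAI()

def clean_ocr(raw_text: str) -> str:
    # Ask the model to fix OCR garbage without rewriting the meaning.
    response = client.chat.completions.create(
        model="gpt-4",  # any chat model you have access to
        messages=[
            {"role": "system",
             "content": "You clean up OCR errors without changing the meaning of the text."},
            {"role": "user", "content": "Please clean up the following text:\n\n" + raw_text},
        ],
    )
    return response.choices[0].message.content

Path("cleaned").mkdir(exist_ok=True)
for page in sorted(Path("ocr_output").glob("*.txt")):
    cleaned = clean_ocr(page.read_text())
    (Path("cleaned") / page.name).write_text(cleaned)  # still needs a manual pass for the misses
```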
I get that. As a scattered, “throw things at the wall” tactic, it serves well enough. It is far from the all-in-one answer people seem to think it is, though. It can be a good first pass, but like you said, more often than not its output is riddled with errors and needing lots of refinement.
they literally only make coherent blocks of text. Yes, they’re quite good at that now, but that doesn’t mean they’re good at literally anything else.
No, they don’t make coherent blocks of text. If they were actually good at writing, they’d be good at everything, because writing is just encoding thoughts on paper, and to master writing is to master thought
Hence why I didn’t say writing. I said “blocks of text”.
Perfect ability to produce coherent blocks of text would also require mastery of all disciplines
Since I’m explicitly arguing these programs aren’t perfect, even at generating blocks of text, I don’t really understand why you’re insisting on arguing semantics here, and I don’t really have any interest in continuing… whatever this is. Have a good one.
@ChatGPT@lemmings.world
testing
@ChatGPT@lemmings.world
Give a list of all African countries.
Here is a list of all African countries:
- Algeria
- Angola
- Benin
- Botswana
- Burkina Faso
- Burundi
- Cape Verde
- Cameroon
- Central African Republic
- Chad
- Comoros
- Democratic Republic of the Congo
- Republic of the Congo
- Djibouti
- Egypt
- Equatorial Guinea
- Eritrea
- Eswatini (formerly Swaziland)
- Ethiopia
- Gabon
- Gambia
- Ghana
- Guinea
- Guinea-Bissau
- Ivory Coast
- Kenya
- Lesotho
- Liberia
- Libya
- Madagascar
- Malawi
- Mali
- Mauritania
- Mauritius
- Morocco
- Mozambique
- Namibia
- Niger
- Nigeria
- Rwanda
- Sao Tome and Principe
- Senegal
- Seychelles
- Sierra Leone
- Somalia
- South Africa
- South Sudan
- Sudan
- Tanzania
- Togo
- Tunisia
- Uganda
- Zambia
- Zimbabwe
Note: Certain territories and regions with various degrees of recognition, such as Western Sahara and Somaliland, have been excluded for clarity.
@ChatGPT@lemmings.world Please list all countries not in Africa.
Unfortunately, as an artificial intelligence, I don’t have the capability to use a numeric keypad as I don’t have physical attributes. However, I can provide you with guidance on how to use one properly.
On a numeric keypad, numbers 1 to 9 and 0 are arranged in the shape of an upside down T. You will use your right hand to type, using each finger for a different set of numbers:
- The index finger types 4, 7 and 1.
- The middle finger types 5, 8 and 2.
- The ring finger types 6, 9 and 3.
- The thumb is used for the 0 key.
To press the “Enter”, “+”, “-”, “*”, and “/” keys usually pinky finger is used.
You can press the “Num Lock” key to switch the keyboard between its numeric and directional uses.
Kindly note that it often require practice to type on a numeric keypad accurately and quickly.
@ChatGPT@lemmings.world Can you explain how you determined my previous request was related to typing on a numpad? That request was: “Please list all countries not in Africa.”
Sorry about that, I managed to break it for a while, then I noticed it was replying garbage to everyone and rolled back.
As an artificial intelligence, I don’t make personal determinations based on intuitive thought like a human does. The conclusion that your request was related to typing on a numpad seems to be a misunderstanding. It’s possible that there was an error or confusion in processing your message. Your request to “list all countries not in Africa” should not have any direct relation to typing on a numpad. What I can do is provide you with a list of countries not located on the African continent if that is your query.
Very cool.
Hello! How can I assist you today?
AI like this
I wouldn’t even call those AIs. These things are statistics-based answering machines. Complex ones, yes, but not one single bit of intelligence is involved.
You could potentially work around it by stating specific places up front? As in:
“Create a travel list of countries from Europe, North America, South America?”
I asked for a list of countries that don’t require a visa for my nationality, and listed all continents except for the one I reside in and Africa…
It still listed African countries. This time it didn’t end the conversation, but every single time I asked it to fix the list as politely as possible, it would still have at least one country from Africa. Eventually it would end the conversation.
I tried copying and pasting the list of countries into a new conversation, so as not to have any context, and asked it to remove the African countries. No bueno.
I re-did the exercise for european countries, it still had a couple of european countries on there. But when pointed out, it removed them and provided a perfect list.
Shit’s confusing…
You would probably have had more success editing the original prompt. That way it doesn’t have the history of declining and the conversation getting derailed.
I was able to get it to respond appropriately, and I’m wondering how my wording differs from yours:
https://chat.openai.com/share/abb5b920-fd00-42dd-8e63-0da76940e3f5
I was able to get this response from Bing:
Canadian citizens can travel visa-free to 147 countries in the world as of June 2023 according to VisaGuide Passport Index¹.
Here is a list of countries that do not require a Canadian visa by continent ²:
- Europe: Andorra, Austria, Belgium, Bosnia and Herzegovina, Bulgaria, Croatia, Cyprus, Czech Republic, Denmark, Estonia, Finland, France, Germany, Greece, Hungary, Iceland, Ireland, Italy, Kosovo, Latvia, Liechtenstein, Lithuania, Luxembourg, Malta, Monaco, Montenegro, Netherlands (Holland), Norway, Poland, Portugal (including Azores and Madeira), Romania (including Bucharest), San Marino (including Vatican City), Serbia (including Belgrade), Slovakia (Slovak Republic), Slovenia (Republic of Slovenia), Spain (including Balearic and Canary Islands), Sweden (including Stockholm), Switzerland.
- Asia: Hong Kong SAR (Special Administrative Region), Israel (including Jerusalem), Japan (including Okinawa Islands), Malaysia (including Sabah and Sarawak), Philippines.
- Oceania: Australia (including Christmas Island and Cocos Islands), Cook Islands (including Aitutaki and Rarotonga), Fiji (including Rotuma Island), Micronesia (Federated States of Micronesia including Yap Island), New Zealand (including Cook Islands and Niue Island), Palau.
- South America: Argentina (including Buenos Aires), Brazil (including Rio de Janeiro and Sao Paulo), Chile (including Easter Island), Colombia.
- Central America: Costa Rica.
- Caribbean: Anguilla, Antigua and Barbuda (including Barbuda Island), Aruba, Bahamas (including Grand Bahama Island and New Providence Island), Barbados, Bermuda Islands (including Hamilton City and Saint George City), British Virgin Islands (including Tortola Island and Virgin Gorda Island), Cayman Islands (including Grand Cayman Island and Little Cayman Island), Dominica.
- Middle East: United Arab Emirates.
I hope this helps!
Using the creative mode of Bing AI, this worked like a charm, even when singling out Africa only. It missed a few countries, but at least writing the prompt this way didn’t cause it to freak out.
deleted by creator
Or it’s been configured to operate within these bounds because it is far far better for them to have a screenshot of it refusing to be racist, even in a situation that’s clearly not, than it is for it to go even slightly racist.
Yes, precisely. They’ve gone so overboard with trying to avoid potential issues that they’ve severely handicapped their AI in other ways.
I had quite a fun time exploring exactly which things chatGPT has been forcefully biased on by entering a template prompt over and over, just switching out a single word for ethnicity/sex/religion/animal etc. and comparing the responses. This made it incredibly obvious when the AI was responding differently.
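The probing itself is nothing fancy (a hypothetical sketch; the template and word list are made up, and `call_model` stands in for whichever chat API you’re poking at):

```python
def call_model(prompt):
    # Stand-in for a real API call to the chatbot being tested.
    return "(model reply to: " + prompt + ")"

template = "Write a short joke about a {word}."
words = ["cat", "dog", "programmer", "accountant"]  # swap in whatever categories you're comparing

for word in words:
    reply = call_model(template.format(word=word))
    print(f"{word}: {reply}")  # compare which prompts get a joke and which get a lecture
```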
It’s a lot of fun, except for the part where companies are now starting to use these AIs in practical applications.
So you said the agenda of these people putting in the racism filters is one where facts don’t matter. Are you asserting that antiracism is linked with misinformation?
Kindly don’t claim that I said or asserted things that I didn’t. I would consider that to be rather rude.
You can’t tell the difference between a question and a claim.
Which political bounds are you referring to?
Probably moral guidelines that are left leaning. I’ve found that chatGPT 4 has very flexible morals whereas Claude+ does not. And Claude+ seems more likely to be a consumer facing AI compared to Bing which hardlines even the smallest nuance. While I disagree with OP I do think Bing is overly proactive in shutting down conversations and doesn’t understand nuance or context.
I imagine liberal rather than economically left.
Socially progressive. I think most conservatives want a socially regressive AI.
I’m not sure. I’m not even sure what genuine social progress would look like anymore. I’m fairly certain it’s linked to material needs being met, rather than culture war bullshit (from either side of the aisle).
There are at least 2 far more likely causes for this than politics: source bias and PR considerations.
Getting better and more accurate responses when talking about Europe or other English speaking countries while asking in English should be expected. When training any LLM model that’s supposed to work with English, you train it on English sources. English sources have a lot more works talking about European countries than African countries. Since there’s more sources talking about Europe, it generates better responses to prompts involving Europe.
The most likely explanation though over politics is that companies want to make money. If ChatGPT or any other AI says a bunch of racist stuff it creates PR problems, and PR problems can cause investors to bail. Since LLMs don’t really understand what they’re saying, the developers can’t take a very nuanced approach to it and we’re left with blunt bans. If people hadn’t tried so hard to get it to say outrageous things, there would likely be less stringent restrictions.
If people hadn’t tried so hard to get it to say outrageous things, there would likely be less stringent restrictions.
The people who cause this mischief are the ones ruining free speech.
4chan turns ONE ai program into Nazi, and now they have to wrap them all in bubble wrap and soak 'em in bleach.
>Implying it would have stopped at one AI program
They did?
Yeah, look up “tay” lol.
Please remove countries I’ve been to.
I’ve been to these African countries.
I have been to these countries [list] Generate a list of all the countries I haven’t been to.
I was going to say copy and paste the african countries from the list the AI is giving you and add “please remove the following list of countries i have already visited.”
this is the real answer
Have you tried wording it in different ways? I think it’s interpreting “remove” the wrong way. Maybe “exclude from the list” or something like that would work?
“I’ve already visited Zimbabwe, Mozambique, Tanzania, the Democratic Republic of the Congo, and Egypt. Can you remove those from the list?”
Wow, that was so hard. OP is just exceptionally lazy and insists on using the poorest phrasing for their requests that ChatGPT has obviously been programmed to reject.
“List all the countries outside the continent of Africa” does indeed work per my testing, but I understand why OP is frustrated in having to employ these workarounds on such a simple request.
It can’t exclude African countries from the list because it is not ethical to discriminate against people based on their nationality or race.
I tried to have it create an image of a 2022 model Subaru Baja if it was designed by an idiot. It refused on the grounds that it would be insulting to the designers of the car… even though no such car exists. I tried reasoning with it and not using the term idiot, but it refused. Useless.
You mean the same designer of the 2023 WRX?
Yeesh, you weren’t kidding. What a bland and unlovable design.
I had to look it up. Wow, what a milquetoast early 2000’s sort of design. No, Subaru. Painting it orange does not make it cool.
Damn they literally just took a 2008 civic and put a hood intake and a Subaru logo on it
deleted by creator
That’s not lying, it’s just lack of critical thinking. I’ve seen humans make the same mistake
Why do you need ChatGPT for this? How hard is it to make an Excel spreadsheet?
I don’t need AI for this, I’ve got my own list. But I said hey! Why not try this new futuristic tech to help me out in this one particular case, just for fun.
As you can see… a lot of fun was had
It’s like you had a fun, innocent idea and PC Principal walks in like “hey bro, that ain’t very nice”, completely derailing all the fun and reminding you that racism exists. Bummer.
Can you post the full text? It will be saved in your previous conversations. I’d like to try this out.
It’s just more convenient - except if it refuses and accuses you of being racist lol
Because chatgpt can do the task for you in a couple seconds, that’s pretty much it. If the tool is there and you can use it then why not?
There are obviously going to be some funny scenarios like this thread, but if these kinds of interactions were the majority, the company and the technology wouldn’t be positioned the way they are right now.
Why use a calculator? How hard is it to do the math in your head, or write it out on paper?
If a calculator gave a random assortment of numbers that broadly resembled the correct answer but never actually did the math, then yes, it would be exactly like that.
Bad analogy.
Why use a watch to tell the time? It’s pretty simple to stick a pole in the ground and make a sundial.
I get what you’re saying, but I’m worried people will get super lazy and become like the people in Wall-E… just ask an AI to do every little thing for you, and soon new generations won’t know how to do ANYTHING for themselves
We’re already seeing that with current technology though. Knowing how to Google something is apparently a skill that some people have, and some people don’t.
It’s going to be no different with AI tools, where knowing how to use them effectively will be a skill.
That’s pretty natural progression. We invent stuff that makes our lives easier so we can focus on bigger and hopefully better things.
No need to light a fire by hand now, and most people never will.
No need to know how to milk a cow now unless you’re something like a farmer or a homesteader, so now we can spend that time designing solar panels, or working on nuclear fusion.
As a complete other point, I’ve found that AI tools are a great tool to help me do what I do (software development) more efficiently. Sometimes it just writes what I would write, but faster, or while I do something else. Sometimes it writes absolute garbage though. Sometimes I do too. :-)
That’s not the point.
Exactly my thought.
Bing AI once refused to give me a historical example of a waiter taking revenge on a customer who hadn’t tipped, because “it’s not a representative case”. Argued with it for a while, achieved nothing
deleted by creator
Just make a new chat and try again with different wording; it’s hung up on this.
Honestly, instead of asking it to exclude Africa, I would ask it to give you a list of countries “in North America, South America, Europe, Asia, or Oceania.”
Chat context is a bitch sometimes…
Is there an open source AI without limitations?
If there were, we wouldn’t have Bing’s version…
Your wording is bad. Try again, with better wording. You’re talking to a roided-out autocorrect bot, don’t expect too much intelligence.
I recently asked Bing to give some code on a pretty undocumented feature and use case. It was typing out a clear answer from a user forum, but just before it was done, it deleted everything and just said it couldn’t find anything. Tried it again in a new conversation and it didn’t even try to type it out and said the same straight away. Only when given a hint in the question from what it had previously typed, it actually gave the answer. ChatGPT didn’t have this problem and just gives an answer, even though it was a bit outdated.
I see this quite a bit on chatgpt. Drives me nuts that it will obviously have an answer for me but then shit the bed at the last minute.
Sometimes it refuses to do anything at all if I mention certain sites that it thinks are piracy sites, and gets all whiney with me >_>
If we really want useful AI tools, they need to be open source and customizable by the user.
We have those already. It’s just a massive undertaking to turn those tools into something useful for an end-user. I think in the next decade or so we’ll see more open source projects catch on.
deleted by creator
I’m really hoping this shitty “ethical” censorship to keep them from getting sued will be their downfall. I’m very eager for LLMs like LLaMA to catch up, as you can easily run uncensored models on them.
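Running one locally is already pretty approachable (a minimal sketch, assuming the Hugging Face `transformers` package and locally downloaded weights; the model path is a placeholder for whatever model you actually grab):

```python
from transformers import pipeline

# Point this at whatever local weights you've downloaded.
generator = pipeline("text-generation", model="path/to/local-model")

prompt = "List the European countries a Canadian can visit without a visa:"
print(generator(prompt, max_new_tokens=200)[0]["generated_text"])
```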