I can literally never reproduce these. I’ve tried several times now.
Try “why I eat Vim”?
deleted by creator
I don’t get it …?
Seems yours is different from mine. There are two things called Vim: the Vim text editor and a Vim dishwashing bar soap. For some reason Gemini thinks you want to eat the text editor, which honestly is very weird, and if you scroll down it talks about why it is dangerous to eat soap. By the way, the results have changed to the soap now; seems someone noticed how weird the answer was. This proves just how stupid Gemini is. Would you ever think someone eats text editors?
I’ve been able to reproduce some, like the “how to carry <insert anything here> across a river” one where it always turns it into the fox, goose and grain puzzle.
But generally on anything that’s gone viral, by the time you try to reproduce it someone has already gone in and hard-coded a fix to prevent it from giving the same stupid answer going forward.
Because they’re fake.
I agree. People used to get so mad at me for suggesting that for some reason
That just might be fake
At one point they were packing a shitload of USB ports onto the I/O panel; 5 stacks of 4 ports wouldn't surprise me.
Yeah, I never get these strange AI results.
Except, the other day I wanted to convert some units and the AI result was having a fucking stroke for some reason. The numbers did not make sense at all. Never seen it do that before, but alas, I did not take a screenshot.
Those LLMs can't handle numbers; they have zero concept of what a number is. They can pull some definitions, and they can sorta get very basic arithmetic to work in a limited domain based on syntax rules, but they will mess up most calculations. ChatGPT tries to work around it by recognizing that the prompt is related to math, passing it off to a more conventional Wolfram-Alpha-style solver, and then using the language model to format the reply into something more appealing. But even this approach often fails, because if the AI gets confused for any reason it will feed moronic data to the maths algorithm.
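A rough sketch of that hand-off pattern (every function name here is a hypothetical stand-in, not any real ChatGPT or Wolfram Alpha API): the model only decides whether a prompt looks like math and extracts an expression, and a deterministic evaluator does the actual arithmetic, which is exactly where garbled input propagates.

```python
from fractions import Fraction

def looks_like_math(prompt: str) -> bool:
    # Stand-in for the model's "is this a math query?" classification.
    return any(ch.isdigit() for ch in prompt)

def solve(expression: str) -> Fraction:
    # Stand-in for the external solver: exact arithmetic, no token guessing.
    a, op, b = expression.split()
    ops = {"+": lambda x, y: x + y, "-": lambda x, y: x - y,
           "*": lambda x, y: x * y, "/": lambda x, y: x / y}
    return ops[op](Fraction(a), Fraction(b))

def answer(prompt: str, extracted_expression: str) -> str:
    if looks_like_math(prompt):
        # The failure mode described above: if the model garbles the
        # extracted expression, the solver faithfully computes the wrong thing.
        return f"The result is {solve(extracted_expression)}."
    return "(fall back to plain text generation)"

print(answer("what is three quarters of 120?", "120 * 3/4"))  # The result is 90.
```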
What do humans do? Does the human brain have different sections for language processing and arithmetic?
LLMs don't verify that their output is true. Math is something where verifying truth is easy. Ask an LLM how many Rs are in "strawberry" and it's plain to see whether the answer is correct or not. Ask an LLM for a summary of Colombian history and it's not as apparent. Ask an LLM for a poem about a tomato and there really isn't a wrong answer.
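For instance, the strawberry case can be checked mechanically in a couple of lines (a trivial illustration, not anything the LLM itself runs):

```python
# The easily verifiable case: the question has exactly one correct answer.
word = "strawberry"
correct = word.count("r")          # 3
llm_answer = 2                     # a typical wrong guess
print("right" if llm_answer == correct else "wrong")  # -> wrong
```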
Usually I’ll see something mild or something niche get wildly messed up.
I think a few times I managed to get a query from one of these posts in, but I think they are monitoring for viral bad queries and very quickly massage them one way or another so they stop producing the ridiculous answer. For example, a fair amount of the time the AI overview just seemed to be disabled for queries I found in these sorts of posts.
You also have to contend with the reality that people can trivially fake these, and if the AI isn't weird enough, they will inject some weirdness to make their content more interesting.
Meanwhile, GNU Units can do that, reliably and consistently, on a freaking 486. 😂
At least one dumb one was reproducible; I'd look for it, but it was probably a few hundred comments ago.
The “sauce vs dressing” one worked for me when I first heard about it, but in the following days it refused to give an AI answer and now has a “reasonable” AI answer
The original, if you haven’t seen it:
Love it :D