• 2 Posts
  • 194 Comments
Joined 1 year ago
Cake day: September 27th, 2023



  • A lot of the combinations of 🦋 + a rare emoji end up looking like that: the rare emoji is placed at the head and at the tip of the wand, and the “humanoid with wings” body is tinted in the rare emoji’s color.

    I’m not saying it’s definitely not GenAI, but it’s also something that can easily be solved with an explicit algorithm.

    Most emoji mashups work that way: there are a few templates, and the other emoji is pasted into predetermined places.
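    To make the idea concrete, here’s a minimal sketch of that kind of explicit, template-based composition. All names, anchor coordinates, and slots are hypothetical; real emoji mashup pipelines work on actual image assets, but the structure is the same: a lookup table of templates plus a paste step.

```python
# Hypothetical sketch of template-based emoji composition: each base
# emoji has predetermined anchor slots, and the second emoji (or its
# color) is pasted into those slots. No real emoji data is used here.

TEMPLATES = {
    # base emoji -> named anchor slots and their (x, y) positions
    "🪄": {"wand_tip": (52, 8)},                      # wand: one slot at the tip
    "🧚": {"head": (32, 10), "body_tint": (32, 36)},  # fairy: head slot + tinted body
}

def compose(base: str, other: str, other_color: str) -> dict:
    """Build a render plan: paste `other` at each regular slot of `base`,
    and record `other_color` for any *_tint slot."""
    plan = {"base": base, "pastes": [], "tints": []}
    for slot, pos in TEMPLATES[base].items():
        if slot.endswith("_tint"):
            plan["tints"].append({"at": pos, "color": other_color})
        else:
            plan["pastes"].append({"emoji": other, "at": pos})
    return plan

plan = compose("🧚", "🦋", other_color="#4aa3df")
print(plan["pastes"])  # the butterfly goes where the fairy's head is
print(plan["tints"])   # the winged body gets the butterfly's color
```

    A renderer would then just execute the plan, which is why these combinations look so formulaic: no generative model needed, only a table lookup and a paste.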


  • Other than “they’re gonna stop paying you”, there’s also the risk that inflation erodes the payments over time, since I doubt the amount gets adjusted for inflation.

    But yes, if the jackpot is so high that you’d get $2+ million per month, and assuming you’re worried about the dollar becoming worthless soon, you can still take the $2 million/month and diversify. After a year you should already have plenty of money to live comfortably for the rest of your life.
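    A quick back-of-the-envelope sketch of the inflation point: a fixed nominal payment loses purchasing power every year it isn’t adjusted. The payout and inflation figures below are illustrative assumptions, not real lottery terms.

```python
# Sketch: purchasing power of a fixed annuity payment under inflation.
# $2M/month and 3% annual inflation are assumed, illustrative numbers.

def real_value(payment: float, annual_inflation: float, years: int) -> float:
    """Today's-dollars value of a fixed nominal payment after `years`."""
    return payment / (1 + annual_inflation) ** years

monthly = 2_000_000   # assumed fixed monthly payout
inflation = 0.03      # assumed 3% annual inflation

for y in (0, 10, 20, 29):
    print(f"year {y:2d}: ${real_value(monthly, inflation, y):,.0f} in today's dollars")
```

    At those assumed rates the final payments are worth roughly half the first ones, which is exactly why taking the early payments and diversifying hedges the risk.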







  • If you’re talking about the base image, it’s sort of real.

    The player is YouTuber Max Fosh, and it was a charity football event. However, the incident (as far as we know) was not scripted, and he genuinely tried hard to get a yellow card just so he could pull off this stunt. You can probably find the video he made about it by searching “Max Fosh yellow card”.




  • We can do that with the first sentence and flip it into German, replacing “lighter” with “fireworks”. We get:

    “Sie dürfen die Feuerarbeiten nicht mit in die Luftebene nehmen.” (Literally: “You may not take the fire-works onto the air-plane.” “Feuerarbeiten” and “Luftebene” are deliberately over-literal word-for-word renderings; the normal German words would be “Feuerwerk” and “Flugzeug”.)

    A lot of German-speaking communities online do translate English loanwords literally into German, often intentionally, precisely for this comedic effect.







  • Re LLM summaries: I’ve noticed that too. For some of my classes, shortly after the ChatGPT boom, we were allowed to bring summaries along. I tried feeding it the input text and telling it to condense it into a sentence or two. Often it would just give a generic summary of the topic rather than actually using the concepts described in the original text.

    Also, minor nitpick, but be wary of the term “accuracy”. It’s a terrible metric for most use cases, and when a company advertises their AI’s high accuracy, they’re likely hiding something. For example, say we want a model that detects cancer in medical images. If our test set consists of 1% cancer images and 99% normal tissue, 99% accuracy is achieved trivially by a model that just predicts “no cancer” every time. A lot of the more interesting problems have class imbalances far worse than this one, too.
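    The cancer example above can be checked in a few lines. This is a toy sketch with synthetic labels, using the 1%/99% split from the comment: the always-negative model hits 99% accuracy while its recall (the fraction of actual cancer cases it catches) is zero.

```python
# Sketch of the accuracy pitfall on imbalanced classes (synthetic data):
# a classifier that always predicts "no cancer" on a 1%/99% test set.

labels = [1] * 10 + [0] * 990    # 1% cancer (1), 99% normal tissue (0)
predictions = [0] * len(labels)  # degenerate model: always "no cancer"

# accuracy: fraction of predictions that match the label
accuracy = sum(p == y for p, y in zip(predictions, labels)) / len(labels)

# recall: fraction of actual cancer cases the model caught
recall = sum(p == 1 and y == 1 for p, y in zip(predictions, labels)) / sum(labels)

print(f"accuracy: {accuracy:.2%}")  # 99.00% -- looks great on paper
print(f"recall:   {recall:.2%}")    # 0.00%  -- misses every cancer case
```

    This is why metrics like recall, precision, or balanced accuracy are usually more honest than raw accuracy for imbalanced problems.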