• 7 Posts
  • 1.01K Comments
Joined 3 years ago
Cake day: June 25, 2023


  • I get what you mean. I’ve suspected it’s a combination of factors:

    1. People have a name for it now. You can’t announce or take pride in something you don’t have a name for

    2. People are more accepting of autism now. You’d be more incentivized to hide autism if people thought it was a bad thing

    3. Autistic people tend to attract other autistic people. If you know one autistic person, you probably know a whole bunch of other autistic people too

    But also, I just think that a lot of people underestimated how many people were autistic back then. A lot of high-functioning autistic people can pass for normal until you really get to know them. For instance, I’m like 99% sure that both of my parents are high-functioning autistic, and nobody ever suspected they might be. I brought up the possibility to them and their response was just, “yeah, I figured.”


  • You know how there’s the old schoolhouse stereotype that there’s always a “weird kid” in every class? There’s a good chance that kid was an undiagnosed autist.

    Current estimates put the autism rate at around 1 in 30, which means a typical classroom of about 30 kids is expected to have one autistic kid. That matches the “weird kid in class” stereotype perfectly. People have recognized autism since forever; that’s why the stereotype exists. They just didn’t have an actual word for it yet.


  • “explore and recombine” aren’t really the words I would use to describe generative AI. Remember that it is a deterministic algorithm, so it can’t really “explore.” I think it would be more accurate to say that it interpolates patterns from its training data.
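
    As a toy illustration of the determinism point (my own sketch, nothing to do with any real model’s internals): even “random” sampling is driven by a pseudorandom number generator, so a fixed seed reproduces the exact same output every run.

    ```python
    import random

    # A stand-in "generator": picks words pseudorandomly from a vocabulary.
    def sample_text(seed, vocab=("red", "green", "blue"), length=6):
        rng = random.Random(seed)  # fully determined by the seed
        return " ".join(rng.choice(vocab) for _ in range(length))

    print(sample_text(42))  # some "creative" output
    print(sample_text(42))  # identical output: no genuine exploration
    ```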

    As for the comparison to humans, you bring up an interesting point, but one that I think is somewhat oversimplified. It is true that human brains are physical systems, but just because something is physical does not mean it is deterministic. No computer comes even close to modeling a mouse brain, let alone a human brain.

    And sure, you could make the argument that you could strip out all extraneous neurons from a human brain to make it deterministic. Remove all the unpredictable elements: memory neurons, mirror neurons, emotional neurons. In that case, sure - you’d probably get something similar to AI. But I think the vast majority of people would then agree that this clump of neurons is no longer a human.

    A human uses their entire lived experience to weigh a response. A human pulls from their childhood experience of being scared of monsters in order to make horror. An AI does not do this. It creates horror by interpolating between existing horror art to estimate what horror could be. You are not seeing an AI’s fear - you are seeing other people’s fears, reflected and filtered through the algorithm.

    More importantly, a human brain is plastic, meaning that it can learn and change. If a human is told that they are wrong, they can correct themselves next time. That is not what happens with an AI. The only way an AI can “learn” is by adding to its training data and then retraining the algorithm. It’s not really “learning”; it’s more accurate to say that you’re deleting the old model and creating a new one that holds more training data. If this were applied to humans, it would be as if you grew an entirely new brain every single time you learned something new. Sounds inefficient? That’s because it is. Why do you think AI uses up so much electricity and so many resources? Prompting and generating doesn’t cost much; it’s the training and retraining that eats up the resources.
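
    A rough sketch of that cost asymmetry (the function names and numbers here are mine, purely illustrative): generating from a trained model is cheap and never modifies it, while “learning” means rebuilding the whole model from the enlarged dataset.

    ```python
    # Toy train-once, generate-many pattern. Nothing here models a real
    # system; it only shows where the cost lives.

    def train(dataset):
        # stands in for the expensive part: many passes over all the data
        return {"weights": sum(dataset)}

    def generate(model, prompt):
        # cheap: reads the frozen weights, never changes them
        return f"{prompt} -> {model['weights'] % 97}"

    data = list(range(1_000_000))
    model_v1 = train(data)              # expensive, done once
    print(generate(model_v1, "hello"))  # cheap, repeatable forever

    # The model cannot absorb a correction in place. To "learn",
    # you grow the dataset and pay the full training cost again:
    model_v2 = train(data + [123_456])  # an entirely new model
    ```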

    To summarize: AI is a tool. It’s a pretty smart tool, but it’s a tool. It has some properties that are analogous to human brains, but it lacks others that would make it truly similar. It is in techbros’ best interest to hype up the similarities and hide the dissimilarities, because hype drives up the stock prices. That’s not to say that AI is completely useless. Just as you said in your comment, I think it can be used to help make art, in a similar way that cameras have been used to help make art.

    But in the end, when you cede the decision-making to the AI (that is, when you rely on AI for too much of your workflow), my belief is that the product is no longer yours. How can you claim that a generated piece of art is yours if you didn’t choose to paint a little easter egg in the background? If you didn’t decide to use the color purple for this object? If you didn’t accidentally paint the lips slightly skewed? Even supposing that an AI were completely human-like, the art still wouldn’t be yours, because at that point you’re basically just commissioning an artist, and you definitely don’t own art that you’ve commissioned.

    To be clear, this is my stance on other tools as well, not just AI.


  • I think there’s a bit of a misconception about what exactly AI is. Despite what techbros try to make it seem, AI is not thinking in any way. It doesn’t make decisions, because there is no “it” to make them: it is not an entity. It is an algorithm.

    Specifically, it is a statistical algorithm. It is designed to associate an input with an output. When you do this across billions of input-output pairs, you can use the power of statistics to interpolate and extrapolate, guessing what the output might be for a new input you haven’t seen before. In other words, you could perfectly replicate any AI with a big enough sheet of paper and enough time and patience.
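
    To make the “sheet of paper” point concrete, here’s a toy sketch (my own illustration, not how any production model is built): a bigram “model” that is literally a lookup table of counts, small enough to build and run by hand.

    ```python
    from collections import Counter, defaultdict

    # "Training": count which word follows which in the training text.
    # This table of counts IS the model; you could tally it on paper.
    def train(text):
        model = defaultdict(Counter)
        words = text.split()
        for prev, nxt in zip(words, words[1:]):
            model[prev][nxt] += 1
        return model

    # "Generation": look up the statistically most likely next word.
    # The same input always produces the same output.
    def generate(model, word, length=5):
        out = [word]
        for _ in range(length):
            if word not in model:
                break
            word = model[word].most_common(1)[0][0]
            out.append(word)
        return " ".join(out)

    model = train("the cat sat on the mat and the cat slept")
    print(generate(model, "the"))  # pure table lookups, start to finish
    ```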

    That is why AI outputs can’t be considered novel. Inherently, it is just a tool that processes data. As an analogy, you haven’t generated any new data by taking the average of 5 numbers in Excel - you have merely processed the existing data.

    Even if a human learns from AI-generated art, their art is still art, because a human is not a deterministic algorithm.

    The problem arises when someone uses generative AI for a significant and notable portion of their workflow. At that point, it’s essentially equivalent to applying a filter to someone else’s artwork and calling it new. The debate comes from the fact that there is no clear line between when AI takes up an appropriate vs. an inappropriately large portion of a person’s workflow…


  • My experience has been that you need to pay attention to what I call the “satiety-to-calorie ratio.” Some foods have really good ratios, meaning you feel full without a lot of calories. Some foods have really poor ratios, meaning you get a lot of calories but still feel hungry afterwards. Start keeping track of how full you feel after eating a meal or a snack, and also keep track of how many calories it has.

    You will be surprised by some of the results. Some of what would be considered healthy foods can have pretty poor ratios and some of what would be considered unhealthy can have pretty good ratios. Obviously, the issue here is that we’re only factoring in calories, not nutrients, so this isn’t the end-all-be-all system that you need to follow. But if weight loss is your primary objective, this is a good starting metric.

    You’ll want to cut out or minimize foods that have a low ratio, and keep foods that have a good ratio. Keep a couple of good-ratio snacks for when you crave snacks. That way, you can satisfy cravings without getting too many extra calories.
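
    For example, a tiny way to log this (the foods, calories, and the 1-10 fullness scale are all made up for illustration):

    ```python
    # Illustrative food log: (food, calories, fullness rating 1-10).
    log = [
        ("oatmeal", 300, 8),
        ("donut",   300, 3),
    ]
    for food, kcal, fullness in log:
        print(f"{food}: {fullness / kcal * 1000:.1f} fullness per 1000 kcal")
    # Higher number = more filling per calorie = a better ratio.
    ```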

    I find that it helps me to set a daily calorie limit and aim to keep under it. Going over the limit is fine, but the extra calories get rolled over and need to be paid off over the next few days. If you don’t know the calories of foods that you ate, give your best estimate. You’ll get better at estimating over time as you pay attention to the calories of the foods that you do know.
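
    A minimal sketch of that rollover bookkeeping (the limit and the daily totals are made-up numbers):

    ```python
    DAILY_LIMIT = 2000  # hypothetical daily calorie limit
    debt = 0            # excess calories carried over from earlier days

    def end_of_day(calories_eaten):
        global debt
        overage = calories_eaten + debt - DAILY_LIMIT
        debt = max(0, overage)  # staying under the limit pays down the debt
        return debt

    for day, eaten in enumerate([2300, 1800, 1900], start=1):
        print(f"day {day}: carried-over debt = {end_of_day(eaten)} kcal")
    # day 1: 300 over the limit -> debt 300
    # day 2: 1800 + 300 = 2100  -> debt 100
    # day 3: 1900 + 100 = 2000  -> debt 0
    ```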

    Also, when you cook, make sure to add in the calories from the oil. And if you just need a way to drop your calories rapidly, I find that Soylent and Huel have remarkably good ratios and taste like a milkshake, though they are somewhat expensive.


  • Basically everything is more powerful than the Steam Deck. The Steam Deck wasn’t really designed to be powerful; it’s more that it’s meant to be the “reference model” for handhelds: cheapest, weakest, yet also most mainstream. My understanding is that the Z2 should be substantially more powerful than the Steam Deck, and though it should also draw more power, it has a larger battery capacity to make up for that increased draw. Price, on the other hand… Well, nothing can even get close to the sort of price that a Steam Deck offers.


  • Oh, I’ve got such a good one. In high school, I was in Science Olympiad (basically a science club). I was always kind of a wishy-washy member, never really serious or particularly reliable. But one day, the club needed designs for a new shirt, and they decided to ask the members for designs that everyone would then vote on. I decided to submit a satire shirt.

    I obviously can’t share the full design for privacy reasons, but I went ahead and made it jingoistic/military themed. Fighter jets flying overhead, tanks rolling through. To make sure that people knew I was being totally serious about this clearly relevant shirt, I put a stick figure holding a science-looking flask in the corner of the shirt. And then, to make sure that everyone knew we were smart, I put the equation “3+3=6.” All text in Comic Sans, of course.

    Anyways, no one got the joke. My design got 1 pity vote. I don’t think anyone even believed me when I said that it was a joke, which out of everything was kind of the saddest part for me.