niva

  • 2 Posts
  • 40 Comments
Joined 2 years ago
Cake day: July 7th, 2023

  • Yes, I think you are right. And I think this is borderline a mental illness if you can’t stop lashing out. As I understand it, she somehow thinks that by bashing trans women she is doing something good for women, as if trans women were taking away her womanhood or something like that. I have read something like this from Rowling several times, but I have no clue how trans women could do that. Rowling is obsessed with it, for whatever reason.


  • This is mental illness by now! Seriously, wtf? Why is this so important to her that she can’t stop talking about it? If I had some irrational hatred for trans women, I would not go on about it in public all the time.

    Don’t we have more important problems than bashing people who are so unhappy with their bodies that they are willing to take hormones and have surgery on their genitals?

    This is such a simple thought that everybody should be able to think it, right? Then again, she is not the only one who hates transgender women or men. It is not right to hate people for that. But if I hated trans people, I would just not invite them to dinner and would stop talking about them all the time.

    It must be some form of mental illness; I have no other explanation.


  • LLMs are neural networks! Yes, they are trained on meaningful text to predict the following word, but they are still NNs. And after they are trained on human-generated text, they can be further trained with other sources and in other ways. The question is how an interaction between LLMs should be evaluated: when does an LLM find one good word, or a series of them? I have not described this, and I am also not sure what would be a good way to evaluate it.

    Anyway, I am sad now. I was looking forward to having some interesting discussions about LLMs. But all I get is downvotes and comments like yours that tell me I am an idiot without telling me why.

    Maybe I did not articulate my thoughts well enough. But it feels like people want to misinterpret what I’m saying.
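
    The “trained to predict the following word” idea can be sketched with a toy bigram counter, a crude stand-in for what the neural network actually learns (the tiny corpus below is made up purely for illustration):

```python
from collections import defaultdict, Counter

# Made-up toy corpus; a real LLM trains on vastly more text and uses a
# neural network, but the objective is the same: given the words so far,
# pick a likely next word.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count which word follows which (a bigram table).
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict_next(word):
    """Return the most frequent follower of `word` in the corpus."""
    return follows[word].most_common(1)[0][0]

print(predict_next("the"))  # "cat": it follows "the" twice, more than any other word
```

    A real model replaces the count table with learned weights and conditions on the whole preceding context rather than one word, but this is the prediction task in miniature.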



  • Well, getting a concept of how physics works (balancing, in your example) only from being trained on (random?) still images is a lot to ask, imo. But these picture-generating NNs can produce “original” pictures. They can draw a spider riding a bike. It might not look very good, but it is no easy task. LLMs aren’t very smart compared to a human. But they have a huge amount of knowledge stored in them that they can access and also combine, to a degree.

    Yes, today’s LLMs would not produce anything if they talked to each other. They can’t learn persistently from any interaction. But if they become able to in the future, that is where I think it will go in the direction of AGI.