Does the A in 18A stand for ångström? Can they even produce anything below 10 nm?
My phone came with a USB-C-to-A adapter. Yeah, keyboards only need very little power.
Why Canada?
A Linux client is not even on the roadmap?
I was thinking of a wired one that you already own?
In this case deportation does not sound so bad …
Yes, I think you are right. And I think this is borderline mental illness if you can’t stop lashing out. As I understand it, she somehow thinks that by bashing trans women she is doing something good for women. Trans women are somehow taking away her womanhood, or something like that. I have read something like this several times from Rowling, but I have no clue how trans women could do that. But Rowling is obsessed with it, for whatever reason.
This is mental illness by now! Seriously, wtf? Why is this so important to her that she can’t stop talking about it? If I had some irrational hatred of trans women, I would not go on about it in public all the time.
Don’t we have more important problems than bashing people who are so unhappy with their bodies that they are willing to take hormones and let people operate on their genitals?
This is such a simple thought; everybody should be able to think it, right? But on the other hand, she is not the only one hating transgender women or men. I mean, it is not right to hate people for that. But if I hated trans people, I would just not invite them for dinner and would stop talking about them all the time.
It must be some form of mental illness; I have no other explanation.
LLMs are neural networks! Yes, they are trained on meaningful text to predict the following word, but they are still NNs. And after they are trained on human-generated text, they can also be further trained with other sources and in other ways. The question is how an interaction between LLMs should be evaluated. When does an LLM find one good word, or a series of them? I have not described this, and I am also not sure what a good way to evaluate that would be.
Anyway, I am sad now. I was looking forward to having some interesting discussions about LLMs. But all I get is downvotes and comments like yours that tell me I am an idiot without telling me why.
Maybe I did not articulate my thoughts well enough. But it feels like people want to misinterpret what I’m saying.
Yes, that is true. The last 10-20% are usually the hardest. I think LLMs only become slightly better with each generation at first. My prediction is that there will be another big step toward AGI when these models can learn from interacting with themselves. And this also might result in a potentially dangerous AGI.
Well, getting a concept of how physics works (balancing, in your example) only from being trained on (random?) still images is a lot to ask, imo. But these picture-generating NNs can produce “original” pictures. They can draw a spider riding a bike. It might not look very good, but it is no easy task. LLMs aren’t very smart compared to a human. But they have a huge amount of knowledge stored in them that they can access and also combine, to a degree.
Yes, well, today’s LLMs would not produce anything if they talked to each other. They can’t learn persistently from any interaction. But if they become able to in the future, that is where I think it will go in the direction of AGI.
Well, LLMs don’t learn from interactions at the moment. They are trained, and after that one can interact with them, but they don’t learn anymore. You can fine-tune the model with recorded interactions later, but they do not learn directly. So what I am saying is: if this changes and they keep learning from interactions, as we do, there will be a breakthrough. I don’t understand why you are saying that’s not how it works when I am clearly talking about how it might work in the future.
I also don’t understand why you get upvoted for this and I get downvoted just for posting my thoughts about LLMs. To be clear, it is totally fine to disagree with my thoughts, but why downvote them?
Well, our natural languages have developed over thousands of years. They are really good! We can use them to express ourselves, and we can use them to express the most complicated things humans are working on. Our natural languages are not holding us back! Or maybe the better take is: if a language is not sufficient, we expand it as necessary. We develop new specialized words and meanings for particular subjects. We developed math to express and work with the laws of nature in a very compact and efficient way.
Understanding and working with language is the key to AGI.
Yes, big NNs use a lot of power at the moment. A funny example: when DeepMind’s AlphaGo engine beat one of the best human players, the human mind was running on something like 40 W or so, while AlphaGo needed something like a thousand times that. And the human even won a game with his 40 W :)
And yes, you are right, AI systems learn very inefficiently compared to a human brain. They need a lot more data/examples to learn from. When the AlphaZero chess engine learned by playing against itself, it played tens of millions of chess matches in a few days, far more than a human could play in a lifetime.
Well, of course there is a lot of hype around it. And it is probably overhyped at the moment. But there will be a next breakthrough in AI/LLMs. I don’t know when, but I think it will be when AIs learn by interacting with other AIs.
Well, me as a human, yes! We all constantly have an inner dialogue that helps us solve problems. And LLMs could do this as well. In principle it is not so different from playing chess against yourself. As far as I know, these chess NNs play against older versions of themselves to learn. So they don’t have to play against an exact copy of themselves.
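To make the “play against frozen older copies of yourself” idea concrete, here is a toy sketch (entirely my own illustration, nothing like how AlphaZero actually trains): an agent in a trivial “higher number wins” game reinforces winning moves, and every so often a snapshot of its current policy is frozen and added to the opponent pool.

```python
import random

random.seed(0)

ACTIONS = list(range(10))  # toy game: both players pick a number, higher wins


class Agent:
    def __init__(self, weights=None):
        # Preference weights over actions; actions are sampled
        # with probability proportional to their weight.
        self.weights = list(weights) if weights is not None else [1.0] * 10

    def act(self):
        return random.choices(ACTIONS, weights=self.weights)[0]

    def snapshot(self):
        return Agent(self.weights)  # frozen copy of the current policy


learner = Agent()
pool = [learner.snapshot()]        # pool of older versions to play against

for step in range(5000):
    opponent = random.choice(pool)
    mine, theirs = learner.act(), opponent.act()
    if mine > theirs:              # reinforce moves that win...
        learner.weights[mine] += 0.1
    elif mine < theirs:            # ...and damp moves that lose
        learner.weights[mine] = max(0.01, learner.weights[mine] - 0.05)
    if (step + 1) % 500 == 0:      # periodically freeze a new snapshot
        pool.append(learner.snapshot())

best = max(ACTIONS, key=lambda i: learner.weights[i])
print(best)  # the move the learner now prefers
```

The learner never sees the rules spelled out; it only gets win/loss feedback against its own past selves, and its policy drifts toward the dominant move anyway.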
Some of the training of image generators is done by two different AIs. AI-1 learns to differentiate between generated and real images, and AI-2 tries to trick AI-1 by generating images that AI-1 can’t distinguish from real ones. They train each other! And the result is that AI-2 can create images that are very close to real images. All without any human interaction. But they do need real images as training data.
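That adversarial setup (a GAN) can be shown in miniature. This is my own toy construction, not any real image model: the “real data” is just numbers drawn from a Gaussian, the generator G(z) = a·z + c and the discriminator D(x) = sigmoid(w·x + b) each have only two parameters, and the two are trained against each other with hand-derived gradients.

```python
import numpy as np

rng = np.random.default_rng(0)


def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))


def real_batch(n):
    # "Real" data: samples from N(4, 0.5), standing in for real images.
    return rng.normal(4.0, 0.5, n)


a, c = 1.0, 0.0      # generator parameters: G(z) = a*z + c
w, b = 0.1, 0.0      # discriminator parameters: D(x) = sigmoid(w*x + b)
lr = 0.05

for step in range(4000):
    x = real_batch(32)
    z = rng.normal(0.0, 1.0, 32)
    fake = a * z + c

    # Discriminator step: push D(real) toward 1 and D(fake) toward 0.
    d_real = sigmoid(w * x + b)
    d_fake = sigmoid(w * fake + b)
    grad_w = np.mean(-(1 - d_real) * x + d_fake * fake)
    grad_b = np.mean(-(1 - d_real) + d_fake)
    w -= lr * grad_w
    b -= lr * grad_b

    # Generator step (non-saturating loss): push D(fake) toward 1.
    d_fake = sigmoid(w * fake + b)
    grad_out = -(1 - d_fake) * w   # dL/dG for L = -log D(G(z))
    a -= lr * np.mean(grad_out * z)
    c -= lr * np.mean(grad_out)

fake_mean = float(np.mean(a * rng.normal(0.0, 1.0, 10000) + c))
print(fake_mean)  # should have drifted from 0 toward the real mean of 4
```

The generator starts out producing samples centered at 0, and the only signal it ever gets is the discriminator’s opinion, yet its output drifts toward the real distribution, which is the whole trick behind GAN training.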
Abolish it completely and pay out the additional tax revenue as child benefit!
Why was that so funny? I mean, it was very funny, but I don’t know why ;)