Apparently there are several narratives regarding AI girlfriends:
- Incels using AI girlfriends because they can do whatever they desire with them
- Forums observing incel spaces agreeing that incels should use AI girlfriends so they leave real women alone
- The general public voicing concerns about AI girlfriends because users might be negatively impacted by them
- Incels perceiving this as a revenge fantasy: “women are jealous that they’re dating AI instead of them”
- Forums observing incel spaces unsure whether the opposition to AI girlfriends even exists, given their own previous agreement
I think this is an example of miscommunication, and of how different groups form different opinions depending on what they’ve seen online. Perhaps the incel-observing forums believe that many incels have passed the point of no return, so AI girlfriends would help them, while the general public judges the dangers of AI girlfriends by their impact on a broader demographic, hence the broad disapproval.
For what it’s worth, that’s an interesting way to look at it.
I don’t think you grasped how exponential growth works, or its opposite: logarithmic growth. Logarithmic means it grows fast at first, then slower and slower. Double the computing power and at first you get a big return… quadruple it and you get even more… but the gains keep shrinking. At some point you’re in your example’s situation: you connect 4 really big supercomputers and get a measly 1% performance gain over one supercomputer, and then the next 0.5% costs trillions of dollars. That’d be logarithmic growth. We’re not sure where on the curve we currently are. We’ve certainly seen fast growth in the last months.
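The shrinking returns can be sketched with a toy model. Purely for illustration, assume performance scales as the base-2 logarithm of compute (the real curve, if it is log-like at all, is unknown):

```python
import math

def performance(compute):
    """Toy model: performance grows with the log of compute (illustrative only)."""
    return math.log2(compute)

# Each doubling of compute adds the same *fixed* amount of performance,
# so the percentage gain over the previous level keeps shrinking:
for exp in [2, 3, 10, 11]:
    c = 2 ** exp
    gain = (performance(c) - performance(c // 2)) / performance(c // 2) * 100
    print(f"{c}x compute: +{gain:.1f}% over {c // 2}x")
# 4x compute: +100.0% over 2x
# 8x compute: +50.0% over 4x
# 1024x compute: +11.1% over 512x
# 2048x compute: +10.0% over 1024x
```

Any log-like curve shows this pattern: each doubling buys proportionally less than the last one.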
And scientists don’t really do forecasts. They make hypotheses, test them, and justify them experimentally. So no, it’s not the future being guessed at. They used a clever method to measure the performance of a technological system, and we can see those real-world measurements in their paper. Why do you say the top researchers in the world aren’t “well-enough informed” individuals?
https://en.wikipedia.org/wiki/Scientific_method
No. Science isn’t done by majority vote. It’s the objective facts that matter. And you don’t pick experts or perspectives; that’s not scientific. It’s about objective truth, and a method to find it.
We’re now confusing science and futurology.
And I think scientists use the term “predict” and not “forecast”. There is a profound difference between a futurologist forecasting the future, and science developing a model and then extrapolating. The Scientific American article The Truth about Scientific Models you linked sums it up pretty well: “They don’t necessarily try to predict what will happen—but they can help us understand possible futures”. And: “What went wrong? Predictions are the wrong argument.”
And I’d like to point out that article is written by one of my favorite scientists and science communicators, Sabine Hossenfelder. She also has a very good YouTube channel.
So yes: DNA, quantum brains, Moore’s law, other people claiming things… none of that changes any facts.
You still misinterpret what science is about. We’ve known that human language is subjective for centuries already. That’s why we invented an additional, objective language that’s concerned with logic and truth. It’s mathematics. And that’s also why natural science relies so heavily on maths.
And no sound scientist ever claimed that string theory is true. It was a candidate for a theory to explain everything. But it’s never been proven.
And which one is it, do you question objective reality? If so I’m automatically right, because that’s what I subjectively believe.
I think at this point you two are just arguing materialism vs. idealism, two opposing philosophical approaches to science. Quite off-topic for AI companionship, if you ask me. Then again, both also have their own interpretation of AI companions. Materialism would argue that a human is a machine, similar to predictive text but more complex, and also that AI chatbots aren’t real. Whereas in idealism, AI personas are real: your AI girlfriend is your girlfriend, AI chatbots are alive, etc. Of course, that’s an oversimplification, but that’s the gist of where materialism vs. idealism lies.
Hmmh. Thanks. Yeah I think we got a bit off track, here… 😉
I kinda dislike when arguments end in “is there objective reality”. That’s kind of a last resort, since it removes any common basis to converse on, at least when talking about actual things or facts.