• ☆ Yσɠƚԋσʂ ☆@lemmy.mlOP
        10 months ago

        Yeah, it’s a complete fantasy. It literally takes thousands of people on Earth to keep a small crew alive on the ISS. We’re nowhere close to being able to build self-sufficient colonies on another planet.

          • ☆ Yσɠƚԋσʂ ☆@lemmy.mlOP
            10 months ago

            Oh, I know how fancy Boston Dynamics robots are, but you gotta remember that a lot of that is scripted. Boston Dynamics figured out how to make a neural net that can produce really fluid movements and keep balance, but somebody still has to control the robot and tell it what to do. You also need to repair the robots, do maintenance, etc. Until we have AGI, humans are still going to be needed to do a lot of work.

            • Zerush@lemmy.ml
              10 months ago

              True, but this won’t be the situation for much longer; progress in AI is currently exponential, with AI already being developed by AI. By the time colonies are established on Mars, you can be sure there will be sufficiently developed bots, with corresponding modular maintenance systems, able to function for many years.

              • ☆ Yσɠƚԋσʂ ☆@lemmy.mlOP
                10 months ago

                We don’t really know where the plateau for current AI techniques is. A lot of what we see looks impressive, but it’s very superficial in practice. Pretty much all AI today boils down to feeding huge volumes of data into a neural network that ends up creating a compressed representation of that data, and then making stochastic predictions based on the resulting model. This is great for things like text or image generation, but it simply doesn’t work for applications where a specific correct result is needed. What’s worse, using such systems to control things in the physical world is incredibly dangerous, as we’re seeing with self-driving cars.
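
                To make the “compressed representation plus stochastic prediction” point concrete, here’s a deliberately tiny sketch. It uses a bigram counter rather than a neural network, and the corpus is made up for illustration, but the principle is the same: training compresses the data into statistics, and generation is just sampling from them, with no notion of a single correct answer.

                ```python
                # Toy sketch: "train" by compressing a corpus into next-word
                # counts, then "generate" by sampling stochastically from them.
                # A bigram counter stands in for the neural network here; the
                # corpus is invented for illustration.
                import random
                from collections import defaultdict, Counter

                corpus = "the cat sat on the mat the cat ran".split()

                # Training: compress the corpus into per-word next-word counts.
                model = defaultdict(Counter)
                for cur, nxt in zip(corpus, corpus[1:]):
                    model[cur][nxt] += 1

                def predict(word):
                    """Sample a next word in proportion to observed counts."""
                    words, weights = zip(*model[word].items())
                    return random.choices(words, weights=weights)[0]

                print(predict("the"))  # stochastically "cat" or "mat"
                ```

                Each run can produce a different continuation, which is exactly why this kind of model is a poor fit where one specific correct output is required.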

                Since the neural net is simply comparing numbers to make decisions, it doesn’t have any understanding, in a human sense, of what it’s actually doing. It can’t explain the reasoning behind its decisions to a human, or even be guaranteed to follow human instructions. And it’s not aware of its own limitations.

                In order to make an AI that can replace a human decision maker, it would need an internal representation of the physical world similar to our own. Then we would have to teach it language within the context of that world. That’s how we could build an AI that can be said to understand things, and with which we share enough context to communicate meaningfully. People are experimenting with this, but it’s still in very early stages, and it’s not clear that the techniques used for LLMs will work well for this approach.

                I’d caution you to be highly skeptical of the AI claims we’re seeing, because most of them are made by people who have very little understanding of how this stuff actually works, and whose job is to sell the tech to the public. Pretty much none of the actual experts in the field share this optimism.

                Of course, nobody knows what the future brings, and we might see some amazing breakthroughs in the coming years. However, given what we know right now, there’s little reason to expect this sort of exponential growth to continue for long. It’s also worth noting that we already went through a similar wave of hype back in the 80s, when people started getting really impressive results with neural nets and symbolic logic, but scaling those turned out to be much harder than anybody anticipated.

                • Zerush@lemmy.ml
                  10 months ago

                  True, but as I said, the current state of AI has nothing to do with what we will have in the years of a Mars colony (2030-40?). We are not talking about bots that make their own decisions (who knows), but about bots with enough intelligence to serve as assistants to the rich inhabitants, and these already exist today (look at Japanese robotics, where assistant robots are already used in many fields, with enough AI to accomplish the assigned task). The bots and AI that we will have in 10-15 years will in any case be nothing like what we currently have; that much is certain.

                  • ☆ Yσɠƚԋσʂ ☆@lemmy.mlOP
                    10 months ago

                    Right, but how much support do these bots need behind the scenes? Somebody has to do maintenance on them, somebody has to decide whether they’re functioning properly, whether they have problems that need to be addressed, etc. Hence why I think there will still be a need for workers; it’s just that the nature of the work will shift to making sure the robots are operating smoothly. You’d probably need a relatively small workforce, but I don’t think you could eliminate it entirely by 2040.