• 0 Posts
  • 217 Comments
Joined 3 years ago
Cake day: June 9, 2023

  • I don’t understand why this works, but it does

    What was happening before was this: Git received your commits and ran the shell script, directing the script’s output stream back to Git so it could relay it over the connection and display it on your local terminal with remote: stuck in front. Backgrounding the npx command was working: the shell was quitting without waiting for npx to finish. However, Git wasn’t waiting for the shell script to finish; it was waiting for the output stream to close, and npx was still writing to it. You backgrounded the task but didn’t give npx a new place to write its output, so it kept using the same output stream as the shell.

    Running it via bash -c means “don’t run this under the current shell; start a new one and have it run just this one command, rather than waiting for a human to type commands into it.”

    The & inside the quotes is doing what you’d expect: telling the subshell to background the task. As before, the subshell will quit once the command is running, since you told it not to wait.

    The last bit is &> /dev/null, which tells your original, first shell to send the command’s output (both stdout and stderr) somewhere else. Specifically, to the special file /dev/null, which, like basically everything else in /dev/, isn’t really a file; it’s special kernel magic. Its particular trick is that writes to it succeed as normal, except all the data is just thrown away. Great for output like this that you don’t want to keep.

    So, the reason this works is that you’re redirecting npx’s output elsewhere: into a black hole where nothing is waiting on it. The subshell isn’t waiting for the command to finish, so it quits almost immediately, and then the top-level shell moves on once the subshell is done.

    I don’t think the subshell is necessary; &> /dev/null & alone should have the same effect. But spawning a useless shell for a split second happens all the time anyway, so it’s probably not worth worrying about too much.
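    To make the two variants concrete, here’s a minimal hook sketch. The real hook runs an npx build; `sleep 2` stands in for it here so the script is self-contained, and the comments mark which parts are placeholders.

    ```shell
    #!/bin/bash
    # Sketch of a post-receive hook; `sleep 2` stands in for the real
    # long-running npx build step.

    # Broken version: backgrounded, but still writing to the hook's
    # stdout, so Git keeps waiting for that stream to close:
    #   sleep 2 &

    # Fix 1: run it in a throwaway subshell, with all of that
    # subshell's output sent to /dev/null.
    bash -c 'sleep 2 &' &> /dev/null

    # Fix 2 (equivalent, no subshell): redirect, then background.
    sleep 2 &> /dev/null &

    # Either way, nothing holds the hook's output stream open, so
    # this line prints and the script exits immediately.
    echo "hook done"
    ```

    Note that `&>` is bash syntax (hence the bash shebang); in plain POSIX sh you’d write `> /dev/null 2>&1` instead.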


  • chaos@beehaw.org to 196@lemmy.blahaj.zone · BASIC rule
    11 days ago
    explaining the joke

    The top one is fairly normal; the bottom one is the same program, still in C, but redone to be as BASIC-like as possible, complete with line-number labels and goto in place of the normal control-flow structures like while and for loops.



  • In Haskell, that’s “unit” or the empty tuple. It’s basically an object with no contents, behavior, or particular meaning, useful for representing “nothing”. It’s a solid thing that is never a surprise, unlike undefined or other languages’ nulls, which are holes in the language or errors waiting to happen.

    You might argue that it’s a value and not a function, but Haskell doesn’t really differentiate the two anyway:

    value :: String
    value = "I'm always this string!"
    
    funkyFunc :: String -> String
    funkyFunc name = "Rock on, "++name++", rock on!"
    

    Is value a value, or is it a function that takes no arguments? There’s not really a difference; Haskell handles both the same way: by lazily replacing anything that matches the left-hand side of the equation with its right-hand side at runtime.
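    To make unit concrete, here’s a small sketch in the same style (the names are mine, invented for illustration, not from any library):

    ```haskell
    -- () ("unit") is the type with exactly one value, also written ().
    -- It carries no information, but it's a real, honest value.
    nothingMuch :: ()
    nothingMuch = ()

    -- Pattern matching on unit always succeeds; there's no null case
    -- to defend against.
    describe :: () -> String
    describe () = "just unit, no surprises"

    -- And as with `value` above, whether `nothingMuch` is "a value" or
    -- "a zero-argument function" is a distinction Haskell doesn't make.
    main :: IO ()
    main = putStrLn (describe nothingMuch)
    ```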


  • chaos@beehaw.org to Memes@lemmy.ml · Where is the lie?
    23 days ago

    The far right loves a strong man (and by definition there can be only one of those), prefers that “the natural order” be followed, and thinks the ends always justify the means. That keeps them pretty cohesive with the establishment right, who are making buckets of money under the system as it is and are okay with just about anything else as long as that doesn’t change. When they fight, it’s because the far right is trying something stupid enough that the establishment thinks it risks their money or power, or because the establishment is holding the far right back from fully implementing their “natural order” worldview. But there’s a lot of overlap where both can be happy, because the establishment really has no morals at all and is happy to use the far right to gain power if all it has to do is throw them some red meat every once in a while.

    The left’s a very different story. On the far left, people are very principled, to the point where compromise or partial wins feel hollow because the only real win would be the entire principle being adopted en masse. That makes it harder to work together: even groups with the same goals can get frustrated by how the other is doing it, or because one group wants to keep going while the other wants to stop sooner. And the establishment left has a fair amount in common with the establishment right; they find the right’s goals uncouth and mean, but they still fundamentally believe in capitalism and don’t want to upend the system. That leaves a lot less common ground and a lot more infighting overall.





  • Well, I don’t really need to for what I’m saying, which is that I don’t see any reason a computer is fundamentally incapable of doing whatever it is that humans do to consider ourselves conscious. Practically incapable, maybe, but not by the nature of what it is. Define it how you like, I don’t see why a computer couldn’t pull the same trick in the distant future.

    Personally, though, I define it as something that exhibits awareness of a separation between itself and the rest of the world, with a large fuzzy spectrum between “has it” and “doesn’t have it”. Rocks exhibit no awareness whatsoever; they’re just things with no independence. Plants interact with the world but don’t really seem to be doing much more than reacting, which demonstrates little awareness, if any. Animals do a lot more, and the argument that at least some are conscious is quite plausible. An LLM kinda, sorta, momentarily glances off of this sometimes, enough that you can at least ask the question. But outside of training it is literally just an unchanging matrix of numbers, and its only “view” into the world is what you choose to give it, so it can’t be aware of itself versus the world; at best it’s aware of itself versus its training set and/or context window, a tiny keyhole onto a photograph of something in the world. That puts it at a barely discernible blip on the scale, on the level of plants, and even that might be generous. An artificial consciousness would, in my opinion, need to be independent, self-modifying, and show some of the same traits and behaviors we see in animals up to and including ourselves with regard to interacting with the rest of the world as a self-contained being.


  • I think you’re reading some arguments I’m not making. The author seems to be of the opinion that even with infinite resources it’s outright impossible to have a computer that thinks or experiences consciousness, which is obviously a philosophical, not practical, argument, and I don’t agree. I’m not saying we should actually try it, or that it’s doable with our current or foreseeable resources.

    That being said, I am defining it. I’m saying that even if we assume consciousness is utterly impossible any other way (that it’s some incredibly unique combination of the things that make us human, and literally any deviation whatsoever makes it all fall apart), there still seems to be a possible path to a computer with consciousness via simulation of that particular and special process. That’s the thing you don’t think I can define: a physics simulation of sufficient fidelity to reproduce a process we already know demonstrates consciousness. “There aren’t enough resources in the universe” would block that path, sure, but that’s not very interesting; lots of other things could block it, since it’s an insane path to an incredibly difficult goal. But saying “artificial consciousness is impossible, period” means something is blocking it, and the idea that it’s some law of physics that is both crucial to consciousness and can’t be simulated is interesting. I’m struggling to imagine how that would be possible, and if it’s a failure of my imagination I’d like to know. The universe doesn’t have to be computable or deterministic for us to make a simulation that imitates it so well that observations of the real and simulated physics yield indistinguishable results, and at that point I don’t see anything inherently preventing consciousness.


  • (I’m going to say “you” in this response even though you’re stating some of these as arguments from the author and not yourself, so feel free to take this as a response to the author and not you personally if you’re playing devil’s advocate and don’t actually think some of these things.)

    You’re missing the argument: even if you can simulate the process of digestion perfectly, no actual digestion takes place in the real world.

    But it does take place in the real world. Where do you think the computers are going to be? Computers can and do exist in and interact with the real world, they always have, so that box is already checked. You can imagine the computations as happening in a sort of mathematical void outside of the universe, but that’s mostly only useful for reasoning about a system. After you do all that, you move electrons around in a box and see the effects with your own human senses.

    The main argument from the author is that trying to divorce intelligence from biological imperatives can be very foolish, which is why they highlight that even a cat is smarter than an LLM.

    Well, yeah, current LLMs are tiny and stupid. Something bigger, and probably not an LLM at all, might not be.

    Nothing actually guarantees that the laws of physics are computable, and nothing guarantees that our best model actually fits reality (aside from being a very good approximation). Even numerically solving the Hamiltonians from quantum mechanics is extremely difficult in practice.

    It doesn’t have to actually fit reality perfectly, and it doesn’t have to be able to predict reality like a grand unified theory would. It just needs to behave similarly enough to produce the same effects that brains do. It hasn’t been shown to be possible, but there’s also no reason to think we can never get close enough to reproduce it.

    Even if you (or anyone) can’t design a statistical test that can detect the difference of a sequence of heads or tails, doesn’t mean one doesn’t exist.

    Yes it does. If they’re indistinguishable, there is no difference.

    Importantly you are also only restricting yourself to the heads or tails sequence, ignoring the coin moving the air, pulling on the planet, and plopping back down in a hand. I challenge you to actually write a program that can achieve these things.

    I don’t have any experience writing physics simulators myself, but they definitely exist. Even as a toy example, the iOS app Dice by PCalc does its die rolls by simulating a tossed die in 3D space instead of a random number generator. (Naturally, the parameters of the throw are generated, the simulation is just for fun, but again, it’s a distinction without a difference. If the results have the same properties, the mechanism doesn’t matter.) If I give you a billion random numbers, do you think you could tell if I used the app or a real die? Even if you could, would using one versus the other be the difference between a physics simulation being accurate or inaccurate enough to produce consciousness?

    certainly intractable with current physics models/technology.

    Of course. This is addressing an argument made by the post that computers are inherently incapable of intelligence or consciousness, even assuming sufficient computation power, storage space, and knowledge of physics and neurology. And I don’t even think that you need to simulate a brain to produce mechanical consciousness, I think there would be other, more efficient means well before we get to that point, but sufficiently detailed simulation is something we have no reason to think is impossible.

    Human intelligence has a lot of externalities and cannot be reduced to pure “functional objects”.

    Why not? And even if so, what’s stopping you from bringing in the externalities as well?

    If it’s just about input/output, you could be fooled by a tape recorder and a simple filing system, but I think you’ll agree those aren’t intelligent.

    What are the rules of the filing system? If they’re complex enough, and executed sufficiently quickly that I can converse with it in my lifetime, let me be the judge of whether I think it’s intelligent.


  • One needn’t go as far as souls anyway. Jefferson’s hypothesis—that there is some electrochemical basis to thought—is sufficient to solve the problem. Were it true, the reason computers seem fundamentally blocked from progress on the Turing Test would amount to the fact that they are wholly mechanical objects, while “thought” is as much a biological function as “digestion” or “copulation.”

    Even if true, why couldn’t the electrochemical processes be simulated too? I don’t think it’s necessary to strictly and completely reproduce a biological brain to produce thought in a computer, but even if it is, it’s “just” a matter of scale. If you can increase the fidelity of the simulation with effectively infinite computing power, what would it be missing? It’d have to be something that can’t be predicted, something whose unpredictability can’t even be described with an equation (I don’t know what any coin flip will turn up as, but I do know how to write a program that produces results indistinguishable from a real coin’s for a simulation). So this missing ingredient would be changing all the time and following no rules whatsoever, but somehow also so precise that it’s the only thing that makes consciousness work, and a mechanical version of that same “random crap that can’t be predicted” wouldn’t be good enough?



  • I got the .net and .org of my last name, and offered $50 to the owner of the .com as he wasn’t doing anything with it. Kind of a lowball, admittedly, but I would’ve gone up to a hundred or two. Instead, he told me it was worth thousands, which, lol, but then he didn’t renew it, which I only found out because a random third person reached out to me as the owner of the .net offering me the .com. Turns out they hadn’t actually bought it yet, though, so instead I scooped it up and now I’ve got the trifecta!






  • chaos@beehaw.org to Memes@lemmy.ml · Burgerland would never lie /s
    2 months ago

    I’d believe that people are living happy, fulfilling lives there, sure, people usually find a way to do that regardless of their situation. But I’m pretty sure it’s not just propaganda that the same damn family has been in charge for the better part of a century, and that alone is enough for me to conclude that it is a fundamentally broken system that, even if it somehow isn’t as repressive and evil as it’s portrayed, will get there eventually.