KubeRoot

  • 0 Posts
  • 400 Comments
Joined 2 years ago
Cake day: June 25th, 2023


  • Git exposes a lot of internals through odd commands, so I suspect you could manage synchronization by sending changes over email or something.
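
    That’s actually a supported workflow - a minimal sketch of it (branch names are placeholders):

    ```
    # Turn every commit not yet on origin/main into a mail-formatted patch file
    git format-patch origin/main

    # ...send those files by email however you like; the recipient applies them with:
    git am *.patch
    ```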

    Bonus fun fact: there’s a git bundle command that “dumps” the repository into a single file that can be interacted with as a remote. So if you’re ever working with a local repository and want to put it on a server over ssh or something like that, you can just create a bundle, scp it over, and clone from it on the server.
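
    A minimal sketch of that workflow (file, host, and directory names are placeholders):

    ```
    # Pack the whole repository (all branches and tags) into a single file
    git bundle create myrepo.bundle --all

    # Copy the bundle to the server
    scp myrepo.bundle user@server:~/

    # On the server: clone from the bundle as if it were a remote
    git clone myrepo.bundle myrepo
    ```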


  • Fundamentally, the repository you have on GitHub is the same thing as the repository you have on your computer when you clone it. Pulling and pushing are shorthands for synchronizing commits between the two repositories, but you could also synchronize them directly with somebody else who cloned the repository. As somebody mentioned, you can also just host the same repository on two servers, and push to both of them.
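
    One way to set that up (remote URLs are placeholders) is to give origin a second push URL, so a single git push updates both servers:

    ```
    # Both URLs must be added explicitly; afterwards "git push origin" pushes to both
    git remote set-url --add --push origin git@github.com:user/repo.git
    git remote set-url --add --push origin git@gitlab.com:user/repo.git
    ```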

    The issue is that git doesn’t include convenient features like issues, pull requests, CI, wikis, etc., and by extension those aren’t included in your local repository, so if GitHub takes them down, you don’t have a copy.

    An extra fun fact is that git can be considered a blockchain. It’s a distributed ledger of immutable commits, each one representing a change in state relative to the previous one. Everybody who clones a repository gets a copy of its entire history and fast-forwards through the changes to calculate the current state.
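
    You can see that chain structure directly - every commit records the hash(es) of its parent(s):

    ```
    # Print each commit's hash followed by its parent hashes
    git log --format='%H %P'
    ```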




  • KubeRoot to Comedy Heaven@lemmy.world · Caught by wife · 3 days ago

    Yeah, I have no idea what the best procedure would be, but I think the only things to worry about are foot fungus (no idea if there’s any risk, but your feet are very much touching the shower floor) and rinsing the soap if you drop it when showering? Like, you wash your ass in that shower, it’s not like it’s a clean clean environment anyways.





  • KubeRoot to Microblog Memes@lemmy.world · Notepad · 7 days ago

    Not if you don’t use Windows, or if you want a more modern-looking and less busy interface, or integration with what I consider the best git GUI. I used to use N++ long ago, but after trying ST I realized N++ just feels clunky.


  • KubeRoot to linuxmemes@lemmy.world · Freedom · 10 days ago (edited)

    Eh, I’ve previously fucked up my bootloader, and all you need to do to fix it is boot up a live image, mount your root partition, arch-chroot into it, then follow the normal steps to set the bootloader back up - it’s not scary if you know what you’re doing, just time-consuming.
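
    As a rough sketch for GRUB on a UEFI Arch system (device names are assumptions - adjust to your partition layout):

    ```
    # From the live image: mount the installed system
    mount /dev/sda2 /mnt          # root partition
    mount /dev/sda1 /mnt/boot     # EFI system partition

    # Change into the installed system
    arch-chroot /mnt

    # Reinstall the bootloader and regenerate its config
    grub-install --target=x86_64-efi --efi-directory=/boot --bootloader-id=GRUB
    grub-mkconfig -o /boot/grub/grub.cfg
    ```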



  • > Also, why would you want such a machine anyways?

    People seem to be assuming that… But no, it’s not that I want it, it’s that, as far as I can tell, there’s no going back. The first iterations of the technology are here, and it’s only going to progress. The whole thing might flop, our current models might turn out useless in the long run, but people will continue developing and improving the technology. It doesn’t matter what I want, somebody is gonna do it.

    I know neurons in neural networks aren’t like real neurons, don’t worry, though it’s also not literally just “holds a number from 0 to 1” - that’s oversimplifying a bit. They are inspired by actual neurons in that they have a lot of connections that are tweaked bit by bit to match patterns. No idea if we’ll need a more advanced fundamental model to build on soon, but so far they’re already doing incredible things.

    > That is the reason why I hate the term “AI”.

    I don’t quite share the hatred, but I agree. The meaning stretches all the way to NPC behavior in games. Not long ago, things like neural-network face and text recognition were exciting “AI”, but now that label has been dropped and the word has taken on new meanings.

    > Yeah… you know not every problem is compute-able right?

    Yup, but that applies to our brains the same as it does to computers. We can’t know whether a program will halt any more than a computer can - we just have good heuristics based on an understanding of the code. This isn’t a problem of computer design or fuzzy logic or something, it’s a universal mathematical uncomputability, so I don’t think it matters here.

    In this sense, anything that a human can think up could be reproduced by a computer, since if our brains can compute it, so can a program.

    > At that point we might as well be talking about Unicorns

    Sure, we absolutely could talk about unicorns, and could make unicorns, if we ignore the whole whimsical magical side they tend to have in stories 😛

    I don’t think anything I’m saying is far off in the realm of science fiction. I feel like we don’t need anything unrealistic, like new superconductors or amazing power supplies - just time to refine the hardware and software on par with current technology. It’s scary, but I do hope the law catches up before things progress too far or, frankly, that a major breakthrough doesn’t happen in my lifetime.

    Edit: Right, I also didn’t fit this into my reply - thanks for being civil. Some people go straight to mocking me for beliefs they invented for me, because I’m not on the bandwagon of “it’ll never happen”; it’s pretty depressing how the discourse is divided into complete extremes.


  • > Tell me you don’t understand how generative AI works.

    Current-generation generative AI maps patterns in images to tokens in text descriptions, creating a model that reproduces those patterns given different combinations of input tokens. I don’t know the finer details of how the actual models are structured… but it doesn’t really matter, because if human brains can create something, there’s nothing stopping a sufficiently advanced computer and program from recreating the same process.

    We’re not there, not by a long shot, but if we continue developing more computational power, it seems inevitable that we will reach that point one day.




  • > The tech you speak of, that will surpass humans, does not exist. You are making up a Sci-Fi fantasy and acting like it is real.

    The difference is, this isn’t a warp drive or a hologram relying on physical principles that straight up don’t exist. This is a matter of coding a good enough neuron simulation, running it on a powerful enough computer, with a brain scan we would somehow have to get - and I feel like the brain scan is the part that is furthest from reality.

    > You are advocating for the creation of synthetic slaves…

    That’s an unnecessary insult - I’m not advocating for that, I’m stating it’s theoretically possible according to our knowledge, and would be an example of a computer surpassing a human in art creation. Whether the simulation is a person with rights or not would be a hell of a discussion indeed.

    I do also want to clarify that I’m not claiming the current model architectures will scale to that, or that it will happen within my lifetime. It just seems ridiculous for people to claim that “AI will never be better than a human”, because that’s an absurd position on what is, to our current understanding, just a computation problem.

    And if humans, with our evolved fleshy brains that do all kinds of other things, can make art, it’s ridiculous to claim that a specially designed, powerful computation unit cannot surpass that.


  • As long as progress continues and humanity survives, computer-generated art will eventually outperform humans. It seems pretty clear: as far as science knows, you could simulate a full human consciousness and pull images out of it somehow - except running in parallel, never deteriorating, never tiring. It’s not a matter of whether “AI” can outperform humans, it’s a matter of whether humanity will survive to see it and how long it might take.


  • KubeRoot to Linux@lemmy.ml · *Permanently Deleted* · 14 days ago

    I’m not sure if this is what you mean, but I do want to clarify - the drivers in the repository are still proprietary drivers from Nvidia, just tested and packaged by the distribution maintainers, dkms is just some magic that lets them work with arbitrary kernels with minimal compilation. Unless you’re using nouveau, which I don’t think is ready for most uses.


  • KubeRoot to Linux@lemmy.ml · *Permanently Deleted* · 15 days ago

    I’d definitely recommend against using drivers downloaded from a website, on general principles.

    > custom kernels don’t work with the drivers from apt

    Check if there’s a dkms version - I know that’s how it’s set up on Arch: if you’re using a non-standard kernel, you install that kernel’s headers, and dkms builds just the module for your kernel.
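
    On Arch that looks roughly like this (the headers package here assumes the LTS kernel - substitute the one matching your kernel):

    ```
    # Kernel headers plus the dkms variant of the Nvidia driver;
    # dkms then rebuilds the module for every installed kernel on updates
    sudo pacman -S linux-lts-headers nvidia-dkms
    ```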