I could research this on my own, but was interested in hearing from the community.

Software tends to fall into categories based on who has control, how it is accessed, and who owns the data.

For instance, a FOSS project might host encrypted user data for free, and the user easily controls who accesses it, but if the server or service goes down, users lose access to everything. Or a user keeps their own offline files, which they control 100%, but sharing them is more cumbersome.

Where does git fall on this spectrum? It seems like a mix: authoritative copies may be offline at times before merging, when the work returns to the hosted version. It's hosted, but it can be self-hosted, and multiple copies of the code can be offline as well. Does it rely on a central hosting source, and on a company willing to support the software?

I've never contributed to a project with version control before, though I've worked in a few places that used JIRA or git. I'm interested in how it works, and I'm just curious to read a Lemmy discussion while it's raining where I am.

(As I prepare to press SUBMIT it occurs to me this is a FOSS question more than a Linux one. If this is a stupid post for this /r/, please report/remove or ask me to and I will.)

  • rufus · 1 year ago

    Well, the bug tracker and other extra features are not inside the git repository, so those would get lost. But each ‘git clone’ is a complete copy of the (source code) repository, including the entire history of changes: the commit messages, dates, and individual diffs. That's stored on every single computer that cloned the repository, so you have a copy of everything locally. It might be out of date if you haven't pulled the latest changes, but apart from that it's the same data that GitHub stores. You could just make it available somewhere else and continue.
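
    A minimal sketch of what that looks like in practice, using everyday git commands. The URLs below (github.com/example/project, git.example.org) and the branch name ‘main’ are just placeholders, not anything from a real project:

        # Cloning copies the full history to your machine, not just the latest files.
        git clone https://github.com/example/project.git
        cd project

        # Everything here works completely offline:
        git log --oneline          # browse the full commit history
        git show HEAD~3            # inspect any past change
        git commit -am "local fix" # record new work locally

        # If the original host disappears, point the clone at a new one and push:
        git remote set-url origin https://git.example.org/project.git
        git push origin main

    So the central host is a convenience (and the home of the issues, pull requests, wiki, etc.), but the code and its history don't depend on it.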