I’m already hosting Pi-hole, but I know there’s so much great stuff out there! I want to find some useful things that I can get my hands on. Thanks!
Edit: Thanks all! I’ve got a lil homelab setup going now with Pi-hole, Jellyfin, Paperless-ngx, Yacht and YT-DL. Going to be looking into it more tomorrow, this is so much fun!
Docker I can’t wrap my head around. I keep trying to spend a night and sit down and play around with it. But I hit a block, get distracted and never get anywhere.
Use ChatGPT to help you keep going; it’s very helpful.
edit: Thought I’d expand on this more. Treat ChatGPT like a fellow engineer who never gets annoyed at answering your questions, and will never tell you that you’re dumb (haha). Tell it what you’re trying to do, copy-paste your commands into it, copy-paste the error messages if you have any. Literally, inundate it with questions and info and it’ll help you understand what you’re doing and help you unblock yourself. It’s a great tool.
Don’t know why you got downvoted. ChatGPT has helped me wrap my head around programming/scripting too - in my case Jinja2 in Home Assistant.
It might not always be correct, but it helped me get started!
I think the thing that’s really stopping me from using it is that every time I get curious and go poking around to see what the fuss is about, I run into some sort of paywall, or maybe a long queue you need to join to get access, something like that. All I know is that you can’t just casually fire it up and take it for a spin.
Either I’m finding the wrong thing, or the people who already swear by it paid some fee or got an early-access code ages ago. It also doesn’t know when it’s lying, and it already got a lawyer in trouble for trying to let ChatGPT do his job: apparently it slapped together a brief, an argument before the court, that referenced a bunch of case law that didn’t actually exist.
No matter what, it’s not as casually accessible as people make it out to be, and I don’t know what’s up with that.
I’m assuming you’re referring to ChatGPT not being casually accessible. If you’re signed up on a free account, you get access to GPT-3.5 which is pretty decent. If you pay the $20 a month you get access to GPT-4 which is even better, and I prefer to use this - but the free model is also fine for learning podman/docker.
Sign up, if you gotta be on the waitlist, get on it. You can also use Google Bard or Microsoft’s Bing chat AI as well. The MS Bing one is GPT-4 backed. Either way, they will help you learn stuff. Don’t be discouraged, push through and embrace these awesome generative AI tools, they unlock superpowers for you :)
Try phind.com; it’s free for now, and uses GPT-3.5 unlimited or GPT-4 limited to 25 requests per… 4 hours, I think. I’ve never run out. It’s specialised for devs. So far its output beats the Bing AI and Copilot Chat that I’m also testing.
I had that at the start for a day or two, but then it stopped. I keep ChatGPT open in a tab on my phone, and I haven’t had any issues asking questions in months.
This has been my go-to lately. I have whole conversations with ChatGPT. I would love to meet whoever reviews the logs for ChatGPT and hear some of the crazy, sad, weird and awesome things they find.
To understand it you’ll need to know roughly what an OS is. Very roughly speaking an OS provides a program with a way to access files, connect to the internet and launch other programs.
What docker does is make something a bit like a ‘virtual’ OS with its own filesystem, network and task manager, and then start running programs in it (which then may launch other programs).
Since you’re not making a VM, which must simulate all of the hardware, this is a lot cheaper. However, since a docker container gets its own filesystem, network, etc., it can do whatever it wants without any other programs getting in the way.
Among other things docker containers make installation a lot easier since a program will only ever see its own files (unless you explicitly add your own files to the docker container). To a large extent you also don’t need to worry about installing any prerequisites, since those can just be put into the container.
Making a docker container is a bit (a lot) like installing a fresh OS, just putting the stuff you need in it and then copying the whole OS whenever you want to run the thing again. Except it’s been optimized such that it takes about as much effort as launching a program, as opposed to a VM, which needs dedicated resources and is generally slower than the machine that hosts it.
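If you want to see that isolation for yourself, here’s a quick experiment, assuming Docker is installed and you can pull the public alpine image:

    docker run -it --rm alpine sh   # start a throwaway container and drop into its shell
    ls /                            # a small, separate filesystem, not your host's
    ip addr                         # its own network interfaces
    exit                            # --rm deletes the container once you leave

Anything you changed in that filesystem disappears with the container, and your host never saw it.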
I think the more intuitive model (to me), instead of thinking of it as a lightweight virtual machine or a neatly packaged-up OS, is to think of it as a process shipped with an environment. That environment includes things like files and other executables (like apt), but in and of itself doesn’t constitute an OS. It doesn’t have its own filesystems, drivers, or anything like that. By default it doesn’t run an init system like systemd either, nor does it run any applications other than the process you execute in the environment.
That’s fair, but is that environment any different from just a virtual OS? I mean it doesn’t have its own filesystem and drivers etc, but that’s precisely because they’ve been made virtual.
In this context I’d say systemd is an application, not the OS, though the distinction gets iffy I know.
Because I associate an OS with more than just an environment. It often has several running apps, for instance, often a GUI or shell (which many containers don’t have), is concerned with some form of hardware (virtual or physical), and just… does more.
Containers, by contrast, are just a view into your filesystem, plus some isolation from the rest of the environment through concepts like cgroups. All the integrations with the container host are a lot simpler (and more accurate) to think of as just removing layers of isolation, rather than thinking of it as its own VM or OS. Capabilities just fit that model a lot better.
I agree the line is iffy, since many OSes leave out a few of the things above, like RTOSes for MCUs, but I just don’t think it’s worth thinking of a container as its own OS considering how different it is from a “normal” Linux-based OS or VM.
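A quick way to see the “just a process” view in action, assuming Docker on a Linux host and the public nginx image:

    docker run -d --name demo nginx   # start a container from the official nginx image
    docker top demo                   # only nginx processes inside - no init, no other services
    ps aux | grep nginx               # on the host they show up as ordinary processes, just namespaced off
    docker rm -f demo                 # clean up

(On Docker Desktop the containers live in a hidden VM, so that last ps won’t show them on macOS/Windows.)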
Does this mean Docker instances are large in size? I haven’t used it either but I’ve been meaning to get into it. If I can use stuff like nodemon in it, it’d be great.
The images can get big, but they’re fairly clever about it, so it’s manageable. Performance-wise they don’t take up more CPU and RAM than a regular application.
There’s an (unofficial) image running nodemon on Docker Hub that’s about 250 MB in size. The official Node.js image is about 300 MB (presumably they’ve preinstalled a bunch of stuff). You could start with the official image and install nodemon on it; that would probably be the most future-proof option (there’s no way of knowing whether the unofficial image keeps getting updates).
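If you go the official-image-plus-nodemon route, the Dockerfile is only a few lines. A rough sketch, assuming a Node app with index.js as its entry point listening on port 3000 (adjust the tag, paths and port to your project):

    # Dockerfile
    FROM node:20-alpine           # official Node image; pin whatever version you need
    WORKDIR /app
    RUN npm install -g nodemon    # the only extra on top of the official image
    COPY package*.json ./
    RUN npm install
    COPY . .
    CMD ["nodemon", "index.js"]

Build and run it, bind-mounting your source so nodemon sees file changes (the anonymous volume keeps the image’s node_modules from being shadowed):

    docker build -t my-node-dev .
    docker run -it --rm -p 3000:3000 -v "$PWD":/app -v /app/node_modules my-node-dev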
The important concepts aren’t that complicated.
Instead of nesting a whole computer (a VM), the operating system makes the program think it’s on its own dedicated computer (isolated filesystem space, CPU and memory shares). A Dockerfile is just a basic script that constructs one of these environments out of commands and files.
The real reason people get excited is that they can ship a Docker “image”. It’s a layered filesystem, which really just means there’s a system tracking who puts what files in what place, so it’s easier for them to just send you the whole setup than to try to document how you should set all that stuff up to run their software.
This is more “dummy-proof” than the pre-existing convention of just using a package manager to do this for you.
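A sketch of how small that basic script can be (the file and image names here are made up):

    # Dockerfile - every instruction adds a layer the image format keeps track of
    FROM python:3.12-slim
    COPY app.py /app/app.py
    CMD ["python", "/app/app.py"]

Then shipping it is just:

    docker build -t yourname/yourapp .   # turn the script into an image
    docker push yourname/yourapp         # anyone can now docker run yourname/yourapp (push assumes a registry account)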
Using compose makes it a hundred times easier to understand and interact with. Also using Linux.
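For example, something like the Jellyfin mentioned above comes down to a short compose file. A rough sketch - check the project’s docs for the exact image, ports and volumes:

    # docker-compose.yml
    services:
      jellyfin:
        image: jellyfin/jellyfin
        ports:
          - "8096:8096"            # web UI
        volumes:
          - ./config:/config       # settings survive restarts
          - ./media:/media         # your media library
        restart: unless-stopped

Then, from the same directory:

    docker compose up -d     # start it in the background
    docker compose logs -f   # follow what it's doing
    docker compose down      # stop it (your config/media folders stay)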
Instead of learning docker itself, look for a project you really like that offers a docker-based install and deploy that. It’ll usually take less than 20 lines of terminal commands. It’s a good start that gets you to feel the usefulness of docker, and the basics that you’ll need for most deploys.
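To give a feel for it, a typical deploy looks something like this - the image name and paths are placeholders, the project’s README gives you the real ones:

    # pull the image the README points at, then run it detached with a name,
    # a published web port, data kept on the host, and auto-restart
    docker pull someproject/someapp
    docker run -d --name someapp \
      -p 8080:8080 \
      -v /srv/someapp/data:/data \
      --restart unless-stopped \
      someproject/someapp
    docker logs -f someapp   # check it came up OK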
Honestly, Docker with a GUI just becomes super easy; I actually got a bit spoiled. I used to use Kitematic, which even had a browser, but that’s gone. I’m back to terminal only now, alas.
Docker is much easier than it seems: imagine a single app with all its dependencies, all the way down to the OS level, wrapped up in a virtual filesystem so it can’t see anything else. Only the kernel is shared.
So if “Awesome Webapp Jeroboam” needs a different version of Python than you have installed and an old version of ffmpeg for some utility, along with the Apache web server where you prefer nginx, no problem: all that mess gets wrapped up in a container and you don’t have to worry about it.
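In Dockerfile terms that scenario looks roughly like this (a sketch - “Jeroboam” is the joke app above, and the versions and start script are made up):

    FROM python:3.9-slim              # the Python this app wants, not the one on your host
    RUN apt-get update \
     && apt-get install -y --no-install-recommends ffmpeg apache2 \
     && rm -rf /var/lib/apt/lists/*   # its own ffmpeg and Apache, invisible outside the container
    WORKDIR /opt/jeroboam
    COPY . .
    RUN pip install -r requirements.txt
    CMD ["./start.sh"]                # whatever the app's start script happens to be

Your host’s nginx and Python never know any of that exists.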