I want to install Pi-hole so I can access my machines via DNS. Currently I have names for my machines in the /etc/hosts files on some of my machines, but that means I have to copy the configuration to each machine independently, which is not ideal.

I’ve seen that some popular options for a top-level domain in local environments are *.box or *.local.

I would like to use something more original and just wanted to know what you guys use to give me some ideas.
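
(For reference, once Pi-hole is in place, the copied /etc/hosts entries collapse into local DNS records on the Pi-hole box. A minimal sketch, assuming a Pi-hole v5-style install where local records live in /etc/pihole/custom.list in plain hosts-file format; the names and addresses below are made up.)

    # Add local A records to Pi-hole (hosts-file format: IP, then name).
    echo '192.168.1.10 nas.home.arpa'     | sudo tee -a /etc/pihole/custom.list
    echo '192.168.1.11 git.home.arpa'     | sudo tee -a /etc/pihole/custom.list
    echo '192.168.1.12 grafana.home.arpa' | sudo tee -a /etc/pihole/custom.list

    # Restart Pi-hole's DNS resolver (FTL) so the new records are served.
    pihole restartdns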

  • ohuf@alien.topB

    RFC 6762 (Appendix G) lists the TLDs you can use safely in a local-only context:

    *.intranet
    *.internal
    *.private
    *.corp
    *.home
    *.lan

    Be a selfhosting rebel, but stick to the RFCs!

      • Diligent_Ad_9060@alien.topB

        HTTPS is not a problem, but you’ll need an internal CA and to distribute its certificate to your hosts’ trust stores.
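
        As a rough sketch of the trust-store part, on a Debian/Ubuntu host it can be as simple as the following (this assumes you’ve already generated internal-ca.crt with whatever CA tooling you use, e.g. openssl or step-ca; the filename is illustrative):

          # Copy the internal CA certificate (PEM, .crt extension required)
          # into the local trust directory and rebuild the trust store.
          sudo cp internal-ca.crt /usr/local/share/ca-certificates/internal-ca.crt
          sudo update-ca-certificates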

  • ellipsoidalellipsoid@alien.topB

    “.home.arpa” for A records.

    I run my own CA and DNS, and can create vanity TLDs like a.git, a.webmail, b.sync, etc. for internal services. These are CNAMEs pointing to A records.
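
    A rough sketch of that CNAME-over-A idea in zone-file syntax (names and addresses are invented; the internal DNS server would be authoritative for home.arpa as well as each vanity zone such as “git.”):

      ; real A record in the home.arpa zone
      server1.home.arpa.   IN  A      192.168.1.20

      ; vanity name in the internal-only "git." zone, aliased to the A record
      a.git.               IN  CNAME  server1.home.arpa.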

  • DIYiT@alien.topB

    I own both mydomain.com and mydomain.me. I use *.me as my local domain and *.com for the real world.

  • Spare_Vermicelli@alien.topB

    Maybe not a direct answer for you, but I literally just bought 4 domains for 3 euros per year each (renews at the same price!) 5 minutes ago :D

    The catch: the name has to be nine digits under .xyz (see https://gen.xyz/1111b for details).

  • tech_medic_five@alien.topB

    lastname.systems

    I used to own lastname.cloud and foolishly let it expire. It’s one of my biggest regrets.

  • Delyzr@alien.topB

    I have a registered domain and my LAN domain is “int.registereddomain.com”. This way I can use Let’s Encrypt etc. for my internal hosts (*.int.registereddomain.com via DNS challenge). The DNS for the internal domain itself is not public; it’s just static records in Pi-hole.
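
    For anyone curious what the DNS-challenge issuance looks like, a minimal sketch with certbot, assuming the zone is hosted at Cloudflare and the certbot-dns-cloudflare plugin is installed (the domain and credentials path are placeholders):

      # Obtain a wildcard cert for the internal subdomain via DNS-01;
      # no internal host ever needs to be reachable from the internet.
      certbot certonly \
        --dns-cloudflare \
        --dns-cloudflare-credentials /etc/letsencrypt/cloudflare.ini \
        -d '*.int.registereddomain.com'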

    • Sir-Kerwin@alien.topB

      Can I ask why this is done over something like hosting your own certificate authority? I’m quite new to all this DNS stuff

      • liquoredonlife@alien.topB

        If you own your own domain, the lifecycle toolchain to request, renew, and deliver certs from a variety of certificate authorities (Let’s Encrypt is a popular one) makes it really easy. It also means not having to host an internal CA and, more importantly, not having to distribute its root cert to every client device that would need to trust it.

        I’ve used https://github.com/acmesh-official/acme.sh as a one-off for updating my Synology’s HTTPS certificate (two lines, one fetch and one deploy; it finishes in 20 seconds and can be cron’d to run monthly), and Caddy natively handles the entire lifecycle for me (I use Cloudflare as my domain registrar, which makes it both free and a snap to handle the TXT challenge requests).

        Certbot is another popular one.
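
        Roughly, those two acme.sh lines might look like this (a sketch assuming a Cloudflare-hosted zone, acme.sh’s dns_cf hook, and the built-in Synology DSM deploy hook; the domain is a placeholder and the API/DSM credentials are exported as environment variables beforehand):

          # Fetch: issue/renew a wildcard cert via the DNS-01 challenge.
          acme.sh --issue --dns dns_cf -d '*.int.example.com'

          # Deploy: push the issued cert into Synology DSM.
          acme.sh --deploy -d '*.int.example.com' --deploy-hook synology_dsm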

    • liquoredonlife@alien.topB

      I did something similar, though with a slight bifurcation:

      *.i.domain.tld -> the actual internal host/IP (internal DNS is AdGuard)

      *.domain.tld all resolves internally, via a DNS rewrite, to a keepalived VIP shared between a few hosts running Caddy that handle automatic wildcard cert renewals / SSL / reverse proxying.

      While I talk to things via *.domain.tld, a lot of my other services also talk to each other through this method, so having some degree of reverse-proxy HA was kind of necessary after introducing that dependency.
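
      For reference, the shared-VIP piece is plain VRRP; a minimal keepalived sketch for one node (the second node would use state BACKUP and a lower priority; the interface, password, and addresses are placeholders):

        # /etc/keepalived/keepalived.conf (MASTER node)
        vrrp_instance caddy_vip {
            state MASTER
            interface eth0
            virtual_router_id 51
            priority 150
            advert_int 1
            authentication {
                auth_type PASS
                auth_pass changeme
            }
            # the VIP that *.domain.tld resolves to
            virtual_ipaddress {
                192.168.1.250/24
            }
        }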

    • Tripanafenix@alien.topB

      Hmm, I thought that when I add tls internal to my reverse proxy rule for local domains, it doesn’t get Let’s Encrypt certs, but when I leave it out of the Caddyfile rule, the site becomes reachable from outside the local network. How do I use your recommendation? Right now I’m using a .home.lab domain locally, with a DNS name resolving for every single local subdomain (dashboard.home.lab, grafana.home.lab, etc.), and a single Caddy instance managing both the outside and the inside reverse proxy work.
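
      (For context, Let’s Encrypt only issues for publicly registered domains, so .home.lab names stay on tls internal; the approach above needs a real domain plus the DNS-01 challenge, in which case nothing has to be reachable from outside. A rough Caddyfile sketch, assuming a Caddy build that includes the Cloudflare DNS module; the domain, names, and addresses are placeholders.)

        # Wildcard site: the cert is obtained via the DNS-01 challenge, so
        # the hosts never need to be exposed to the internet.
        *.int.example.com {
            tls {
                dns cloudflare {env.CF_API_TOKEN}
            }

            @grafana host grafana.int.example.com
            handle @grafana {
                reverse_proxy 192.168.1.30:3000
            }

            handle {
                respond "Unknown host" 404
            }
        }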

    • NewDad907@alien.topB

      I want to do this, but I have no clue how to set it up on an Asustor AS6706T. I’ve got a bunch of Docker apps up and running, and I’d like to simplify stuff with subdomains and better SSL. The whole self-signed-certificate thing is just a whole project in itself to get working right.

    • Daniel15@alien.topB

      I use *.home.mydomain for publicly accessible IPs (IPv6 addresses, plus anything I’ve port-forwarded so it’s accessible externally) and *.int.mydomain for internal IPv4 addresses.

  • Asyx@alien.topB

    I own lastname.me and lastname.dev; everything public is on lastname.me and everything local is on lastname.dev. I don’t have a VPS anymore, so the .me domain is a bit useless and only relevant for email these days, but I’d have something like nc.lastname.me for my public Nextcloud instance and docs.lastname.dev for my Paperless instance that I don’t want on somebody else’s machine.

  • secopsx@alien.top

    I use a custom domain for everything: email, internal DNS, external (CF tunnels), and my public websites. I used to use AWS Route 53 for everything because of work, but moved to CF because it’s free and much easier to set up and manage.

    For local devices I use *.local.domaingoeshere.com (wildcard cert), issued via Cloudflare. In retrospect I should have used *.int.domain.com, as it would be less typing, but everything is categorized and bookmarked anyway.

    • maevian@alien.topB

      Why not use *.domain.com? If you own the domain, you’ll never have a conflict that way.

  • certuna@alien.topB

    .local is mDNS, and I’m using that; it saves me so much hassle with split-horizon issues etc.

    I also use global DNS for local servers (AAAA records on my own domain); again, this eliminates split-horizon issues. Life is too short to deal with the hassle of running your own DNS server.