Browsers barf at non-HTTPS now. What are we supposed to do about certificates?
If you mean properly signed certificates (as opposed to self-signed) you’ll need a domain name, and you’ll need your LAN DNS server to resolve a made-up subdomain like lan.domain.com. With that you can get a wildcard Let’s Encrypt certificate for *.lan.domain.com and all your https://whatever.lan.domain.com URLs will work normally in any browser (for as long as you’re on the LAN).
Right, the main point of my comment is that .internal is harder to use than it immediately sounds. I don’t even know how to install a new CA root into Android Firefox. Maybe there is a way to do it, but it is pretty limited compared to the desktop version.
You can’t install a root CA in Firefox for Android.
You have to install the cert in Android and set Firefox to use the Android trust store.
You have to go into Firefox Settings > About Firefox and tap the Firefox logo a few times. You then get a hidden menu where you can set Firefox to not use its internal trust store.
You then have to live with a permanent warning in Android’s quick settings that your traffic might be captured because of the root CA you installed.
It does work, but it sucks.
This is not a new problem; .internal is just a new gimmick, and people have been using .lan and whatnot for ages.
Certificates are a web-specific problem but there’s more to intranets than HTTPS. All devices on my network get a .lan name but not all of them run a web app.
You do not have to install a root CA if you use Let’s Encrypt; their root certificate is trusted by any system, and your requested wildcard certificate is trusted via the chain of trust.
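For what it’s worth, here is a minimal sketch of that chain-of-trust claim using only the Python standard library; whatever.lan.domain.com is a placeholder for one of your own LAN names, assumed to resolve to a private IP via your internal DNS. The default SSL context uses the system trust store, so the Let’s Encrypt wildcard cert verifies with no extra root CA installed.

```python
import socket
import ssl

# Hypothetical LAN name; your internal DNS resolves it to a private IP.
host = "whatever.lan.domain.com"

# The default context uses the OS trust store, which already contains
# the Let's Encrypt root, so nothing extra has to be installed.
ctx = ssl.create_default_context()

with socket.create_connection((host, 443), timeout=5) as sock:
    with ctx.wrap_socket(sock, server_hostname=host) as tls:
        cert = tls.getpeercert()  # only populated when verification succeeded
        print("verified subject:", dict(x[0] for x in cert["subject"]))
        print("expires:", cert["notAfter"])
```

If the chain or the hostname didn’t check out, wrap_socket would raise ssl.SSLCertVerificationError instead of printing anything.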
That’s if you have a regular domain instead of .internal, unless I’m mixing something up. The topic of the thread is .internal, as if it were something new. Using a regular domain and a public CA has always been possible.
They didn’t make this to be easy to use. They don’t give a shit about that. That isn’t their job in the slightest.
They reserved a TLD, that’s all.
You can use any TLD you want on your internal network and DNS, and you have always been able to do that. It would be stupid to use an already existing domain and TLD, but you absolutely can. This just changes things so that it’s not stupid to use .internal.
No one is saying it is their job.
Merely that using a TLD like .internal requires some consideration regarding SSL certificates.
But why are people even discussing that?
This is about an ICANN decision. TLS has nothing to do with that. Also, you don’t really need TLS for self-hosting. You can if you want, though.
Because people can discuss whatever they like?
If you don’t like it, just downvote it.
People can talk about whatever they want whenever they want. The discussion naturally went to the challenges of getting non-self-signed certificates for this new TLD. That’s all.
Nothing, this is not about that.
This change gives you the guarantee that .internal domains will never be registered officially, so you can use them without the risk of your stuff breaking should ICANN ever decide to make whatever TLD you’re using an official TLD.
That scenario has happened in the past, for example for users of FRITZ!Box routers, which use fritz.box. .box became available for purchase and someone bought fritz.box, which broke browser UIs. This could’ve even been used maliciously, but thankfully it wasn’t.
Either ignore it like I do, or add a self-signed cert to your trusted roots and use that for your services. It will work fine unless you’re letting external folks access your self-hosted stuff.
@solrize @thehatfox get a free wildcard cert for your domain and use it just like any other. Nothing new, nothing different. I have those running on LAN-only hosts behind a firewall and NAT with no port punching or UPnP or any ingress possible.
if you don’t want to run a private CA with automated cert distribution (also simple with Ansible or a few tens of LOC in shell or Python), then Let’s Encrypt is trivial and costs nothing – it still requires one to load the cert and key onto a server though, which is 2/3 of the work vs private CA cert management.
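To put a rough shape on the “few tens of LOC in Python” for the private CA option, here is a hedged sketch using the third-party cryptography package. The names (home-lab-root, nas.internal), key type, and lifetimes are placeholders, not a hardened setup; a real one would at least add key-usage extensions and keep the root key offline.

```python
import datetime

from cryptography import x509
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import ec
from cryptography.x509.oid import NameOID

now = datetime.datetime.now(datetime.timezone.utc)

# 1. Root CA: self-signed, long-lived, ideally kept offline/encrypted.
ca_key = ec.generate_private_key(ec.SECP256R1())
ca_name = x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, "home-lab-root")])
ca_cert = (
    x509.CertificateBuilder()
    .subject_name(ca_name)
    .issuer_name(ca_name)
    .public_key(ca_key.public_key())
    .serial_number(x509.random_serial_number())
    .not_valid_before(now)
    .not_valid_after(now + datetime.timedelta(days=3650))
    .add_extension(x509.BasicConstraints(ca=True, path_length=None), critical=True)
    .sign(ca_key, hashes.SHA256())
)

# 2. Leaf certificate for one internal host, signed by the root.
host = "nas.internal"
leaf_key = ec.generate_private_key(ec.SECP256R1())
leaf_cert = (
    x509.CertificateBuilder()
    .subject_name(x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, host)]))
    .issuer_name(ca_name)
    .public_key(leaf_key.public_key())
    .serial_number(x509.random_serial_number())
    .not_valid_before(now)
    .not_valid_after(now + datetime.timedelta(days=365))
    .add_extension(x509.SubjectAlternativeName([x509.DNSName(host)]), critical=False)
    .sign(ca_key, hashes.SHA256())
)

# 3. Write out the pieces: ca.pem goes into every device's trust store,
#    the leaf cert and key go to the server or reverse proxy.
with open("ca.pem", "wb") as f:
    f.write(ca_cert.public_bytes(serialization.Encoding.PEM))
with open(f"{host}.pem", "wb") as f:
    f.write(leaf_cert.public_bytes(serialization.Encoding.PEM))
with open(f"{host}.key", "wb") as f:
    f.write(leaf_key.private_bytes(
        serialization.Encoding.PEM,
        serialization.PrivateFormat.PKCS8,
        serialization.NoEncryption(),
    ))
```

Distributing ca.pem to every device is exactly the part the rest of this thread calls tedious, which is why the Let’s Encrypt route stays attractive despite the extra cert-and-key shuffling.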
Private CA is the only way for domains which cannot be resolved on the Internet
How do you propose to get LetsEncrypt to offer you a certificate for a domain name you do not and cannot control?
@JackbyDev Why would that be a question at all? Buy a domain name and take care of your DNS records.
that’s an odd way to say that you don’t own any domains. that’s step one, but does it even need to be said?
You cannot buy .internal domains. That’s my point.
Quite literally my first thought. Great, but I can’t issue certs against that.
One of the major reasons I have a domain name is so that I can issue certs that just work against any and all devices. For resources on my network. Home or work, same thing.
To folks recommending a private CA: that’s a quick way to some serious frustration, for some arguably good reasons. On some devices I could easily add a CA; others are annoying or downright bullshit; and yet others are pretty much impossible. Then there’s that last set that’s the most persnickety, guests, where it’d be downright rude!
Being able to issue public certs easily is great! I don’t use .local much because if it’s worth naming, it’s worth securing.
My Asus router is actually able to get a certificate and use DDNS which is really interesting.
Makes ya wonder what else it’s doing that for…
So you can access your router’s config page without blasting your password in plaintext or getting certificate warnings. It’s an optional feature.
Same thing we do with .local - “click here to proceed (unsafe)” :D
Set up my work’s network waay back on NT 4.0 as .local cuz I was learning and didn’t know any better; it has been that way ever since.
You can set up your own CA, sign certs and distribute the root to every one of your devices if you really wanted to.
Yeah I know about that, I’ve done it. It’s just a PITA to do it even slightly carefully.
That sounds like a bad idea; you would need your CA and your root certs to be completely air-gapped for it to be even remotely safe.
Why?
That’s a rather absolutist claim when you don’t know the org’s threat model.
For self-hosting at least, having your own CA is a pain in the ass when it comes to making sure everything is safe and that nobody except you has access to your CA root key.
I’m not saying it’s not doable, but it’s definitely a lot of work and potentially a big security risk if you’re not 100% certain of what you’re doing.
No worse than protecting your ssh key. Just keep it somewhere safe.
Just use only a VPN to access your services behind the reverse proxy, if you want to prevent unauthorised connections.
CA certificates are not there to prevent someone from accessing a site; they are there so that you can be sure that the server you are talking to really belongs to the domain you entered, and to establish a tunnel so that the requests (HTML, CSS, JavaScript, etc.) and responses are sent encrypted.
That’s the problem: if anyone somehow gets your root CA key, your encryption is pretty much gone and they can sign whatever they want with your CA.
It’s a lot of work to make sure it’s safe in a home setup.
You can just issue new certificates once per year, and otherwise keep your personal root CA encrypted. If someone is into your system to the point they can get the key as you use it, there are bigger things to worry about than them impersonating your own services to you.
And additionally, you can sign intermediate certificates, reducing the risk even more, since you can revoke them and issue new ones at any time.
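As a rough illustration of “keep your personal root CA encrypted” (same third-party cryptography package as the sketch above; filenames are placeholders), the root key can sit on disk passphrase-protected and only be decrypted for the occasional signing session:

```python
import getpass

from cryptography.hazmat.primitives import serialization
from cryptography.hazmat.primitives.asymmetric import ec

# One-time setup: generate the root key and store it encrypted at rest.
passphrase = getpass.getpass("CA key passphrase: ").encode()
ca_key = ec.generate_private_key(ec.SECP256R1())
with open("ca-key.pem", "wb") as f:
    f.write(ca_key.private_bytes(
        serialization.Encoding.PEM,
        serialization.PrivateFormat.PKCS8,
        serialization.BestAvailableEncryption(passphrase),
    ))

# Once a year (or whenever you re-issue certs): decrypt it just for signing.
with open("ca-key.pem", "rb") as f:
    ca_key = serialization.load_pem_private_key(f.read(), password=passphrase)
```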
What if I told you, businesses routinely do this to their own machines in order to make a deliberate MitM attack to log what their employees do?
In this case, it’d be a really targeted attack to break into their locally hosted server, steal the CA key, and also install a forced VPN/reroute in order to serve up MitM attacks or similar. And to what end? Maybe if you’re a billionaire, I’d suggest not doing this. Otherwise, I’d wonder why you (as in the average user) would be the target of someone that would need to spend a lot of time and money doing the reconnaissance needed to break in to do anything bad.
Ah, you mean they put the cert in a transparent proxy which logs all traffic? Neat idea, I should try it at home
They (the service that provides both web protection and logging) install their own root certificate, then create certs for sites on demand, and route web traffic through their own proxy, yes.
It’s why I don’t do anything personal at all on the work laptop. I know they have logs of everything everyone does.
I’m talking about home hosting and private keys. Not businesses with people whose full time job is to make sure everything runs fine.
I’m a nobody and I regularly have people/bots testing my router. I’m not monitoring my whole setup yet and if someone gets in I would probably not notice until it’s too late.
So hosting my own CA is a hassle and a security risk I’m not willing to put work into.
Yeah that’s your situation. Some people are fine with it
As opposed to what, the domain certificate? Which can’t be air-gapped because it needs to be used by services and reverse proxies.
The domain certificate is public and its key is private? That’s basically it: if anyone gets access to your key, they can sign with your name and generate certificates without your knowledge. That’s my opinion and the main reason why I wouldn’t have a self-hosted CA; maybe I’m wrong or misled, but it’s a lot of work to ensure everything is safe, only for a self-hosted setup.
I found options like .local and now .internal way too long for my private stuff. So I managed to get a two-letter domain from some obscure TLD and with Cloudflare as DNS I can use Caddy to get Let’s Encrypt certs for hosts that resolve to 10.0.0.0/8 IPs. Caddy has plugins for other DNS providers, if you don’t want to go with Cloudflare.
Might be an idea to not use any public A records and just use it for cert issuance, and stick with private resolvers for private use.
It’s a domain with hosts that all resolve to private IP addresses. I don’t care if someone manages to see hosts like vaultwarden, cloud, docs or photos through enumeration if they all resolve to 10.0.0.0/8 addresses. Setting up a private resolver and private PKI is just too much of a bother.
My set up is similar to this but I’m using wildcards.
So all my containers are on 10.0.0.0/8, and a public DNS server resolves *.sub.domain.com to 10.0.0.2, which is a reverse proxy for the containers.
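A quick sanity check in that spirit (hostnames are hypothetical, adjust to your own wildcard): confirm that the publicly resolvable names only ever hand out private addresses, so enumeration leaks nothing reachable from outside.

```python
import ipaddress
import socket

# Hypothetical hostnames behind the public wildcard record.
names = ["vaultwarden.sub.domain.com", "cloud.sub.domain.com", "photos.sub.domain.com"]

for name in names:
    ip = ipaddress.ip_address(socket.gethostbyname(name))
    status = "private, fine" if ip.is_private else "PUBLIC - check your DNS!"
    print(f"{name} -> {ip} ({status})")
```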
Maybe browsers could be configured to automatically accept the first certificate they see for a given .internal domain, and then raise a warning if it ever changes, probably with a special banner to teach the user what an .internal name means the first time they see one.
The main reason this will never happen is that the browser vendors make massive revenue and profit margins off of The Cloud and would really prefer that the core concept of a LAN just dies so you pay your rent to them.
Accept them