Headscale fails to activate clients with postgresql backend #764
Comments
Note for when this is tackled,
I seem to have encountered this problem: the client cannot join headscale with an authkey, and tailscale status keeps showing Logged out.
@QZAiXH
This issue is stale because it has been open for 180 days with no activity.
This issue was closed because it has been inactive for 14 days since being marked as stale.
I'm also encountering this issue. Client status:
Even though I'm connected and have an IP. I'm using a reusable key (for a router device).
I also encountered this problem.
Like the reporter, it works fine with sqlite. I was trying to move to postgres for HA, but encountered this issue and went back to sqlite.
Same issue with an OpenWrt router as a tailscale client. Can register the client with SQLite.
Hate to leave a "bump" comment, but FYI this issue occurs on Postgres 15 as well.
Can confirm this happens on Postgres 16 as well. I have collected logs from headscale while trying to join with a reusable preauth key (log level debug). The gist also contains the client status JSON output and the server node info JSON output after the registration; I redacted/obfuscated some data. The client-side interaction looks like this:
$ sudo tailscale up --reset --login-server https://headscale.example.com --timeout 20s --authkey xyz123
timeout waiting for Tailscale service to enter a Running state; check health with "tailscale status"
$ sudo tailscale status
Logged out.
Log in at: https://headscale.example.com/register/nodekey:abc123
Can confirm that switching to SQLite resolves the issue. Perhaps it is a collation issue, where some comparison returns different results depending on the engine? Happy to do more testing.
I always wondered why authkeys were not working on my headscale installation until I found this issue. For now, I authenticate with OpenID, move those nodes to a fake user that cannot log in, and set the expiration date in the database to a high value to avoid expiration. For me, that's the only way to avoid expiration on server/subnet gateways.
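For reference, a rough sketch of that database part of the workaround, assuming a recent schema where nodes live in a nodes table with an expiry column (older releases used a machines table); the database name, table, column, and node id are all assumptions, so check your own schema before running anything:
$ sudo -u postgres psql headscale -c "UPDATE nodes SET expiry = '9999-01-01 00:00:00+00' WHERE id = 42;"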
This is still an issue, two years later.
As mentioned in #2087, it takes a lot more effort than initially anticipated to support multiple database engines, and Postgres is not really a priority, as the benefits to scaling something like headscale are marginal. That does not mean we will never attempt to resolve it; it just means that we have a bunch of other things we consider more important to bring headscale forward. People often create issues or bring up that you need Postgres to scale headscale beyond X nodes; while it is currently true that you might be able to have 10-20% more nodes with the current code, the main bottlenecks are in the headscale code and not dependent on the database. While we will work hard not to break or regress postgresql, I would consider our support for it "best effort", and if you're looking to run headscale in a more serious manner, I would choose SQLite.
@kradalby Everything you said makes a lot of sense, and I do not intend to argue any of your points, only to add another perspective. My interest in using a db other than sqlite is not for "scaling" as much as fault tolerance. Consider a setup where HS is running in a single VM. If that VM is destroyed, the tailnet will suffer while it's recreated. With a version-controlled policy file and good monitoring coupled with a CI pipeline, this issue can be resolved in minutes. Compare that scenario to a setup with two VMs running headscale, a primary and a backup, both connected to a managed database with its own redundancy. If the primary is destroyed, the IP address can be swapped to the backup VM, just one of several options for redirecting traffic in less time.
@alexfornuto That's fair; I understand that people have different solutions for recovery and HA, and solutions they are more familiar with. That said, for all the things mentioned, SQLite has excellent backup/streaming/cold-copy solutions like litestream, which I use with my headscale(s). We are not removing Postgres support, but we are likely not investing in it. I think a sensible way to look at the "investing" or optimisation part is: if we find a change that will benefit SQLite's performance, we will implement it and sacrifice Postgres performance rather than implement two solutions. As a side note, we have also started to see an increase in special cases for migrating both databases, which is also eating into our dev time.
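For anyone weighing the litestream route, the basic usage is a single command that continuously replicates the SQLite file to object storage; a minimal sketch, assuming the default headscale database path and a placeholder S3 bucket:
$ litestream replicate /var/lib/headscale/db.sqlite s3://my-bucket/headscale
In practice you would run this as a service (or drive it from a litestream.yml config) alongside headscale, and use litestream restore to pull the database back before starting a replacement instance.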
I agree with the sentiments and statements here, but would like to highlight that using postgresql is not an option at all due to this issue. I personally tried to use psql because I already had it set up, but using sqlite instead was fine. The problem is that the documentation states that postgresql should work, and so I wasted a good amount of time trying to figure out why it did not before giving up on it. At the very least, it may be worth amending the documentation to state that psql support is best effort and not as well tested as sqlite.
I updated the config with some notes in #2091, but I agree, that is fair. I will try to assess this issue next week and evaluate whether the work will result in a fix or in documentation of known limitations. I know people out there are running Postgres, so it is strange that not everyone runs into this; maybe they don't use preauthkeys.
I'll chime in as another postgres user: I understand that there are great options to back up and manage SQLite, but anyone running on the major cloud providers (AWS, GCP, Azure, etc.) has a managed DB solution that speaks the postgresql protocol, so it's dramatically easier to set up a database with proper backups for anything that can use it. Deploying headscale as a stateless container, with external state in a DB, is a really easy way to manage it, and it would be a shame to lose that. I understand that it's an increased maintenance burden, and I'm happy to help with testing and fixes for postgres if it helps alleviate the problem a little.
This is my situation exactly. I'm already using managed databases for other self-hosted services. Litestream does look like a viable solution for sqlite, but it's also an additional burden in terms of having to learn, deploy, and maintain another system just for use by Headscale. Ultimately this disqualifies Headscale as a viable alternative for use in the network I administer professionally. I completely understand that this is not a priority for the HS dev team, and it's not my intention to argue that point. I just want to make sure my POV is properly articulated.
Many thanks @mpoindexter! EDIT: P.S. Is there any chance of the fix being backported to the current stable release?
@alexfornuto I would doubt it makes sense to backport, but I think just ensuring your headscale process runs using the UTC timezone should function as a workaround.
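For anyone who needs that workaround, a sketch of forcing UTC under systemd, assuming headscale runs as a service named headscale (for Docker, passing -e TZ=UTC to the container is the equivalent):
$ sudo systemctl edit headscale
# in the editor that opens, add the following drop-in, then save:
#   [Service]
#   Environment=TZ=UTC
$ sudo systemctl restart headscale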
@mpoindexter I will likely do that if we decide to proceed with headscale in that environment. FWIW, it would make sense to me to backport it since a production deployment requiring a more reliable db backend than sqlite would also likely require a stable release (my environment does), and v0.23 is still in beta.
Bug description
Tailscale clients authenticate successfully with headscale when headscale is configured to use postgres but then get stuck in a loop and keep refreshing keys.
More specifically:
In the case of sqlite,
In the case of postgres, the value comes back as 0001-01-01 05:53:28+05:53:28.
The client never sends a payload with read_only=false.
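For what it's worth, that timestamp looks like the zero time (0001-01-01 00:00:00 UTC) rendered in a non-UTC session timezone rather than in UTC. Whether that is the root cause is an assumption, but the exact value can be reproduced in psql with a zone such as Asia/Kolkata, whose pre-1900 local mean time offset is +05:53:28:
$ sudo -u postgres psql -c "SET TIME ZONE 'Asia/Kolkata'; SELECT '0001-01-01 00:00:00+00'::timestamptz;"
# prints 0001-01-01 05:53:28+05:53:28
If that is what is happening, it would also explain why SQLite, which hands back exactly what was written, does not trigger the problem.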
To Reproduce
Try to register any tailscale client.
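For completeness, the registration flow with a preauth key looks roughly like this; the CLI flags vary between headscale versions (older releases use --namespace instead of --user), so treat it as a sketch:
$ headscale users create test
$ headscale preauthkeys create --user test --reusable --expiration 24h
$ sudo tailscale up --login-server https://headscale.example.com --authkey <key from previous step>
$ tailscale status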
Context info