Authentication/Authorization microservice #1796
Conversation
@humphd is attempting to deploy a commit to a Personal Account owned by @Seneca-CDOT on Vercel. @Seneca-CDOT first needs to authorize it.
This pull request is being automatically deployed with Vercel. 🔍 Inspect: https://vercel.com/humphd/telescope/hjpejts59
@chrispinkney I've added you to the reviewers list too, since you blogged that you're using an OAuth2 flow in your systems project, and since you're going to be connecting your code to mine with the user service.
I get this when I try to run:

```
Output: Jest dev-server output:
npx: installed 23 in 1.858s
[Jest Process Manager] Starting up http-server, serving src/api/auth/test/e2e
Available on:
[Jest Process Manager] http://127.0.0.1:8888
  http://192.168.2.17:8888
  http://172.21.0.1:8888
Hit CTRL-C to stop the server
Error: Host system is missing dependencies!
Missing libraries are:
  libwebp.so.6
  libenchant.so.1
  libicui18n.so.66
  libicuuc.so.66
```

Tested on Linux distros: Manjaro and Fedora 32
@manekenpix Playwright needs various libs to run its browsers. They have a Docker container that lists them: https://github.com/microsoft/playwright/blob/master/utils/docker/Dockerfile.bionic I wonder how we should deal with this? Via an update to the docs for installing the dev env? It's interesting that it runs OK in CI on GitHub but not on your distro.
As long as we can run the Docker tests, I believe that's good enough. No point adding additional dependencies to the development environment, which increase its size. Thoughts?
@manekenpix is going to test more to see what those missing libs are all about. We do need to figure out at least a docs update to tell people that we use them for e2e tests. In theory, running e2e tests only in CI is probably OK (e.g., not everyone needs to do it locally) |
```js
app.use(
  // TODO: should use RedisStore in prod
  session({
    secret: process.env.SECRET || `telescope-has-many-secrets-${Date.now()}!`,
```
sounds secure
Overview
Our current authentication/authorization system is designed for a monolithic architecture, where the whole app (back-end and front-end) is served from the same origin (i.e., `https://domain:port` is an origin; https://telescope.cdot.systems:443 is our prod origin). It works like this:

1. The user goes to our `/login` route.
2. The SSO provider authenticates them and redirects back to `/login/callback` in our server, and a session cookie is set with their user info (we get that back with this request).

Moving to Distributed Auth
We want to split the app up and run it on different origins. For example, microservices, Vercel, localhost, etc. That distributed architecture breaks our security model, since we can't share that cookie across our network of connected apps and services.
We need a way to let a user authenticate, then come back to another part of our system and prove that they are authenticated. A common way to do this type of thing is with OAuth2. OAuth2 is for authorization vs. authentication. It lets systems decide if a user is authorized to do something, and assumes authentication happened somewhere else.
I spent a long time looking into OAuth2 for our needs, but it's overly complicated for our (current) use cases. Because we aren't authorizing third-party apps (e.g., like GitHub or Google do with external apps that can connect to your account), and because we already have an in-house authentication system connected to our authorization needs, we can build a simpler flow.
A Plan
This is the beginnings of a new auth service. It does a few things; here's a rough sketch of what it will look like:

1. The front-end app generates some random `state` and stores it in `localStorage` for later. The user is then sent to `api.telescope.cdot.systems/auth/login?redirect_uri=https://telescope.cdot.systems/&state=a3f1b3413`. The URL contains two things: 1) a `redirect_uri` containing a URL pointing back to the entry point of the front-end app; 2) some random `state`. The latter is used as a ride-along value on all the redirects that are about to take place, and lets the client know at the end that nothing was tampered with in between.
2. The auth service receives the request (`login?redirect_uri=https://telescope.cdot.systems/&state=a3f1b3413`) and stores the `redirect_uri` and `state` in the session. It then prepares a SAML message for this user to authenticate, and redirects them to the SSO identity provider server.
3. After the user logs in, the SSO server redirects back to `/login/callback`, and we examine whether or not the user was authenticated. If they were, we create an access token (JWT) and redirect back to the original app at the `redirect_uri`: `https://telescope.cdot.systems?access_token=...jwt-token-here...&state=...original-state-here...`
4. The front-end app extracts the `access_token` and `state`. It confirms the `state` is what it expects (e.g., compares it to what's in `localStorage`). The token is then used with all subsequent API requests to our microservices: `Authorization: bearer <jwt token here>`
Easy, right? I'm skipping some details, but that's the main thrust of what's happening here.
Running and Testing
You can run and test this (yes, I even wrote unit and end-to-end browser tests!). Both require you to run a fake SAML SSO Login server via docker.
Tests
For running the tests:

```
cd src/api
docker-compose -f docker-compose-tests.yml up

cd auth
npm run test:unit
npm run test:e2e
```
If you change line 10 of `src/api/auth/test/e2e/e2e.test.js` to:

```js
browser = await chromium.launch({ headless: false, slowMo: 1000 });
```

you can see the browsers start and run the e2e tests visually. It's possible these will fail on your machine due to timing/resource limits. I'm still getting a feel for how far I can push the test runner.

Manually
You can play with this manually too. I made a tiny static HTML web page that lets you use it. You need to run the auth server and login container together in Docker:

```
cd src/api
docker-compose -f docker-compose-api.yml up --build auth login

cd auth
npm run test:manual
```
Now you can navigate to the various servers:
To try logging in:

1. Click the Login link.
2. Use `user1` and `user1pass`.
3. You'll be redirected back with an `access_token` and `state` on the URL. Copy the access_token (eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJpc3MiOiJodHRwOi8vYXV0aC5kb2NrZXIubG9jYWxob3N0IiwiYXVkIjoiaHR0cDovL2xvY2FsaG9zdDo4ODg4LyIsInN1YiI6InVzZXIxQGV4YW1wbGUuY29tIiwiaWF0IjoxNjEzNzg1MDk1LCJleHAiOjE2MTM3ODg2OTV9.53s-dCQX5xPAryTvpEpV6Vo7NAT0jHSf4sUJkh0EXxU)

That is the authorization info we'll pass around to different servers. We'll sign it cryptographically, so it can't be faked from some other source. We'll also add more claims to it (e.g., user vs. admin, name).
There's more production-level implementation work needed, but it's probably ready to land so people can play with it. I also haven't connected the tests to CI properly yet.
I also think I want to integrate this with the work in #1642 that @chrispinkney is doing (i.e., it's tightly bound to the idea of authorization, so it might make sense to do it all in one service).
I wanted to get this up now because the other microservices are going to need it, and I didn't want to block progress for others during 1.8.
I'll do a proper demo of all this next week during our calls. Let me know if you have questions or thoughts.