As part of Tock's compliance process, egress filtering is set up for cloud.gov deployments of Tock. Specifically, Tock fulfills the NIST 800-53 rev5 SC-7 control, which states:
> Connect to external networks or systems only through managed interfaces consisting of boundary protection devices arranged in accordance with an organizational security and privacy architecture.
Accordingly, we have configured a Caddy proxy with an allow list and deny list. This proxy configuration rejects all external connections to all sites save for these exceptions:
- `uaa.fr.cloud.gov`: The cloud.gov UAA server, which in turn uses GSA SecureAuth for authentication
- `google-analytics.com`: DAP, for web app analytics
- `api.newrelic.com`: The New Relic REST API endpoint, which is used by the `newrelic-admin` tool to record deployments
- `gov-collector.newrelic.com`: The FedRAMP-compliant New Relic APM collector endpoint, used by the New Relic Python agent
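These hosts make up the proxy's allow list. As a hypothetical sketch only (the actual format is defined by the cg-egress-proxy repo, and the real `tock.vars.yml` may differ), the relevant entries might look like:

```yaml
# Hypothetical sketch; see the cg-egress-proxy docs for the actual vars format.
proxyallow: |
  uaa.fr.cloud.gov
  google-analytics.com
  api.newrelic.com
  gov-collector.newrelic.com
proxydeny: ""
```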
cloud.gov allows configuration of egress traffic controls on a per-space basis only, and the proxy and the application require different network security groups. This is why the proxy runs in a separate space.
Updating Tock's egress proxy settings is a rarely performed, highly manual process that requires rebuilding the proxy configuration in a local development environment. If possible, seek an administrator of this repo to pair with you as you make changes.
The following instructions use the staging egress proxy as an example.
Pull the current version of the Caddy proxy application from GSA-TTS/cg-egress-proxy. Refer to its documentation for more information about local development.
If you have not previously cloned the repo, do so:
git clone git@github.com:GSA-TTS/cg-egress-proxy.git
If you have previously cloned the repo, ensure you are working from the current version with git:

- Stash or delete any local changes
- Check out the `main` branch
- Pull the `main` branch from upstream
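The update steps above amount to something like the following (assuming your remote is named `origin`):

```shell
# Sync a previously cloned cg-egress-proxy repo with upstream
cd cg-egress-proxy
git stash            # or: git checkout -- .  to discard local changes
git checkout main
git pull origin main
```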
Log in to cloud.gov:

cf login -a api.fr.cloud.gov --sso
If you are setting up a new egress proxy from scratch, create a new cloud.gov space:
cf create-space staging-egress -o gsa-18f-tock
cf target -s staging-egress
Copy these files from your local `tock` repo into your `cg-egress-proxy` repo:
Delete the code comments from `tock.vars.yml`.
In your `cg-egress-proxy` repo, manually set the `username` and `password` values in `tock.vars.yml`.

If you are setting up a new proxy, use the `uuidgen` command to create a new, random username and password. Paste each into the vars file for the appropriate key.
If you are updating an existing proxy, retrieve the current username and password from the deployed egress proxy application:
cf env staging-egress | grep PROXY_USERNAME
cf env staging-egress | grep PROXY_PASSWORD
Paste each value into the vars file for the appropriate key.
Push the egress proxy application to your space.
cf target -s staging-egress
cf push --vars-file tock.vars.yml
SSH into the proxy application's container to make sure that it is running and restricting URLs as advertised.
cf ssh staging-egress -t -c "/tmp/lifecycle/launcher /home/vcap/app /bin/bash 0"
# from the staging-egress terminal
# test that it is blocking egress appropriately
$ curl https://18f.gsa.gov
> curl: (56) Received HTTP code 403 from proxy after CONNECT
# test that it is allowing egress appropriately
$ curl https://google-analytics.com
> (html response)
Once the egress proxy looks good, you will need to set the proxy environment variable on Tock staging. Use the proxy path from the egress space.
cf target -s tock-staging
# enable tock staging to talk to the egress server
cf add-network-policy tock-staging staging-egress -s staging-egress --protocol tcp --port 61443
# set an environment variable with the egress_proxy path
cf set-env tock-staging egress_proxy https://<username>:<password>@<egress-host>.apps.internal:61443
# restage the application so it can use the variable
cf restage tock-staging
SSH into the Tock staging application and confirm with `curl` that outbound traffic is being vetted by staging-egress.
cf ssh tock-staging -t -c "/tmp/lifecycle/launcher /home/vcap/app /bin/bash 0"
# from the tock-staging terminal
# test that it is blocking egress appropriately
$ curl https://18f.gsa.gov
> curl: (56) Received HTTP code 403 from proxy after CONNECT
# test that it is allowing egress appropriately
$ curl https://google-analytics.com
> (html response)
To troubleshoot end-to-end traffic, you may want to start with the egress proxy logs:
cf logs staging-egress --recent
If network calls from the Tock application are reaching the egress proxy, you should see log lines from the proxy indicating whether the calls were allowed or denied.
If a call from the Tock application is behaving unexpectedly (i.e. failing when it should be allowed, or succeeding when it should be denied):
- If the proxy logs show that the proxy is processing the call, double-check the `proxyallow` and `proxydeny` settings in `tock.vars.yml`.
- If the proxy log doesn't contain any record of the call, double-check the URL in the `egress_proxy` environment variable and the network policy for the Tock application.
The `.profile` file (see the relevant documentation) is configured to export the environment variables `http_proxy` and `https_proxy` set to whatever `egress_proxy` is set to. This allows us to update cloud.gov buildpacks and build the application itself without the proxy active; in other words, the proxy is only active once the application has booted up.
It additionally exports a `NEW_RELIC_PROXY_HOST` variable set to the value of `egress_proxy`. This variable is required by the New Relic Python agent and the `newrelic-admin` tool.
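A minimal sketch of what those `.profile` exports might look like (the actual file may differ):

```shell
# Hypothetical sketch of the relevant .profile lines.
# Route outbound HTTP(S) traffic through the authenticated egress proxy.
export http_proxy="$egress_proxy"
export https_proxy="$egress_proxy"
# The New Relic Python agent and newrelic-admin read their proxy from here.
export NEW_RELIC_PROXY_HOST="$egress_proxy"
```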
The Python `certifi` library does not pick up system-wide certificate authority files automatically. Instead, we have configured manifest files to explicitly set the environment variable `REQUESTS_CA_BUNDLE` so that Python libraries, including `certifi`, will use these certificates. If we do not, then all connections to the proxy are considered untrusted. (cloud.gov-specific certificates are in `/etc/cf-system-certificates` and replicated to `/etc/ssl/certs/ca-certificates.crt`.)
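For instance, a manifest entry setting this variable might look like the following sketch (the real Tock manifests may differ):

```yaml
# Hypothetical manifest snippet; the actual Tock manifests may differ.
applications:
  - name: tock-staging
    env:
      # Point Python's certifi/requests at the system CA bundle,
      # which includes the cloud.gov-specific certificates.
      REQUESTS_CA_BUNDLE: /etc/ssl/certs/ca-certificates.crt
```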