https://argo-cd.readthedocs.io/en/stable/
Declarative GitOps Continuous Delivery of applications on Kubernetes.
- Key Points
- Install Quickly
- Find the ArgoCD default admin password
- ArgoCD templates
- ArgoCD & Kubernetes Scripts
- ArgoCD Kustomize + Helm Integration for GitOps
- GitOps ArgoCD itself
- GitHub Webhooks Integration
- ArgoCD CLI
- Clusters
- Applications
- Azure AD Authentication for SSO
- Google Authentication for SSO
- GitHub Webhooks
- CI/CD - Jenkins CI -> ArgoCD Integration
- Prometheus metrics + Grafana dashboard
- Notifications
- Performance Tuning
- Troubleshooting
- Good UI
- Kubernetes native - everything is defined in k8s yamls via CRDs so easy to GitOps ArgoCD itself
- Can manage multiple Kubernetes clusters (although you might want to split this for scaling)
- Project and Application configurations must be installed to the argocd namespace for ArgoCD to pick them up
- Sync only detects / replaces parts that are different from the manifests in Git
  - if you add / change a field that is not in the Git manifests then ArgoCD won't change it, as it doesn't replace the entire object
- Projects restrict Git source, destination cluster + namespace, permissions
- Applications in project deploy k8s manifests from Git repo
- Active community
Components:
- argocd-server - API server & UI
- argocd-application-controller - monitors live k8s state vs the Git repo
- argocd-repo-server - maintains a cache of the Git repo + generates k8s manifests (kustomize / helm)
Ready-made config to deploy to Kubernetes - will immediately bring up a cluster:
Deploy from the overlay directory and edit the ingress*.yaml with the FQDN you want to use (you should have ingress and cert-manager configured so the SSL URL is available).
kubectl -n argocd get secret argocd-initial-admin-secret -o jsonpath='{.data.password}' | base64 --decode
DevOps-Bash-tools - Kubernetes section
Have ArgoCD use Kustomize to materialize Helm charts, patch and combine them with other yamls such as custom ingresses and manage them all in a single ArgoCD app in a GitOps fashion:
Revision control and diff all ArgoCD configuration by defining it all in YAMLs using native K8s objects defined by CRDs.
ArgoCD Self-Managing App Config
App-of-Apps Config - have ArgoCD apps automatically found and loaded from any yamls found in the /apps directory
App-of-Projects Config - have projects automatically found and loaded from any yamls in the projects/ directory
GitHub Webhooks Integration Template
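The App-of-Apps pattern above can be expressed as a single parent Application that recurses into the apps/ directory. A minimal sketch - the repo URL and app name are placeholders, not the template's actual values:

```yaml
# Hedged sketch of an App-of-Apps parent - repo URL is a placeholder
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: apps                # parent app that loads all child apps
  namespace: argocd         # must be in the argocd namespace to be picked up
spec:
  project: default
  source:
    repoURL: git@github.com:myorg/my-argocd-repo.git  # placeholder
    targetRevision: HEAD
    path: apps              # directory containing the child Application yamls
    directory:
      recurse: true         # also load yamls from subdirectories
  destination:
    server: https://kubernetes.default.svc  # in-cluster
    namespace: argocd
  syncPolicy:
    automated: {}           # auto-create child apps as yamls appear in Git
```

The App-of-Projects config works the same way with path: projects/ instead.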
On Mac:
brew install argocd
or download from server for either Mac or Linux:
os="$(uname -s | tr '[:upper:]' '[:lower:]')"
mkdir -p -v ~/bin
curl -L -o ~/bin/argocd "https://$ARGOCD_HOST/download/argocd-$os-amd64" &&
chmod +x ~/bin/argocd
export PATH="$PATH:$HOME/bin"
or use the script in the DevOps-Bash-tools repo, which figures out the OS and downloads the latest CLI version binary from GitHub:
install_argocd.sh
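The curl URL above hardcodes amd64, which breaks on Apple Silicon and ARM Linux. A hedged sketch of mapping uname -m machine names to the arch names used in ArgoCD release binary filenames (mapping assumed from typical release naming):

```shell
# Map 'uname -m' output to the arch names ArgoCD release binaries use
arch_map() {
  case "$1" in
    x86_64)        echo amd64 ;;
    aarch64|arm64) echo arm64 ;;
    *)             echo "$1"  ;;  # pass anything else through unchanged
  esac
}

os="$(uname -s | tr '[:upper:]' '[:lower:]')"
arch="$(arch_map "$(uname -m)")"
echo "would download: argocd-$os-$arch"
```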
Get the initial admin password:
PASSWORD="$(kubectl -n argocd get secret argocd-initial-admin-secret -o jsonpath='{.data.password}' | base64 --decode)"
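The base64 stage of that pipeline can be sanity-checked without a cluster - Kubernetes stores secret values base64-encoded and the jsonpath output has no trailing newline, hence printf '%s'. The encoded string here is a made-up sample, not a real secret:

```shell
# Decode stage of the pipeline above, with a sample value
encoded="czNjcjN0"                        # base64 of the sample string 's3cr3t'
printf '%s' "$encoded" | base64 --decode  # -> s3cr3t
```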
ARGOCD_HOST must not have an https:// prefix:
argocd login "$ARGOCD_HOST" --username admin --password "$PASSWORD" --grpc-web
or use the argocd-grpc.domain.com URL if you've set up the extra ingress, but this didn't work in testing.
Change the admin password - you can delete the obsolete argocd-initial-admin-secret after that as it's no longer used:
argocd account update-password
Generate a long-lived JWT token (this environment variable keeps the CLI authenticated). Requires enabling the apiKey permission using cm.users.patch.yaml:
export ARGOCD_AUTH_TOKEN="$(argocd account generate-token)"
Create an SSH key for Private Repo access:
ssh-keygen -f ~/.ssh/argocd_github_key
Load it to a Kubernetes secret which is referenced from cm-repos.patch.yaml:
kubectl create secret generic github-ssh-key -n argocd --from-file=private-key=$HOME/.ssh/argocd_github_key
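The cm-repos.patch.yaml itself isn't shown here; a hypothetical sketch of how a private repo entry in argocd-cm can reference that secret (the repo URL is a placeholder):

```yaml
# Hypothetical sketch - private repo entry referencing the secret created above
apiVersion: v1
kind: ConfigMap
metadata:
  name: argocd-cm
  namespace: argocd
data:
  repositories: |
    - url: git@github.com:myorg/myrepo.git  # placeholder private repo
      sshPrivateKeySecret:
        name: github-ssh-key   # the secret created by the command above
        key: private-key       # the key within that secret
```

Remember to also add the repo's public key to GitHub as a read-only deploy key.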
Add clusters to deploy to.
If you're only deploying to the local cluster where the ArgoCD UI is running then you can just use https://kubernetes.default.svc
Find cluster's context name from your local kubeconfig:
kubectl config get-contexts -o name
Add the cluster from the kubectl cluster context configuration:
argocd cluster add "$context_name" --grpc-web  # --grpc-web avoids a warning message when connecting through the https ingress
Installs argocd-manager to the kube-system namespace in this cluster with an admin ClusterRole.
Add Applications to deploy to clusters.
Create and apply an argocd-app.yaml and let ArgoCD deploy it.
You can also deploy an app imperatively via the CLI, although this should not be done for serious work, which should go through GitOps using the above template.
kubectl create ns guestbook
argocd app create guestbook --repo https://github.com/argoproj/argocd-example-apps.git \
--path guestbook \
--dest-server https://kubernetes.default.svc \
--dest-namespace guestbook \
--grpc-web
--dest-server https://kubernetes.default.svc means in-cluster. Specify an external master URL for deploying to other clusters.
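The imperative command above has a declarative equivalent which can be version controlled. A minimal sketch of an argocd-app.yaml for the same guestbook app, using the example repo URL from the command above:

```yaml
# Declarative equivalent of the 'argocd app create guestbook' command above
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: guestbook
  namespace: argocd     # must be in the argocd namespace to be picked up
spec:
  project: default
  source:
    repoURL: https://github.com/argoproj/argocd-example-apps.git
    targetRevision: HEAD
    path: guestbook
  destination:
    server: https://kubernetes.default.svc  # in-cluster
    namespace: guestbook
```

Apply it with kubectl apply -f argocd-app.yaml and ArgoCD picks it up.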
List all ArgoCD apps:
argocd app list
Get info on the specific guestbook app we just deployed:
argocd app get guestbook
Trigger a sync of the guestbook app:
argocd app sync guestbook
Override image version to deploy for Dev / Staging environments:
This command only overrides this exact image with the new tag:
argocd app set "$name" --kustomize-image "eu.gcr.io/$CLOUDSDK_CORE_PROJECT/$image:$tag"
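Behind the scenes this records a Kustomize image override on the Application spec itself rather than editing Git, which is why it suits ephemeral Dev / Staging overrides. Roughly what it writes, with a placeholder image:

```yaml
# Sketch of what 'argocd app set --kustomize-image' records on the Application
spec:
  source:
    kustomize:
      images:
        - eu.gcr.io/my-project/myimage:v1.2.3  # placeholder image:tag
```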
Azure AD Authentication Config Template
For CLI access:
- App Registration section:
  - Authentication page:
    - add a section for Mobile & Desktop applications with the URI https://$ARGOCD_HOST/auth/callback
    - set Allow public client flows to Yes at the bottom - to work around this issue
Medium article on Azure AD auth
Remember: don't git commit the argocd-cm configmap addition of the dex.config key on that page, which contains the clientID and clientSecret.
It's not necessary to have this in Git, as ArgoCD self-management won't strip out the field: there is no such field in the Git configmap to overwrite it.
kubectl logs -f -n argocd deploy/argocd-server
If you see this:
level=info msg="finished unary call with code OK" grpc.code=OK grpc.method=Get grpc.service=cluster.SettingsService grpc.start_time="2024-03-02T01:41:21Z" grpc.time_ms=100.663 span.kind=server system=grpc
Even a browser hard refresh doesn't solve it.
Restarting the ArgoCD server pod fixes it:
kubectl rollout restart deploy/argocd-server
Seems like a bug.
kubectl logs -f -n argocd deploy/argocd-server
If you see this error:
level=warning msg="Failed to verify token: failed to verify token: Failed to query provider
\"https://argocd-production.domain.co.uk/api/dex\": Get \"https://argocd-production.domain.co.uk/api/dex/.well-known/openid-configuration\": dial tcp 10.x.x.x:443: i/o timeout"
There is no reason the pods shouldn't be able to connect to the internal ingress, as all private IPs are allowed in the config.
After much head-scratching, it turns out a restart of the argocd-server pod after Dex configuration solves it:
kubectl rollout restart deploy/argocd-server
I have no explanation for this behaviour other than it's probably a bug that gets solved by resetting the argocd-server state.
Two options in argocd-rbac-cm
are given in rbac-cm.patch.yaml:
- find the group ID and assign a role to it in the policy line (preferred)
- change policy.default: role:readonly to policy.default: role:admin (allows all users to click everything, but most will be reset by ArgoCD's GitOps self-management, except for the Git repo connector)
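For the preferred first option, the group-to-role mapping in argocd-rbac-cm looks roughly like this - the group object ID below is a placeholder, not a real ID:

```yaml
# Hedged sketch of rbac-cm.patch.yaml contents - group ID is a placeholder
apiVersion: v1
kind: ConfigMap
metadata:
  name: argocd-rbac-cm
  namespace: argocd
data:
  policy.default: role:readonly
  policy.csv: |
    # g, <SSO group object ID>, <role> - grants the role to group members
    g, 00000000-0000-0000-0000-000000000000, role:admin
```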
For faster triggers than polling GitHub repo:
Official Doc - Automation from CI Pipelines
Update the image version in a Git repo which ArgoCD watches.
If using Kustomize, this is the easiest way to update the version number, Jenkins can run this in shell steps, like this Jenkins Shared Library - gitKustomizeImage.groovy:
kustomize edit set image eu.gcr.io/myimage:v2.0
git commit -am "Jenkins updated myimage to v2.0"
git push
Ensure ArgoCD CLI is configured and authenticated via these environment variables:
export ARGOCD_SERVER=argocd.mycompany.com
export ARGOCD_AUTH_TOKEN=<JWT token generated from project> # further up under setting up CLI
Trigger a sync of the ArgoCD app rather than waiting for it to detect the change, and have your CI/CD pipeline wait for the result, like these Jenkins Shared Library functions - argoDeploy.groovy and argoSync.groovy:
argocd app sync "$app"
argocd app wait "$app"
See the HariSekhon/Jenkins Shared Library for more production code related to ArgoCD, Docker, GCP and other technologies.
If you have a lot of applications, increase the number of replicas and set the corresponding environment variables:
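A sketch of what that looks like for the application controller, which shards clusters across replicas - assuming it runs as a StatefulSet, and noting that the replica count and the ARGOCD_CONTROLLER_REPLICAS env var must be kept in sync:

```yaml
# Hedged sketch - scale the application controller; spec.replicas and the
# ARGOCD_CONTROLLER_REPLICAS env var must match for sharding to work
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: argocd-application-controller
  namespace: argocd
spec:
  replicas: 3
  template:
    spec:
      containers:
        - name: argocd-application-controller
          env:
            - name: ARGOCD_CONTROLLER_REPLICAS
              value: "3"
```

The argocd-repo-server can be scaled with a plain replicas bump, no env var needed.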
Upgrade to Kustomize 4 - see repo-server.kustomize.patch.yaml for how to download a newer version than ArgoCD bundles.
- UI Refresh button drop down -> hard refresh
- delete the /tmp cache in the argocd-repo-server pod:
pod=$(kubectl get po -n argocd -o name -l app.kubernetes.io/name=argocd-repo-server)
kubectl exec -ti -n argocd "$pod" -- sh -c 'rm -rf /tmp/*'
#kubectl delete -n argocd "$pod"
Ported from private Knowledge Base page 2021+