
Enable arm again #3852

Merged: 1 commit merged into kubernetes:master from the arm branch on Jun 27, 2019

Conversation

@aledbf (Member) commented Mar 5, 2019

What this PR does / why we need it:

Which issue this PR fixes (optional, in fixes #<issue number>(, fixes #<issue_number>, ...) format, will close that issue when PR gets merged): fixes #

Special notes for your reviewer:

@k8s-ci-robot added do-not-merge/work-in-progress Indicates that a PR should not merge because it is a work in progress. cncf-cla: yes Indicates the PR's author has signed the CNCF CLA. size/M Denotes a PR that changes 30-99 lines, ignoring generated files. approved Indicates a PR has been approved by an approver from all required OWNERS files. labels Mar 5, 2019
@k8s-ci-robot added size/S Denotes a PR that changes 10-29 lines, ignoring generated files. and removed size/M Denotes a PR that changes 30-99 lines, ignoring generated files. labels Mar 5, 2019
@k8s-ci-robot added size/M Denotes a PR that changes 30-99 lines, ignoring generated files. and removed size/S Denotes a PR that changes 10-29 lines, ignoring generated files. labels Mar 5, 2019
@aledbf (Member, Author) commented Mar 5, 2019

Issue building the new image:

Scanning dependencies of target Boost
make[5]: Leaving directory '/root/.hunter/_Base/ccaf37e/dd0a8c3/10e86c7/Build/Boost/Build'
make[5]: Entering directory '/root/.hunter/_Base/ccaf37e/dd0a8c3/10e86c7/Build/Boost/Build'
[ 12%] Creating directories for 'Boost'
[ 25%] Performing download step (download, verify and extract) for 'Boost'
-- Downloading...
   dst='/root/.hunter/_Base/Download/Boost/1.69.0-p0/2539b07/v1.69.0-p0.tar.gz'
   timeout='none'
-- Using src='https://github.com/hunter-packages/boost/archive/v1.69.0-p0.tar.gz'
-- verifying file...
       file='/root/.hunter/_Base/Download/Boost/1.69.0-p0/2539b07/v1.69.0-p0.tar.gz'
-- Downloading... done
-- extracting...
     src='/root/.hunter/_Base/Download/Boost/1.69.0-p0/2539b07/v1.69.0-p0.tar.gz'
     dst='/root/.hunter/_Base/ccaf37e/dd0a8c3/10e86c7/Build/Boost/Source'
-- extracting... [tar xfz]
-- extracting... [analysis]
-- extracting... [rename]
-- extracting... [clean up]
-- extracting... done
[ 37%] No patch step for 'Boost'
[ 50%] Performing update step for 'Boost'
[ 62%] Performing configure step for 'Boost'
Dummy patch command
Building Boost.Build engine with toolset gcc... tools/build/src/engine/bin.linuxarm/b2
Detecting Python version... 2.7
Detecting Python root... /usr
Unicode/ICU support for Boost.Regex?... /usr
Generating Boost.Build configuration in project-config.jam...

Bootstrapping is done. To build, run:

    ./b2
    
To adjust configuration, edit 'project-config.jam'.
Further information:

   - Command line help:
     ./b2 --help
     
   - Getting started guide: 
     http://www.boost.org/more/getting_started/unix-variants.html
     
   - Boost.Build documentation:
     http://www.boost.org/build/doc/html/index.html

[ 75%] No build step for 'Boost'
[ 87%] Performing install step for 'Boost'
Unable to load Boost.Build: could not find "boost-build.jam"
---------------------------------------------------------------
BOOST_ROOT must be set, either in the environment, or 
on the command-line with -sBOOST_ROOT=..., to the root
of the boost installation.

Attempted search from /root/.hunter/_Base/ccaf37e/dd0a8c3/10e86c7/Build/Boost/Source up to the root
at /root/.hunter/_Base/ccaf37e/dd0a8c3/10e86c7/Build/Boost/share/boost-build
and in these directories from BOOST_BUILD_PATH and BOOST_ROOT: /usr/share/boost-build, /root/.hunter/_Base/ccaf37e/dd0a8c3/10e86c7/Install.
Please consult the documentation at 'http://www.boost.org'.
make[5]: *** [CMakeFiles/Boost.dir/build.make:74: Boost-prefix/src/Boost-stamp/Boost-install] Error 1
make[5]: Leaving directory '/root/.hunter/_Base/ccaf37e/dd0a8c3/10e86c7/Build/Boost/Build'
make[4]: *** [CMakeFiles/Makefile2:73: CMakeFiles/Boost.dir/all] Error 2
make[4]: Leaving directory '/root/.hunter/_Base/ccaf37e/dd0a8c3/10e86c7/Build/Boost/Build'
make[3]: *** [Makefile:84: all] Error 2
make[3]: Leaving directory '/root/.hunter/_Base/ccaf37e/dd0a8c3/10e86c7/Build/Boost/Build'

[hunter ** FATAL ERROR **] Build step failed (dir: /root/.hunter/_Base/ccaf37e/dd0a8c3/10e86c7/Build/Boost
[hunter ** FATAL ERROR **] [Directory:/root/.hunter/_Base/Download/Hunter/0.23.126/ccaf37e/Unpacked/cmake/projects/Boost]

------------------------------ ERROR -----------------------------
    https://docs.hunter.sh/en/latest/reference/errors/error.external.build.failed.html
------------------------------------------------------------------

CMake Error at /root/.hunter/_Base/Download/Hunter/0.23.126/ccaf37e/Unpacked/cmake/modules/hunter_error_page.cmake:12 (message):
Call Stack (most recent call first):
  /root/.hunter/_Base/Download/Hunter/0.23.126/ccaf37e/Unpacked/cmake/modules/hunter_fatal_error.cmake:20 (hunter_error_page)
  /root/.hunter/_Base/Download/Hunter/0.23.126/ccaf37e/Unpacked/cmake/modules/hunter_download.cmake:614 (hunter_fatal_error)
  /root/.hunter/_Base/Download/Hunter/0.23.126/ccaf37e/Unpacked/cmake/projects/Boost/hunter.cmake:381 (hunter_download)
  /root/.hunter/_Base/Download/Hunter/0.23.126/ccaf37e/Unpacked/cmake/modules/hunter_add_package.cmake:62 (include)
  build/cmake/DefineOptions.cmake:110 (hunter_add_package)
  CMakeLists.txt:58 (include)


-- Configuring incomplete, errors occurred!
See also "/root/.hunter/_Base/ccaf37e/dd0a8c3/10e86c7/Build/thrift/Build/thrift-Release-prefix/src/thrift-Release-build/CMakeFiles/CMakeOutput.log".
make[2]: *** [CMakeFiles/thrift-Release.dir/build.make:110: thrift-Release-prefix/src/thrift-Release-stamp/thrift-Release-configure] Error 1
make[1]: *** [CMakeFiles/Makefile2:73: CMakeFiles/thrift-Release.dir/all] Error 2
make: *** [Makefile:84: all] Error 2

[hunter ** FATAL ERROR **] Build step failed (dir: /root/.hunter/_Base/ccaf37e/dd0a8c3/10e86c7/Build/thrift
[hunter ** FATAL ERROR **] [Directory:/root/.hunter/_Base/Download/Hunter/0.23.126/ccaf37e/Unpacked/cmake/projects/thrift]

------------------------------ ERROR -----------------------------
    https://docs.hunter.sh/en/latest/reference/errors/error.external.build.failed.html
------------------------------------------------------------------

CMake Error at /root/.hunter/_Base/Download/Hunter/0.23.126/ccaf37e/Unpacked/cmake/modules/hunter_error_page.cmake:12 (message):
Call Stack (most recent call first):
  /root/.hunter/_Base/Download/Hunter/0.23.126/ccaf37e/Unpacked/cmake/modules/hunter_fatal_error.cmake:20 (hunter_error_page)
  /root/.hunter/_Base/Download/Hunter/0.23.126/ccaf37e/Unpacked/cmake/modules/hunter_download.cmake:614 (hunter_fatal_error)
  /root/.hunter/_Base/Download/Hunter/0.23.126/ccaf37e/Unpacked/cmake/projects/thrift/hunter.cmake:70 (hunter_download)
  /root/.hunter/_Base/Download/Hunter/0.23.126/ccaf37e/Unpacked/cmake/modules/hunter_add_package.cmake:62 (include)
  CMakeLists.txt:50 (hunter_add_package)


-- Configuring incomplete, errors occurred!
See also "/tmp/build/jaeger-client-cpp-cdfaf5bb25ff5f8ec179fd548e6c7c2ade9a6a09/.build/CMakeFiles/CMakeOutput.log".
root@343a9f2f19b8:/tmp/build/jaeger-client-cpp-cdfaf5bb25ff5f8ec179fd548e6c7c2ade9a6a09/.build# 
root@343a9f2f19b8:/tmp/build/jaeger-client-cpp-cdfaf5bb25ff5f8ec179fd548e6c7c2ade9a6a09/.build# cd /root/.hunter/_Base/ccaf37e/dd0a8c3/10e86c7/Build/Boost/Build

@aledbf (Member, Author) commented Apr 19, 2019

Waiting for feedback in jaegertracing/jaeger-client-cpp#151

@alexellis commented:

Looking forward to seeing this 👍

Note: you have some rebase conflicts showing up?

@codecov-io commented Jun 26, 2019

Codecov Report

❗ No coverage uploaded for pull request base (master@ecce3fd).
The diff coverage is 16.66%.


@@           Coverage Diff            @@
##             master   #3852   +/-   ##
========================================
  Coverage          ?   57.9%           
========================================
  Files             ?      87           
  Lines             ?    6544           
  Branches          ?       0           
========================================
  Hits              ?    3789           
  Misses            ?    2324           
  Partials          ?     431
Impacted Files Coverage Δ
internal/ingress/controller/template/template.go 83.93% <16.66%> (ø)

Continue to review full report at Codecov.

Legend:
Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Powered by Codecov. Last update ecce3fd...ddffa2a.

@aledbf (Member, Author) commented Jun 26, 2019

Still getting build errors https://gist.github.com/aledbf/62eee376ec5859bb32f97489c9242ded

@aledbf (Member, Author) commented Jun 26, 2019

@alexellis thank you for asking about the state of this PR. The error is located in the jaeger tracing plugin. Right now I am testing a conditional build that omits this plugin for arm. If this works, I can re-enable arm with the caveat that the opentracing feature (for jaeger) will not be available.
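
A minimal sketch of the conditional-build idea (illustrative only, not the actual build script; the ARCH variable and the script name are assumptions):

if [ "$ARCH" != "arm" ]; then
  # build the jaeger C++ client (the opentracing plugin) only on non-arm targets
  ./build-jaeger-plugin.sh
fi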

@aledbf force-pushed the arm branch 3 times, most recently from ec82c3a to bf0a814 on June 26, 2019 18:18
@aledbf (Member, Author) commented Jun 26, 2019

OK, this is working now. I am building the final version of the image.

[screenshot: Screenshot from 2019-06-26 14:03:14]

@alexellis I will post a link to a docker image of the ingress controller so you can test it.

@aledbf (Member, Author) commented Jun 27, 2019

I wonder if users use lua-resty-waf at all. My gut says we can drop support for it.

I want to add a new route to the ingress controller (Go) to dump a JSON object with stats of the ingress controller:

  • ingress controller version
  • arch
  • k8s API server version
  • informers stats
    • number of ingresses and paths
    • number of configmaps
    • number of secrets
    • name of annotations being used and count
    • number of configmap used in annotations (the informer could have 1000 configmaps and only 2 being referenced)
  • enabled features
  • number of workers
  • ram utilization

Then users could upload that report to a new Google form (not sure if it supports uploads), and we could then decide whether to do some cleanup, using two or three releases for deprecation and removal.
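
A rough sketch of how such a report could be collected once a route like this exists (everything below is hypothetical: the /stats path and the label selector are assumptions, and curl may not be present in the image):

POD=$(kubectl -n ingress-nginx get pods -l app=nginx-ingress -o name | head -n 1)
kubectl -n ingress-nginx exec "$POD" -- curl -s http://127.0.0.1:10254/stats > ingress-stats.json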

@k8s-ci-robot merged commit 2586542 into kubernetes:master on Jun 27, 2019
@aledbf deleted the arm branch on June 27, 2019 03:49
@alexellis commented:

I would love to test this. I'm currently installing and operating Nginx through the helm chart.

Can you provide a helm command that I can use?

@aledbf (Member, Author) commented Jun 27, 2019

@alexellis something like

helm install \
    --name nginx-ingress stable/nginx-ingress \
    --namespace arm \
    --set rbac.create=true \
    --set controller.service.type=NodePort

kubectl --namespace arm set image deployment/nginx-ingress-controller \
  nginx-ingress-controller=quay.io/kubernetes-ingress-controller/nginx-ingress-controller-arm:dev

@MattJeanes commented:

@aledbf apologies if this error is unrelated but I gave this a go (your exact commands) and the pod output was this (raspberry pi 3b):

-------------------------------------------------------------------------------
NGINX Ingress controller
  Release:    dev
  Build:      git-5ee82bd08
  Repository: https://github.com/aledbf/ingress-nginx
-------------------------------------------------------------------------------
 I0627 22:33:23.953859       6 flags.go:194] Watching for Ingress class: nginx
W0627 22:33:23.954592       6 flags.go:223] SSL certificate chain completion is disabled (--enable-ssl-chain-completion=false)
nginx version: openresty/1.15.8.1
W0627 22:33:23.974962       6 client_config.go:541] Neither --kubeconfig nor --master was specified.  Using the inClusterConfig.  This might not work.
I0627 22:33:23.976149       6 main.go:196] Creating API client for https://10.96.0.1:443
I0627 22:33:24.154351       6 main.go:240] Running in Kubernetes cluster version v1.14 (v1.14.0) - git (clean) commit 641856db18352033a0d96dbc99153fa3b27298e5 - platform linux/arm
I0627 22:33:24.175474       6 main.go:100] Validated nginx/nginx-ingress-default-backend as the default backend.
I0627 22:33:28.713949       6 main.go:111] Created fake certificate with PemFileName: /etc/ingress-controller/ssl/default-fake-certificate.pem
E0627 22:33:28.726154       6 main.go:131] v1.14.0
W0627 22:33:28.814667       6 store.go:624] Unexpected error reading configuration configmap: configmaps "nginx-ingress-controller" not found
I0627 22:33:28.871539       6 nginx.go:280] Starting NGINX Ingress controller
E0627 22:33:29.987585       6 reflector.go:125] k8s.io/ingress-nginx/internal/ingress/controller/store/store.go:180: Failed to list *v1beta1.Ingress: ingresses.networking.k8s.io is forbidden: User "system:serviceaccount:nginx:nginx-ingress" cannot list resource "ingresses" in API group "networking.k8s.io" at the cluster scope
E0627 22:33:31.001687       6 reflector.go:125] k8s.io/ingress-nginx/internal/ingress/controller/store/store.go:180: Failed to list *v1beta1.Ingress: ingresses.networking.k8s.io is forbidden: User "system:serviceaccount:nginx:nginx-ingress" cannot list resource "ingresses" in API group "networking.k8s.io" at the cluster scope
E0627 22:33:32.015138       6 reflector.go:125] k8s.io/ingress-nginx/internal/ingress/controller/store/store.go:180: Failed to list *v1beta1.Ingress: ingresses.networking.k8s.io is forbidden: User "system:serviceaccount:nginx:nginx-ingress" cannot list resource "ingresses" in API group "networking.k8s.io" at the cluster scope
E0627 22:33:33.027626       6 reflector.go:125] k8s.io/ingress-nginx/internal/ingress/controller/store/store.go:180: Failed to list *v1beta1.Ingress: ingresses.networking.k8s.io is forbidden: User "system:serviceaccount:nginx:nginx-ingress" cannot list resource "ingresses" in API group "networking.k8s.io" at the cluster scope
E0627 22:33:33.325338       6 checker.go:41] healthcheck error: Get http+unix://nginx-status/healthz: dial unix /tmp/nginx-status-server.sock: connect: no such file or directory
E0627 22:33:34.038846       6 reflector.go:125] k8s.io/ingress-nginx/internal/ingress/controller/store/store.go:180: Failed to list *v1beta1.Ingress: ingresses.networking.k8s.io is forbidden: User "system:serviceaccount:nginx:nginx-ingress" cannot list resource "ingresses" in API group "networking.k8s.io" at the cluster scope
E0627 22:33:35.071120       6 reflector.go:125] k8s.io/ingress-nginx/internal/ingress/controller/store/store.go:180: Failed to list *v1beta1.Ingress: ingresses.networking.k8s.io is forbidden: User "system:serviceaccount:nginx:nginx-ingress" cannot list resource "ingresses" in API group "networking.k8s.io" at the cluster scope
E0627 22:33:36.085533       6 reflector.go:125] k8s.io/ingress-nginx/internal/ingress/controller/store/store.go:180: Failed to list *v1beta1.Ingress: ingresses.networking.k8s.io is forbidden: User "system:serviceaccount:nginx:nginx-ingress" cannot list resource "ingresses" in API group "networking.k8s.io" at the cluster scope
E0627 22:33:37.097822       6 reflector.go:125] k8s.io/ingress-nginx/internal/ingress/controller/store/store.go:180: Failed to list *v1beta1.Ingress: ingresses.networking.k8s.io is forbidden: User "system:serviceaccount:nginx:nginx-ingress" cannot list resource "ingresses" in API group "networking.k8s.io" at the cluster scope
E0627 22:33:38.109134       6 reflector.go:125] k8s.io/ingress-nginx/internal/ingress/controller/store/store.go:180: Failed to list *v1beta1.Ingress: ingresses.networking.k8s.io is forbidden: User "system:serviceaccount:nginx:nginx-ingress" cannot list resource "ingresses" in API group "networking.k8s.io" at the cluster scope
E0627 22:33:39.123038       6 reflector.go:125] k8s.io/ingress-nginx/internal/ingress/controller/store/store.go:180: Failed to list *v1beta1.Ingress: ingresses.networking.k8s.io is forbidden: User "system:serviceaccount:nginx:nginx-ingress" cannot list resource "ingresses" in API group "networking.k8s.io" at the cluster scope
E0627 22:33:39.662185       6 checker.go:41] healthcheck error: Get http+unix://nginx-status/healthz: dial unix /tmp/nginx-status-server.sock: connect: no such file or directory
E0627 22:33:40.136173       6 reflector.go:125] k8s.io/ingress-nginx/internal/ingress/controller/store/store.go:180: Failed to list *v1beta1.Ingress: ingresses.networking.k8s.io is forbidden: User "system:serviceaccount:nginx:nginx-ingress" cannot list resource "ingresses" in API group "networking.k8s.io" at the cluster scope
E0627 22:33:41.153777       6 reflector.go:125] k8s.io/ingress-nginx/internal/ingress/controller/store/store.go:180: Failed to list *v1beta1.Ingress: ingresses.networking.k8s.io is forbidden: User "system:serviceaccount:nginx:nginx-ingress" cannot list resource "ingresses" in API group "networking.k8s.io" at the cluster scope
E0627 22:33:42.166882       6 reflector.go:125] k8s.io/ingress-nginx/internal/ingress/controller/store/store.go:180: Failed to list *v1beta1.Ingress: ingresses.networking.k8s.io is forbidden: User "system:serviceaccount:nginx:nginx-ingress" cannot list resource "ingresses" in API group "networking.k8s.io" at the cluster scope
E0627 22:33:43.176737       6 reflector.go:125] k8s.io/ingress-nginx/internal/ingress/controller/store/store.go:180: Failed to list *v1beta1.Ingress: ingresses.networking.k8s.io is forbidden: User "system:serviceaccount:nginx:nginx-ingress" cannot list resource "ingresses" in API group "networking.k8s.io" at the cluster scope
E0627 22:33:43.281129       6 checker.go:41] healthcheck error: Get http+unix://nginx-status/healthz: dial unix /tmp/nginx-status-server.sock: connect: no such file or directory
E0627 22:33:44.188361       6 reflector.go:125] k8s.io/ingress-nginx/internal/ingress/controller/store/store.go:180: Failed to list *v1beta1.Ingress: ingresses.networking.k8s.io is forbidden: User "system:serviceaccount:nginx:nginx-ingress" cannot list resource "ingresses" in API group "networking.k8s.io" at the cluster scope
E0627 22:33:45.205556       6 reflector.go:125] k8s.io/ingress-nginx/internal/ingress/controller/store/store.go:180: Failed to list *v1beta1.Ingress: ingresses.networking.k8s.io is forbidden: User "system:serviceaccount:nginx:nginx-ingress" cannot list resource "ingresses" in API group "networking.k8s.io" at the cluster scope
E0627 22:33:46.216230       6 reflector.go:125] k8s.io/ingress-nginx/internal/ingress/controller/store/store.go:180: Failed to list *v1beta1.Ingress: ingresses.networking.k8s.io is forbidden: User "system:serviceaccount:nginx:nginx-ingress" cannot list resource "ingresses" in API group "networking.k8s.io" at the cluster scope

@aledbf (Member, Author) commented Jun 27, 2019

@MattJeanes no, this is an issue with the helm chart if you run a k8s cluster > v1.14.0; it is related to #4127.

To fix this, please run the following patch commands to add new rules compatible with the new API:

kubectl patch --namespace ingress-nginx role nginx-ingress-role --type='json' -p='[{"op": "add", "path": "/rules/-", "value": {"apiGroups": ["networking.k8s.io"],"resources": ["ingresses"],"verbs": ["get","list","watch"]}}]'
kubectl patch --namespace ingress-nginx role nginx-ingress-role --type='json' -p='[{"op": "add", "path": "/rules/-", "value": {"apiGroups": ["networking.k8s.io"],"resources": ["ingresses/status"],"verbs": ["update"]}}]'

After running those commands, delete the pod. The new one will have the new permissions.

@MattJeanes commented Jun 27, 2019

Had to modify those commands slightly but still no dice. Not sure if your link was correct?

Here are the exact commands I used:

helm install \
    --name nginx-ingress stable/nginx-ingress \
    --namespace nginx-ingress \
    --set rbac.create=true \
    --set controller.service.type=NodePort

kubectl --namespace nginx-ingress set image deployment/nginx-ingress-controller \
    nginx-ingress-controller=quay.io/kubernetes-ingress-controller/nginx-ingress-controller-arm:dev

kubectl patch --namespace nginx-ingress role nginx-ingress --type='json' -p='[{"op": "add", "path": "/rules/-", "value": {"apiGroups": ["networking.k8s.io"],"resources": ["ingresses"],"verbs": ["get","list","watch"]}}]'

kubectl patch --namespace nginx-ingress role nginx-ingress --type='json' -p='[{"op": "add", "path": "/rules/-", "value": {"apiGroups": ["networking.k8s.io"],"resources": ["ingresses/status"],"verbs": ["update"]}}]'

Might be worth noting that the arm:0.20.0 build starts OK. Happy to take this to another issue if you want; I had a search around and couldn't find any other documentation/issues/etc. around this.

Some more hopefully useful output:

matt@Matt-PC:~$ kubectl get role --namespace nginx-ingress -o json
{
    "apiVersion": "v1",
    "items": [
        {
            "apiVersion": "rbac.authorization.k8s.io/v1",
            "kind": "Role",
            "metadata": {
                "creationTimestamp": "2019-06-27T23:21:54Z",
                "labels": {
                    "app": "nginx-ingress",
                    "chart": "nginx-ingress-1.7.0",
                    "heritage": "Tiller",
                    "release": "nginx-ingress"
                },
                "name": "nginx-ingress",
                "namespace": "nginx-ingress",
                "resourceVersion": "12102653",
                "selfLink": "/apis/rbac.authorization.k8s.io/v1/namespaces/nginx-ingress/roles/nginx-ingress",
                "uid": "59872e9f-9932-11e9-9d41-b827eb498b75"
            },
            "rules": [
                {
                    "apiGroups": [
                        ""
                    ],
                    "resources": [
                        "namespaces"
                    ],
                    "verbs": [
                        "get"
                    ]
                },
                {
                    "apiGroups": [
                        ""
                    ],
                    "resources": [
                        "configmaps",
                        "pods",
                        "secrets",
                        "endpoints"
                    ],
                    "verbs": [
                        "get",
                        "list",
                        "watch"
                    ]
                },
                {
                    "apiGroups": [
                        ""
                    ],
                    "resources": [
                        "services"
                    ],
                    "verbs": [
                        "get",
                        "list",
                        "update",
                        "watch"
                    ]
                },
                {
                    "apiGroups": [
                        "extensions"
                    ],
                    "resources": [
                        "ingresses"
                    ],
                    "verbs": [
                        "get",
                        "list",
                        "watch"
                    ]
                },
                {
                    "apiGroups": [
                        "extensions"
                    ],
                    "resources": [
                        "ingresses/status"
                    ],
                    "verbs": [
                        "update"
                    ]
                },
                {
                    "apiGroups": [
                        ""
                    ],
                    "resourceNames": [
                        "ingress-controller-leader-nginx"
                    ],
                    "resources": [
                        "configmaps"
                    ],
                    "verbs": [
                        "get",
                        "update"
                    ]
                },
                {
                    "apiGroups": [
                        ""
                    ],
                    "resources": [
                        "configmaps"
                    ],
                    "verbs": [
                        "create"
                    ]
                },
                {
                    "apiGroups": [
                        ""
                    ],
                    "resources": [
                        "endpoints"
                    ],
                    "verbs": [
                        "create",
                        "get",
                        "update"
                    ]
                },
                {
                    "apiGroups": [
                        ""
                    ],
                    "resources": [
                        "events"
                    ],
                    "verbs": [
                        "create",
                        "patch"
                    ]
                },
                {
                    "apiGroups": [
                        "networking.k8s.io"
                    ],
                    "resources": [
                        "ingresses"
                    ],
                    "verbs": [
                        "get",
                        "list",
                        "watch"
                    ]
                },
                {
                    "apiGroups": [
                        "networking.k8s.io"
                    ],
                    "resources": [
                        "ingresses/status"
                    ],
                    "verbs": [
                        "update"
                    ]
                }
            ]
        }
    ],
    "kind": "List",
    "metadata": {
        "resourceVersion": "",
        "selfLink": ""
    }
}
matt@Matt-PC:~$ kubectl get rolebinding --namespace nginx-ingress -o json
{
    "apiVersion": "v1",
    "items": [
        {
            "apiVersion": "rbac.authorization.k8s.io/v1",
            "kind": "RoleBinding",
            "metadata": {
                "creationTimestamp": "2019-06-27T23:21:54Z",
                "labels": {
                    "app": "nginx-ingress",
                    "chart": "nginx-ingress-1.7.0",
                    "heritage": "Tiller",
                    "release": "nginx-ingress"
                },
                "name": "nginx-ingress",
                "namespace": "nginx-ingress",
                "resourceVersion": "12102140",
                "selfLink": "/apis/rbac.authorization.k8s.io/v1/namespaces/nginx-ingress/rolebindings/nginx-ingress",
                "uid": "598d49ac-9932-11e9-9d41-b827eb498b75"
            },
            "roleRef": {
                "apiGroup": "rbac.authorization.k8s.io",
                "kind": "Role",
                "name": "nginx-ingress"
            },
            "subjects": [
                {
                    "kind": "ServiceAccount",
                    "name": "nginx-ingress",
                    "namespace": "nginx-ingress"
                }
            ]
        }
    ],
    "kind": "List",
    "metadata": {
        "resourceVersion": "",
        "selfLink": ""
    }
}
matt@Matt-PC:~$ kubectl get clusterrolebinding nginx-ingress -o json
{
    "apiVersion": "rbac.authorization.k8s.io/v1",
    "kind": "ClusterRoleBinding",
    "metadata": {
        "creationTimestamp": "2019-06-27T23:21:54Z",
        "labels": {
            "app": "nginx-ingress",
            "chart": "nginx-ingress-1.7.0",
            "heritage": "Tiller",
            "release": "nginx-ingress"
        },
        "name": "nginx-ingress",
        "resourceVersion": "12102135",
        "selfLink": "/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/nginx-ingress",
        "uid": "597a8e30-9932-11e9-9d41-b827eb498b75"
    },
    "roleRef": {
        "apiGroup": "rbac.authorization.k8s.io",
        "kind": "ClusterRole",
        "name": "nginx-ingress"
    },
    "subjects": [
        {
            "kind": "ServiceAccount",
            "name": "nginx-ingress",
            "namespace": "nginx-ingress"
        }
    ]
}
matt@Matt-PC:~$ kubectl get clusterrole nginx-ingress -o json
{
    "apiVersion": "rbac.authorization.k8s.io/v1",
    "kind": "ClusterRole",
    "metadata": {
        "creationTimestamp": "2019-06-27T23:21:53Z",
        "labels": {
            "app": "nginx-ingress",
            "chart": "nginx-ingress-1.7.0",
            "heritage": "Tiller",
            "release": "nginx-ingress"
        },
        "name": "nginx-ingress",
        "resourceVersion": "12102134",
        "selfLink": "/apis/rbac.authorization.k8s.io/v1/clusterroles/nginx-ingress",
        "uid": "596f32c9-9932-11e9-9d41-b827eb498b75"
    },
    "rules": [
        {
            "apiGroups": [
                ""
            ],
            "resources": [
                "configmaps",
                "endpoints",
                "nodes",
                "pods",
                "secrets"
            ],
            "verbs": [
                "list",
                "watch"
            ]
        },
        {
            "apiGroups": [
                ""
            ],
            "resources": [
                "nodes"
            ],
            "verbs": [
                "get"
            ]
        },
        {
            "apiGroups": [
                ""
            ],
            "resources": [
                "services"
            ],
            "verbs": [
                "get",
                "list",
                "update",
                "watch"
            ]
        },
        {
            "apiGroups": [
                "extensions"
            ],
            "resources": [
                "ingresses"
            ],
            "verbs": [
                "get",
                "list",
                "watch"
            ]
        },
        {
            "apiGroups": [
                ""
            ],
            "resources": [
                "events"
            ],
            "verbs": [
                "create",
                "patch"
            ]
        },
        {
            "apiGroups": [
                "extensions"
            ],
            "resources": [
                "ingresses/status"
            ],
            "verbs": [
                "update"
            ]
        }
    ]
}

Latest pod logs:

-------------------------------------------------------------------------------
NGINX Ingress controller
  Release:    dev
  Build:      git-5ee82bd08
  Repository: https://github.com/aledbf/ingress-nginx
-------------------------------------------------------------------------------
 I0627 23:36:30.715149       7 flags.go:194] Watching for Ingress class: nginx
W0627 23:36:30.715998       7 flags.go:223] SSL certificate chain completion is disabled (--enable-ssl-chain-completion=false)
nginx version: openresty/1.15.8.1
W0627 23:36:30.730456       7 client_config.go:541] Neither --kubeconfig nor --master was specified.  Using the inClusterConfig.  This might not work.
I0627 23:36:30.731551       7 main.go:196] Creating API client for https://10.96.0.1:443
I0627 23:36:30.920810       7 main.go:240] Running in Kubernetes cluster version v1.14 (v1.14.0) - git (clean) commit 641856db18352033a0d96dbc99153fa3b27298e5 - platform linux/arm
I0627 23:36:30.957370       7 main.go:100] Validated nginx-ingress/nginx-ingress-default-backend as the default backend.
I0627 23:36:37.475757       7 main.go:111] Created fake certificate with PemFileName: /etc/ingress-controller/ssl/default-fake-certificate.pem
E0627 23:36:37.504099       7 main.go:131] v1.14.0
W0627 23:36:37.570802       7 store.go:624] Unexpected error reading configuration configmap: configmaps "nginx-ingress-controller" not found
I0627 23:36:37.626477       7 nginx.go:280] Starting NGINX Ingress controller
E0627 23:36:38.746353       7 reflector.go:125] k8s.io/ingress-nginx/internal/ingress/controller/store/store.go:180: Failed to list *v1beta1.Ingress: ingresses.networking.k8s.io is forbidden: User "system:serviceaccount:nginx-ingress:nginx-ingress" cannot list resource "ingresses" in API group "networking.k8s.io" at the cluster scope
E0627 23:36:39.755765       7 reflector.go:125] k8s.io/ingress-nginx/internal/ingress/controller/store/store.go:180: Failed to list *v1beta1.Ingress: ingresses.networking.k8s.io is forbidden: User "system:serviceaccount:nginx-ingress:nginx-ingress" cannot list resource "ingresses" in API group "networking.k8s.io" at the cluster scope
E0627 23:36:40.765440       7 reflector.go:125] k8s.io/ingress-nginx/internal/ingress/controller/store/store.go:180: Failed to list *v1beta1.Ingress: ingresses.networking.k8s.io is forbidden: User "system:serviceaccount:nginx-ingress:nginx-ingress" cannot list resource "ingresses" in API group "networking.k8s.io" at the cluster scope
E0627 23:36:41.783611       7 reflector.go:125] k8s.io/ingress-nginx/internal/ingress/controller/store/store.go:180: Failed to list *v1beta1.Ingress: ingresses.networking.k8s.io is forbidden: User "system:serviceaccount:nginx-ingress:nginx-ingress" cannot list resource "ingresses" in API group "networking.k8s.io" at the cluster scope
E0627 23:36:42.794894       7 reflector.go:125] k8s.io/ingress-nginx/internal/ingress/controller/store/store.go:180: Failed to list *v1beta1.Ingress: ingresses.networking.k8s.io is forbidden: User "system:serviceaccount:nginx-ingress:nginx-ingress" cannot list resource "ingresses" in API group "networking.k8s.io" at the cluster scope
E0627 23:36:43.803262       7 reflector.go:125] k8s.io/ingress-nginx/internal/ingress/controller/store/store.go:180: Failed to list *v1beta1.Ingress: ingresses.networking.k8s.io is forbidden: User "system:serviceaccount:nginx-ingress:nginx-ingress" cannot list resource "ingresses" in API group "networking.k8s.io" at the cluster scope
E0627 23:36:44.814760       7 reflector.go:125] k8s.io/ingress-nginx/internal/ingress/controller/store/store.go:180: Failed to list *v1beta1.Ingress: ingresses.networking.k8s.io is forbidden: User "system:serviceaccount:nginx-ingress:nginx-ingress" cannot list resource "ingresses" in API group "networking.k8s.io" at the cluster scope
E0627 23:36:44.995472       7 checker.go:41] healthcheck error: Get http+unix://nginx-status/healthz: dial unix /tmp/nginx-status-server.sock: connect: no such file or directory
E0627 23:36:45.828702       7 reflector.go:125] k8s.io/ingress-nginx/internal/ingress/controller/store/store.go:180: Failed to list *v1beta1.Ingress: ingresses.networking.k8s.io is forbidden: User "system:serviceaccount:nginx-ingress:nginx-ingress" cannot list resource "ingresses" in API group "networking.k8s.io" at the cluster scope
E0627 23:36:46.841740       7 reflector.go:125] k8s.io/ingress-nginx/internal/ingress/controller/store/store.go:180: Failed to list *v1beta1.Ingress: ingresses.networking.k8s.io is forbidden: User "system:serviceaccount:nginx-ingress:nginx-ingress" cannot list resource "ingresses" in API group "networking.k8s.io" at the cluster scope
E0627 23:36:47.856288       7 reflector.go:125] k8s.io/ingress-nginx/internal/ingress/controller/store/store.go:180: Failed to list *v1beta1.Ingress: ingresses.networking.k8s.io is forbidden: User "system:serviceaccount:nginx-ingress:nginx-ingress" cannot list resource "ingresses" in API group "networking.k8s.io" at the cluster scope
E0627 23:36:48.082283       7 checker.go:41] healthcheck error: Get http+unix://nginx-status/healthz: dial unix /tmp/nginx-status-server.sock: connect: no such file or directory
E0627 23:36:48.869435       7 reflector.go:125] k8s.io/ingress-nginx/internal/ingress/controller/store/store.go:180: Failed to list *v1beta1.Ingress: ingresses.networking.k8s.io is forbidden: User "system:serviceaccount:nginx-ingress:nginx-ingress" cannot list resource "ingresses" in API group "networking.k8s.io" at the cluster scope
E0627 23:36:49.880386       7 reflector.go:125] k8s.io/ingress-nginx/internal/ingress/controller/store/store.go:180: Failed to list *v1beta1.Ingress: ingresses.networking.k8s.io is forbidden: User "system:serviceaccount:nginx-ingress:nginx-ingress" cannot list resource "ingresses" in API group "networking.k8s.io" at the cluster scope
E0627 23:36:50.889823       7 reflector.go:125] k8s.io/ingress-nginx/internal/ingress/controller/store/store.go:180: Failed to list *v1beta1.Ingress: ingresses.networking.k8s.io is forbidden: User "system:serviceaccount:nginx-ingress:nginx-ingress" cannot list resource "ingresses" in API group "networking.k8s.io" at the cluster scope
E0627 23:36:51.904185       7 reflector.go:125] k8s.io/ingress-nginx/internal/ingress/controller/store/store.go:180: Failed to list *v1beta1.Ingress: ingresses.networking.k8s.io is forbidden: User "system:serviceaccount:nginx-ingress:nginx-ingress" cannot list resource "ingresses" in API group "networking.k8s.io" at the cluster scope
E0627 23:36:52.916480       7 reflector.go:125] k8s.io/ingress-nginx/internal/ingress/controller/store/store.go:180: Failed to list *v1beta1.Ingress: ingresses.networking.k8s.io is forbidden: User "system:serviceaccount:nginx-ingress:nginx-ingress" cannot list resource "ingresses" in API group "networking.k8s.io" at the cluster scope

@aledbf (Member, Author) commented Jun 27, 2019

@MattJeanes please also execute

kubectl patch clusterrole nginx-ingress-clusterrole --type='json' -p='[{"op": "add", "path": "/rules/-", "value": {"apiGroups": ["networking.k8s.io"],"resources": ["ingresses"],"verbs": ["get","list","watch"]}}]'
kubectl patch clusterrole nginx-ingress-clusterrole --type='json' -p='[{"op": "add", "path": "/rules/-", "value": {"apiGroups": ["networking.k8s.io"],"resources": ["ingresses/status"],"verbs": ["update"]}}]'

The previous commands only patch the role in the ingress-nginx namespace.

@MattJeanes commented:

Thank you very much, that's fixed it. Just had to make a tiny tweak to your commands as below. Hopefully this can help others if they run into this problem too.

kubectl patch clusterrole nginx-ingress --type='json' -p='[{"op": "add", "path": "/rules/-", "value": {"apiGroups": ["networking.k8s.io"],"resources": ["ingresses"],"verbs": ["get","list","watch"]}}]'
kubectl patch clusterrole nginx-ingress --type='json' -p='[{"op": "add", "path": "/rules/-", "value": {"apiGroups": ["networking.k8s.io"],"resources": ["ingresses/status"],"verbs": ["update"]}}]'

@alexellis commented:

This is exciting 🎉

What is the final set of commands to run?

Quick question for @MattJeanes: last time I checked, tiller wasn't available for armhf - are you templating the YAML then applying it, or do you have a working tiller Docker image on your RPi too?

Alex

@MattJeanes commented:

@alexellis if you use the commands from my previous two comments you should get it working. Note that the patch commands are only needed for Kubernetes v1.14 and above; otherwise you can skip them. I used https://github.com/jessestuart/tiller-multiarch to install tiller on arm.

@mylesagray commented:

I'll note that it was also necessary to change the default-backend image to an arm one:

kubectl --namespace nginx-ingress set image deployment/nginx-ingress-default-backend \
    nginx-ingress-default-backend=gcr.io/google_containers/defaultbackend-arm:1.5

@alokhom commented Jul 29, 2019

@MattJeanes I am facing this "does not have any active Endpoint" error. How do I resolve it?

ubuntu@master-node:~/charts$ kubectl logs pod/nginx-ingress-controller-6dc598747-kkwss -n arm
-------------------------------------------------------------------------------
NGINX Ingress controller
  Release:    dev
  Build:      git-c01effb07
  Repository: https://github.com/aledbf/ingress-nginx
-------------------------------------------------------------------------------

I0729 00:27:23.185541       6 flags.go:194] Watching for Ingress class: nginx
W0729 00:27:23.185920       6 flags.go:223] SSL certificate chain completion is disabled (--enable-ssl-chain-completion=false)
nginx version: openresty/1.15.8.1
W0729 00:27:23.192582       6 client_config.go:541] Neither --kubeconfig nor --master was specified.  Using the inClusterConfig.  This might not work.
I0729 00:27:23.192957       6 main.go:183] Creating API client for https://10.96.0.1:443
I0729 00:27:23.208651       6 main.go:227] Running in Kubernetes cluster version v1.15 (v1.15.1) - git (clean) commit 4485c6f18cee9a5d3c3b4e523bd27972b1b53892 - platform linux/arm64
I0729 00:27:23.213841       6 main.go:91] Validated arm/nginx-ingress-default-backend as the default backend.
I0729 00:27:25.009018       6 main.go:102] Created fake certificate with PemFileName: /etc/ingress-controller/ssl/default-fake-certificate.pem
E0729 00:27:25.010201       6 main.go:131] v1.15.1
W0729 00:27:25.039315       6 store.go:624] Unexpected error reading configuration configmap: configmaps "nginx-ingress-controller" not found
I0729 00:27:25.059015       6 nginx.go:277] Starting NGINX Ingress controller
I0729 00:27:26.259713       6 leaderelection.go:235] attempting to acquire leader lease  arm/ingress-controller-leader-nginx...
I0729 00:27:26.259697       6 nginx.go:321] Starting NGINX process
W0729 00:27:26.261827       6 controller.go:388] Service "arm/nginx-ingress-default-backend" does not have any active Endpoint
I0729 00:27:26.262112       6 controller.go:137] Configuration changes detected, backend reload required.
I0729 00:27:26.271849       6 leaderelection.go:245] successfully acquired lease arm/ingress-controller-leader-nginx
I0729 00:27:26.272342       6 status.go:86] new leader elected: nginx-ingress-controller-6dc598747-kkwss
I0729 00:27:26.373564       6 controller.go:153] Backend successfully reloaded.
I0729 00:27:26.373659       6 controller.go:162] Initial sync, sleeping for 1 second.
[29/Jul/2019:00:27:27 +0000]TCP200000.000
W0729 00:27:30.068659       6 controller.go:388] Service "arm/nginx-ingress-default-backend" does not have any active Endpoint
W0729 00:27:38.653588       6 controller.go:388] Service "arm/nginx-ingress-default-backend" does not have any active Endpoint
W0729 00:29:49.068996       6 controller.go:388] Service "arm/nginx-ingress-default-backend" does not have any active Endpoint
W0729 00:29:52.402469       6 controller.go:388] Service "arm/nginx-ingress-default-backend" does not have any active Endpoint
[29/Jul/2019:00:29:55 +0000]TCP200000.000

@MattJeanes commented:

@alokhom did you also follow the command above from @mylesagray about changing the default backend? The error seems to be saying that there are no active, ready pods behind the default-backend service, likely because they are not the arm versions and are failing to start up.
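
A quick way to confirm that (standard kubectl, using the namespace and service name from the log above):

kubectl --namespace arm get endpoints nginx-ingress-default-backend
kubectl --namespace arm get pods -o wide    # check the default-backend pod is Running on an arm node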

@sotiris84 commented:

Do we have an update on ingress-nginx on arm? I have the same issue where I cannot install ingress-nginx in K3S running on RPi 4 64bit. Thanks

@aledbf (Member, Author) commented Jan 7, 2020

@sotiris84 what do you mean?
The images are available:

quay.io/kubernetes-ingress-controller/nginx-ingress-controller-arm:0.26.2
quay.io/kubernetes-ingress-controller/nginx-ingress-controller-arm64:0.26.2

Edit: if you are using helm to install the ingress controller please check #4876

@aledbf (Member, Author) commented Jan 7, 2020

@sotiris84 I just added a comment in the k3s issue you opened about this issue

@bwolf commented Jan 12, 2020

@aledbf am I missing something? At least for 0.27.0 the images are not built for arm64:

 docker inspect quay.io/kubernetes-ingress-controller/nginx-ingress-controller-arm64:0.27.0 | grep -i arch
        "Architecture": "amd64",

In addition, I checked this by extracting the image and inspecting the nginx-ingress-controller binary:

docker create --name "tmp_$$" quay.io/kubernetes-ingress-controller/nginx-ingress-controller-arm64:0.27.0
docker export tmp_$$ | tar x
file nginx-ingress-controller

nginx-ingress-controller: ELF 64-bit LSB executable, x86-64, version 1 (SYSV), statically linked, Go BuildID=4v160R-gH72p274brcN7/RihjEsXrG1W9cclS9lK6/-UOOTkS-fursCSo1wj7j/AUexVOV2a90YCfCAp8BG, stripped

Besides that (looking at the filesystem of the container), I see that the image contains header files (opentracing, msgpack, jaegertracing), timezones, the terminfo database, and many Lua files. I guess its size could be further reduced by removing most of that?

@aledbf (Member, Author) commented Jan 12, 2020

@bwolf thank you for finding this issue. Please download the images again.

I see that the image contains header files (opentracing, msgpack, jaegertracing), timezones, terminfo database, many of Lua files.

Please no. The ingress controller runs as a non-root user, which prevents installing packages at runtime. Also, the Lua files are required by the ingress controller.

About the size, have you seen that we reduced it by half in this release?

@aledbf (Member, Author) commented Jan 12, 2020

Even with the binary compiled correctly, docker inspect reports a different platform:

docker run -u root -it quay.io/kubernetes-ingress-controller/nginx-ingress-controller-arm64:0.27.0 bash -c "apk add file && file /nginx-ingress-controller"
fetch http://dl-cdn.alpinelinux.org/alpine/v3.11/main/x86_64/APKINDEX.tar.gz
fetch http://dl-cdn.alpinelinux.org/alpine/v3.11/community/x86_64/APKINDEX.tar.gz
(1/1) Installing file (5.37-r1)
Executing busybox-1.31.1-r8.trigger
OK: 31 MiB in 42 packages
/nginx-ingress-controller: ELF 64-bit LSB executable, ARM aarch64, version 1 (SYSV), statically linked, Go BuildID=u0YiO2raCmwDqqCVmEfi/PDlTYe-zkp65POeUplO9/HoJl5D0RY34c3vtQIgk2/78u-G8r1VnJZWQQ2jgoU, stripped

docker inspect quay.io/kubernetes-ingress-controller/nginx-ingress-controller-arm64:0.27.0 | grep Architecture
        "Architecture": "amd64",

@alexellis commented:

Best to use the manifest query command in docker instead. See docker manifest --help and https://docs.docker.com/engine/reference/commandline/manifest/
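
For example, something along these lines (requires the experimental docker CLI manifest commands); for a multi-arch manifest list it prints one platform entry per architecture:

docker manifest inspect quay.io/kubernetes-ingress-controller/nginx-ingress-controller-arm64:0.27.0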

@sotiris84 commented Jan 12, 2020

@aledbf

@sotiris84 what do you mean?
The images are available:

quay.io/kubernetes-ingress-controller/nginx-ingress-controller-arm:0.26.2
quay.io/kubernetes-ingress-controller/nginx-ingress-controller-arm64:0.26.2

Edit: if you are using helm to install the ingress controller please check #4876

Thank you very much for the reply. That was before I figured out I needed to add the -arm64 suffix to the image name. Now all is good, thanks. However, it would be nice if it could detect the CPU architecture and choose the correct image.

@sotiris84 commented Jan 12, 2020

@aledbf

@sotiris84 I just added a comment in the k3s issue you opened about this issue

Thank you very much. Much appreciated!

@aledbf (Member, Author) commented Jan 12, 2020

@sotiris84 the issue here is related to the lack of support for V2 manifest lists in quay.io. We already have a PR in the project to fix this: #4271
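
For reference, once the registry supports manifest lists, publishing a multi-arch image with the docker CLI looks roughly like this (illustrative tags, experimental CLI features assumed):

docker manifest create example.org/ingress-nginx:0.27.0 \
  example.org/ingress-nginx-amd64:0.27.0 \
  example.org/ingress-nginx-arm:0.27.0 \
  example.org/ingress-nginx-arm64:0.27.0
docker manifest push example.org/ingress-nginx:0.27.0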

@pisymbol commented:

Can some kind soul explain to me how to install this? I am following the nginx official instructions but now have to apply the deployment YAML. What image do I give for arm?

@alexellis commented:

Yes, try k3sup: "k3sup app install nginx-ingress" works on any Kubernetes cluster. You can also install with helm, but https://k3sup.dev automates that, including on arm.

@pisymbol commented:

@alexellis I can use this on a non-k3sup cluster?

@alexellis commented:

I did just say that. Read the link please.

@pisymbol commented:

I'm sorry. I need to wake up! Again, apologies. I will RTFM.

Labels
approved Indicates a PR has been approved by an approver from all required OWNERS files. cncf-cla: yes Indicates the PR's author has signed the CNCF CLA. lgtm "Looks good to me", indicates that a PR is ready to be merged. size/L Denotes a PR that changes 100-499 lines, ignoring generated files.