Helm v3 helmfile apply issue #877

Closed
zen4ever opened this issue Sep 27, 2019 · 17 comments

@zen4ever

Running helmfile apply results in an error when using Helm v3. I have HELMFILE_HELM3=1 set.
My helm version is v3.0.0-beta.3.
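A rough sketch of the invocation that produces the error output below, assuming the environment variable is exported in the same shell:

export HELMFILE_HELM3=1   # tells helmfile to use the Helm v3 binary, as mentioned above
helmfile apply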

Comparing release=ingx-to-sd, chart=stable/prometheus-to-sd
Comparing release=ingx, chart=stable/nginx-ingress
in ./helmfile.yaml: in .helmfiles[0]: in ../../../helmfiles/nginx-ingress/helmfile.yaml: 2 errors:
err 0: failed processing release ingx: helm exited with status 1:
  Error: unknown flag: --reset-values
err 1: failed processing release ingx-to-sd: helm exited with status 1:
  Error: unknown flag: --reset-values
@mumoshu
Collaborator

mumoshu commented Sep 27, 2019

@zen4ever Hey! Thanks for reporting, but I'm unable to reproduce this.

My setup is:

Would you mind sharing more details on your setup as well, and comparing it with mine?

@zen4ever
Author

@mumoshu do I need to have helm-diff installed for apply to work?

@zen4ever
Author

My helm is version.BuildInfo{Version:"v3.0.0-beta.3", GitCommit:"5cb923eecbe80d1ad76399aee234717c11931d9a", GitTreeState:"clean", GoVersion:"go1.12.9"}
My helmfile version is v0.85.2.
My plugins

gcs 	0.3.0  	Manage repositories on Google Cloud Storage

@mumoshu
Collaborator

mumoshu commented Sep 28, 2019

do I need to have helm-diff installed for apply to work?

Yep, you definitely need it because helmfile apply uses helm diff to determine if it needs to run upgrade --install in the first place.
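Roughly, per release, the apply flow looks like this (a sketch reconstructed from the debug log later in this thread; RELEASE and CHART are placeholders and the exact flags vary):

helm diff upgrade --reset-values --allow-unreleased RELEASE CHART --detailed-exitcode
# exit code 0: no changes, so the upgrade is skipped
# exit code 2: changes detected, so helmfile follows up with
helm upgrade --install RELEASE CHART ...
# any other exit code is treated as an error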

Also, 3.0.0-beta.3 won't work as it misses helm/helm#6385, which is required for helm-diff.

Probably running helmfile with --log-level=debug will give you more context that helps diagnose the cause!

@zen4ever
Author

Currently running 3.0.0-beta.5 and helm-diff built from master with the PR mentioned above merged. helmfile apply is exiting with an error because there are changes. My understanding is that it should actually run helmfile sync if there are changes discovered by helm-diff.

$ helmfile --log-level=debug apply
processing file "helmfile.yaml" in directory "."
first-pass rendering starting for "helmfile.yaml.part.0": inherited=&{default map[] map[]}, overrode=<nil>
first-pass uses: &{default map[] map[]}
first-pass produced: &{default map[] map[]}
first-pass rendering result of "helmfile.yaml.part.0": {default map[] map[]}
vals:
map[]
defaultVals:[]
second-pass rendering result of "helmfile.yaml.part.0":
 0: helmfiles:
 1:   - path: "../../../helmfiles/nginx-ingress/helmfile.yaml"
 2:     values:
 3:       - nginx_ingress_ip.yaml
 4:       - nginxIngressRelease: ingx
 5:

merged environment: &{default map[] map[]}
processing file "helmfile.yaml" in directory "../../../helmfiles/nginx-ingress"
changing working directory to "infrastructure-v2/gke/helmfiles/nginx-ingress"
envvals_loader: loaded nginx_ingress_ip.yaml:map[nginxIngressIP:35.223.250.53]
first-pass rendering starting for "helmfile.yaml.part.0": inherited=<nil>, overrode=&{default map[nginxIngressIP:35.223.250.53 nginxIngressRelease:ingx] map[]}
first-pass uses: &{default map[nginxIngressIP:35.223.250.53 nginxIngressRelease:ingx] map[]}
replaced <no value>s to workaround https://github.com/golang/go/issues/24963 to address https://github.com/roboll/helmfile/issues/553:
  strings.Join({
  	"releases:",
- 	`  - name: "<no value>"`,
+ 	`  - name: ""`,
  	"    chart: stable/nginx-ingress",
  	`    version: "~1.24"`,
  	... // 2 identical lines
  	"      - controller:",
  	"          service:",
- 	`            loadBalancerIP: "<no value>"`,
- 	`  - name: "<no value>-to-sd"`,
+ 	`            loadBalancerIP: ""`,
+ 	`  - name: "-to-sd"`,
  	"    chart: stable/prometheus-to-sd",
  	"    values:",
  	"      - metricsSources:",
- 	`          nginx-ingress-metrics: "http://<no value>-nginx-ingress-controller-metrics:9913"`,
+ 	`          nginx-ingress-metrics: "http://-nginx-ingress-controller-metrics:9913"`,
  	"    atomic: true",
  	"",
  }, "\n")

first-pass produced: &{default map[nginxIngressIP:35.223.250.53 nginxIngressRelease:ingx] map[]}
first-pass rendering result of "helmfile.yaml.part.0": {default map[nginxIngressIP:35.223.250.53 nginxIngressRelease:ingx] map[]}
vals:
map[nginxIngressIP:35.223.250.53 nginxIngressRelease:ingx]
defaultVals:[]
second-pass rendering result of "helmfile.yaml.part.0":
 0: releases:
 1:   - name: "ingx"
 2:     chart: stable/nginx-ingress
 3:     version: "~1.24"
 4:     values:
 5:       - ./nginx-ingress-values.yaml
 6:       - controller:
 7:           service:
 8:             loadBalancerIP: "35.223.250.53"
 9:   - name: "ingx-to-sd"
10:     chart: stable/prometheus-to-sd
11:     values:
12:       - metricsSources:
13:           nginx-ingress-metrics: "http://ingx-nginx-ingress-controller-metrics:9913"
14:     atomic: true
15:

merged environment: &{default map[nginxIngressIP:35.223.250.53 nginxIngressRelease:ingx] map[]}
worker 2/2 started
worker 1/2 started
successfully generated the value file at nginx-ingress-values.yaml. produced:
rbac:
  create: true
controller:
  service:
    externalTrafficPolicy: Local
    omitClusterIP: true
  metrics:
    enabled: true
    service:
      omitClusterIP: true
  autoscaling:
    enabled: true
    minReplicas: 2
    maxReplicas: 100
    targetCPUUtilizationPercentage: "70"
    targetMemoryUtilizationPercentage: "70"
  config:
    error-log-level: "info"
    keep-alive: "620"
    keep-alive-requests: "2000"
    client-header-buffer-size: "64k"
    large-client-header-buffers: "4 64k"
    proxy-connect-timeout: "300"
    proxy-set-headers: ""
    http-snippet: |
      lingering_close always;
      send_timeout 450s;
    server-snippet: "set_by_lua $request_headers '\nlocal h = ngx.req.get_headers()\nlocal request_headers_all = \"\"\nfor k, v in pairs(h) do\nrequest_headers_all = request_headers_all .. \"\"..k..\": \"..v..\";\"\nend\nreturn request_headers_all\n';"
    location-snippet: ""
    proxy-read-timeout: "300"
    proxy-send-timeout: "300"
    upstream-keepalive-timeout: "900"
    upstream-keepalive-requests: "200"
    client-body-timeout: "300"
    proxy-body-size: "100m"
    proxy-buffers-number: "4"
    proxy-buffer-size: "128k"
    proxy-buffering: "off"
    http2-max-field-size: "16k"
    http2-max-header-size: "128k"
    enable-underscores-in-headers: "true"
    log-format-escape-json: "true"
    log-format-upstream: '{"proxyUpstreamName": "$proxy_upstream_name", "proxyUpstreamAddr": "$upstream_addr", "requestMethod": "$request_method", "requestUrl": "$scheme://$http_host$request_uri", "status": $status, "requestSize": "$request_length", "responseSize": "$upstream_response_length", "userAgent": "$http_user_agent", "remoteIp": "$remote_addr", "referer": "$http_referer", "latency": "$upstream_response_time", "request_time": "$request_time", "request_id": "$req_id", "request": "$request", "ssl_protocol": "$ssl_protocol", "ssl_cipher": "$ssl_cipher", "request_headers": "$request_headers"}'
defaultBackend:
  service:
    omitClusterIP: true
    atomic: true

worker 2/2 finished
worker 1/2 finished
worker 2/2 started
worker 1/2 started
Comparing release=ingx-to-sd, chart=stable/prometheus-to-sd
Comparing release=ingx, chart=stable/nginx-ingress
exec: helm diff upgrade --reset-values --allow-unreleased ingx stable/nginx-ingress --version ~1.24 --values /var/folders/4v/b2yzq_0d6qs0ddvh6m6prd4c0000gq/T/values086405609 --values /var/folders/4v/b2yzq_0d6qs0ddvh6m6prd4c0000gq/T/values035351092 --detailed-exitcode
exec: helm diff upgrade --reset-values --allow-unreleased ingx-to-sd stable/prometheus-to-sd --values /var/folders/4v/b2yzq_0d6qs0ddvh6m6prd4c0000gq/T/values486428163 --detailed-exitcode
exec: helm diff upgrade --reset-values --allow-unreleased ingx-to-sd stable/prometheus-to-sd --values /var/folders/4v/b2yzq_0d6qs0ddvh6m6prd4c0000gq/T/values486428163 --detailed-exitcode:
worker 1/2 finished
exec: helm diff upgrade --reset-values --allow-unreleased ingx stable/nginx-ingress --version ~1.24 --values /var/folders/4v/b2yzq_0d6qs0ddvh6m6prd4c0000gq/T/values086405609 --values /var/folders/4v/b2yzq_0d6qs0ddvh6m6prd4c0000gq/T/values035351092 --detailed-exitcode: default, ingx-nginx-ingress-default-backend, Service (v1) has changed:
  # Source: nginx-ingress/templates/default-backend-service.yaml
  apiVersion: v1
  kind: Service
  metadata:
    labels:
      app: nginx-ingress
      chart: nginx-ingress-1.24.2
      component: "default-backend"
      heritage: Helm
      release: ingx
    name: ingx-nginx-ingress-default-backend
  spec:
+   clusterIP: ""
    ports:
      - name: http
        port: 80
        protocol: TCP
        targetPort: http
    selector:
      app: nginx-ingress
      component: "default-backend"
      release: ingx
    type: "ClusterIP"
default, ingx-nginx-ingress-controller, Deployment (apps) has changed:
  # Source: nginx-ingress/templates/controller-deployment.yaml
  apiVersion: apps/v1
  kind: Deployment
  metadata:
    labels:
      app: nginx-ingress
      chart: nginx-ingress-1.24.2
      component: "controller"
      heritage: Helm
      release: ingx
    name: ingx-nginx-ingress-controller
  spec:
    selector:
      matchLabels:
        app: nginx-ingress
        release: ingx
    replicas: 1
    revisionHistoryLimit: 10
    strategy:
      {}
    minReadySeconds: 0
    template:
      metadata:
        labels:
          app: nginx-ingress
          component: "controller"
          release: ingx
      spec:
        dnsPolicy: ClusterFirst
        containers:
          - name: nginx-ingress-controller
            image: "quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.26.1"
            imagePullPolicy: "IfNotPresent"
            args:
              - /nginx-ingress-controller
              - --default-backend-service=default/ingx-nginx-ingress-default-backend
              - --election-id=ingress-controller-leader
              - --ingress-class=nginx
              - --configmap=default/ingx-nginx-ingress-controller
            securityContext:
              capabilities:
                  drop:
                  - ALL
                  add:
                  - NET_BIND_SERVICE
              runAsUser: 33
              allowPrivilegeEscalation: true
            env:
              - name: POD_NAME
                valueFrom:
                  fieldRef:
                    fieldPath: metadata.name
              - name: POD_NAMESPACE
                valueFrom:
                  fieldRef:
                    fieldPath: metadata.namespace
            livenessProbe:
              httpGet:
                path: /healthz
                port: 10254
                scheme: HTTP
              initialDelaySeconds: 10
              periodSeconds: 10
              timeoutSeconds: 1
              successThreshold: 1
              failureThreshold: 3
            ports:
              - name: http
                containerPort: 80
                protocol: TCP
              - name: https
                containerPort: 443
                protocol: TCP
-             - name: metrics
-               containerPort: 10254
-               protocol: TCP
            readinessProbe:
              httpGet:
                path: /healthz
                port: 10254
                scheme: HTTP
              initialDelaySeconds: 10
              periodSeconds: 10
              timeoutSeconds: 1
              successThreshold: 1
              failureThreshold: 3
            resources:
              {}
        hostNetwork: false
        serviceAccountName: ingx-nginx-ingress
        terminationGracePeriodSeconds: 60
default, ingx-nginx-ingress-controller, HorizontalPodAutoscaler (autoscaling) has been removed:
- # Source: nginx-ingress/templates/controller-hpa.yaml
- apiVersion: autoscaling/v2beta1
- kind: HorizontalPodAutoscaler
- metadata:
-   labels:
-     app: nginx-ingress
-     chart: nginx-ingress-1.24.2
-     component: "controller"
-     heritage: Helm
-     release: ingx
-   name: ingx-nginx-ingress-controller
- spec:
-   scaleTargetRef:
-     apiVersion: apps/v1
-     kind: Deployment
-     name: ingx-nginx-ingress-controller
-   minReplicas: 2
-   maxReplicas: 100
-   metrics:
-     - type: Resource
-       resource:
-         name: cpu
-         targetAverageUtilization: 70
-     - type: Resource
-       resource:
-         name: memory
-         targetAverageUtilization: 70
+
default, ingx-nginx-ingress-controller-metrics, Service (v1) has been removed:
- # Source: nginx-ingress/templates/controller-metrics-service.yaml
- apiVersion: v1
- kind: Service
- metadata:
-   labels:
-     app: nginx-ingress
-     chart: nginx-ingress-1.24.2
-     component: "controller"
-     heritage: Helm
-     release: ingx
-   name: ingx-nginx-ingress-controller-metrics
- spec:
-   ports:
-     - name: metrics
-       port: 9913
-       targetPort: metrics
-   selector:
-     app: nginx-ingress
-     component: "controller"
-     release: ingx
-   type: "ClusterIP"
+
default, ingx-nginx-ingress-controller, Service (v1) has changed:
  # Source: nginx-ingress/templates/controller-service.yaml
  apiVersion: v1
  kind: Service
  metadata:
    labels:
      app: nginx-ingress
      chart: nginx-ingress-1.24.2
      component: "controller"
      heritage: Helm
      release: ingx
    name: ingx-nginx-ingress-controller
  spec:
-   loadBalancerIP: "35.223.250.53"
-   externalTrafficPolicy: "Local"
+   clusterIP: ""
    ports:
      - name: http
        port: 80
        protocol: TCP
        targetPort: http
      - name: https
        port: 443
        protocol: TCP
        targetPort: https
    selector:
      app: nginx-ingress
      component: "controller"
      release: ingx
    type: "LoadBalancer"
default, ingx-nginx-ingress, ServiceAccount (v1) has been removed:
- # Source: nginx-ingress/templates/controller-serviceaccount.yaml
- apiVersion: v1
- kind: ServiceAccount
- metadata:
-   labels:
-     app: nginx-ingress
-     chart: nginx-ingress-1.24.2
-     heritage: Helm
-     release: ingx
-   name: ingx-nginx-ingress
+
identified at least one change, exiting with non-zero exit code (detailed-exitcode parameter enabled)

worker 2/2 finished
err: release "ingx" in "helmfile.yaml" failed: failed processing release ingx: helm exited with status 1:
  Error: identified at least one change, exiting with non-zero exit code (detailed-exitcode parameter enabled)
  Error: plugin "diff" exited with error
changing working directory back to "infrastructure-v2/gke/infrastructure/devops-tools-dev/devops-tools-dev"
in ./helmfile.yaml: in .helmfiles[0]: in ../../../helmfiles/nginx-ingress/helmfile.yaml: failed processing release ingx: helm exited with status 1:
  Error: identified at least one change, exiting with non-zero exit code (detailed-exitcode parameter enabled)
  Error: plugin "diff" exited with error

@zen4ever
Author

Another issue I have with how helmfile currently handles the diff with helm v3 is that there are actually no changes in this chart. When I run helm diff upgrade ingx stable/nginx-ingress -f ../../../helmfiles/nginx-ingress/nginx-ingress-values.yaml it reports no changes. But probably because the --reset-values flag is added by helmfile, helmfile diff reports changes.
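
For reference, a side-by-side sketch of the two invocations being compared (release, chart, and values path are the ones from this thread; the second adds the flags helmfile passes, as seen in the debug log above):

# reports no changes
helm diff upgrade ingx stable/nginx-ingress \
  -f ../../../helmfiles/nginx-ingress/nginx-ingress-values.yaml

# with the extra flags helmfile adds, changes are reported
helm diff upgrade --reset-values --allow-unreleased ingx stable/nginx-ingress \
  -f ../../../helmfiles/nginx-ingress/nginx-ingress-values.yaml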

@mumoshu
Collaborator

mumoshu commented Oct 24, 2019

@zen4ever Hey! Thanks for testing it out and the detailed feedback.

My understanding is that it should actually run helmfile sync if there are changes discovered by helm-diff.

You're absolutely correct. In this case helm-diff should return 2 via helm v3 so that helmfile can proceed to apply the changes. Maybe the latest helm-diff is returning an incorrect status, or helm 3.0.0-beta.5 is swallowing the status returned by helm-diff? For the latter, though, I thought I'd fixed it on the helm side: helm/helm#6384

When I run helm diff upgrade ingx stable/nginx-ingress -f ../../../helmfiles/nginx-ingress/nginx-ingress-values.yaml it reports no changes. But probably because the --reset-values flag is added by helmfile, helmfile diff reports changes.

Could you please let me confirm one thing: is it that helmfile diff reports changes even though helm-diff doesn't report any?

@zen4ever
Author

Answering the second question: when I run helm diff upgrade ingx stable/nginx-ingress -f ../../../helmfiles/nginx-ingress/nginx-ingress-values.yaml --set controller.service.loadBalancerIP=35.223.250.53 it returns nothing. However, when I add --reset-values and --allow-unreleased, it returns the following:

- apiVersion: v1
- kind: Service
- metadata:
-   labels:
-     app: nginx-ingress
-     chart: nginx-ingress-1.24.4
-     component: "controller"
-     heritage: Helm
-     release: ingx
-   name: ingx-nginx-ingress-controller-metrics
- spec:
-   ports:
-     - name: metrics
-       port: 9913
-       targetPort: metrics
-   selector:
-     app: nginx-ingress
-     component: "controller"
-     release: ingx
-   type: "ClusterIP"
+
default, ingx-nginx-ingress, ServiceAccount (v1) has been removed:
- # Source: nginx-ingress/templates/controller-serviceaccount.yaml
- apiVersion: v1
- kind: ServiceAccount
- metadata:
-   labels:
-     app: nginx-ingress
-     chart: nginx-ingress-1.24.4
-     heritage: Helm
-     release: ingx
-   name: ingx-nginx-ingress
+
default, ingx-nginx-ingress-controller, Deployment (apps) has changed:
  # Source: nginx-ingress/templates/controller-deployment.yaml
  apiVersion: apps/v1
  kind: Deployment
  metadata:
    labels:
      app: nginx-ingress
      chart: nginx-ingress-1.24.4
      component: "controller"
      heritage: Helm
      release: ingx
    name: ingx-nginx-ingress-controller
  spec:
    selector:
      matchLabels:
        app: nginx-ingress
        release: ingx
    replicas: 1
    revisionHistoryLimit: 10
    strategy:
      {}
    minReadySeconds: 0
    template:
      metadata:
        labels:
          app: nginx-ingress
          component: "controller"
          release: ingx
      spec:
        dnsPolicy: ClusterFirst
        containers:
          - name: nginx-ingress-controller
            image: "quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.26.1"
            imagePullPolicy: "IfNotPresent"
            args:
              - /nginx-ingress-controller
              - --default-backend-service=default/ingx-nginx-ingress-default-backend
              - --election-id=ingress-controller-leader
              - --ingress-class=nginx
              - --configmap=default/ingx-nginx-ingress-controller
            securityContext:
              capabilities:
                  drop:
                  - ALL
                  add:
                  - NET_BIND_SERVICE
              runAsUser: 33
              allowPrivilegeEscalation: true
            env:
              - name: POD_NAME
                valueFrom:
                  fieldRef:
                    fieldPath: metadata.name
              - name: POD_NAMESPACE
                valueFrom:
                  fieldRef:
                    fieldPath: metadata.namespace
            livenessProbe:
              httpGet:
                path: /healthz
                port: 10254
                scheme: HTTP
              initialDelaySeconds: 10
              periodSeconds: 10
              timeoutSeconds: 1
              successThreshold: 1
              failureThreshold: 3
            ports:
              - name: http
                containerPort: 80
                protocol: TCP
              - name: https
                containerPort: 443
                protocol: TCP
-             - name: metrics
-               containerPort: 10254
-               protocol: TCP
            readinessProbe:
              httpGet:
                path: /healthz
                port: 10254
                scheme: HTTP
              initialDelaySeconds: 10
              periodSeconds: 10
              timeoutSeconds: 1
              successThreshold: 1
              failureThreshold: 3
            resources:
              {}
        hostNetwork: false
        serviceAccountName: ingx-nginx-ingress
        terminationGracePeriodSeconds: 60
default, ingx-nginx-ingress-controller, HorizontalPodAutoscaler (autoscaling) has been removed:
- # Source: nginx-ingress/templates/controller-hpa.yaml
- apiVersion: autoscaling/v2beta1
- kind: HorizontalPodAutoscaler
- metadata:
-   labels:
-     app: nginx-ingress
-     chart: nginx-ingress-1.24.4
-     component: "controller"
-     heritage: Helm
-     release: ingx
-   name: ingx-nginx-ingress-controller
- spec:
-   scaleTargetRef:
-     apiVersion:
-     kind: Deployment
-     name: ingx-nginx-ingress-controller
-   minReplicas: 2
-   maxReplicas: 100
-   metrics:
-     - type: Resource
-       resource:
-         name: cpu
-         targetAverageUtilization: 70
-     - type: Resource
-       resource:
-         name: memory
-         targetAverageUtilization: 70
+
default, ingx-nginx-ingress-controller, Service (v1) has changed:
  # Source: nginx-ingress/templates/controller-service.yaml
  apiVersion: v1
  kind: Service
  metadata:
    labels:
      app: nginx-ingress
      chart: nginx-ingress-1.24.4
      component: "controller"
      heritage: Helm
      release: ingx
    name: ingx-nginx-ingress-controller
  spec:
-   loadBalancerIP: "35.223.250.53"
-   externalTrafficPolicy: "Local"
+   clusterIP: ""
    ports:
      - name: http
        port: 80
        protocol: TCP
        targetPort: http
      - name: https
        port: 443
        protocol: TCP
        targetPort: https
    selector:
      app: nginx-ingress
      component: "controller"
      release: ingx
    type: "LoadBalancer"
default, ingx-nginx-ingress-default-backend, Service (v1) has changed:
  # Source: nginx-ingress/templates/default-backend-service.yaml
  apiVersion: v1

This is the same output I get from helmfile diff. I think there could be some issue with the helm-diff plugin, since it only recently started to support helm v3. It seems like --reset-values somehow prevents values from being passed to the chart.

This is the behavior I get when I exclude any values:

helm diff upgrade --reset-values ingx stable/nginx-ingress

default, ingx-nginx-ingress-controller, Deployment (apps) has changed:
  # Source: nginx-ingress/templates/controller-deployment.yaml
  apiVersion: apps/v1
  kind: Deployment
  metadata:
    labels:
      app: nginx-ingress
      chart: nginx-ingress-1.24.4
      component: "controller"
      heritage: Helm
      release: ingx
    name: ingx-nginx-ingress-controller
  spec:
    selector:
      matchLabels:
        app: nginx-ingress
        release: ingx
    replicas: 1
    revisionHistoryLimit: 10
    strategy:
      {}
    minReadySeconds: 0
    template:
      metadata:
        labels:
          app: nginx-ingress
          component: "controller"
          release: ingx
      spec:
        dnsPolicy: ClusterFirst
        containers:
          - name: nginx-ingress-controller
            image: "quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.26.1"
            imagePullPolicy: "IfNotPresent"
            args:
              - /nginx-ingress-controller
              - --default-backend-service=default/ingx-nginx-ingress-default-backend
              - --election-id=ingress-controller-leader
              - --ingress-class=nginx
              - --configmap=default/ingx-nginx-ingress-controller
            securityContext:
              capabilities:
                  drop:
                  - ALL
                  add:
                  - NET_BIND_SERVICE
              runAsUser: 33
              allowPrivilegeEscalation: true
            env:
              - name: POD_NAME
                valueFrom:
                  fieldRef:
                    fieldPath: metadata.name
              - name: POD_NAMESPACE
                valueFrom:
                  fieldRef:
                    fieldPath: metadata.namespace
            livenessProbe:
              httpGet:
                path: /healthz
                port: 10254
                scheme: HTTP
              initialDelaySeconds: 10
              periodSeconds: 10
              timeoutSeconds: 1
              successThreshold: 1
              failureThreshold: 3
            ports:
              - name: http
                containerPort: 80
                protocol: TCP
              - name: https
                containerPort: 443
                protocol: TCP
-             - name: metrics
-               containerPort: 10254
-               protocol: TCP
            readinessProbe:
              httpGet:
                path: /healthz
                port: 10254
                scheme: HTTP
              initialDelaySeconds: 10
              periodSeconds: 10
              timeoutSeconds: 1
              successThreshold: 1
              failureThreshold: 3
            resources:
              {}
        hostNetwork: false
        serviceAccountName: ingx-nginx-ingress
        terminationGracePeriodSeconds: 60
default, ingx-nginx-ingress-default-backend, Service (v1) has changed:
  # Source: nginx-ingress/templates/default-backend-service.yaml
  apiVersion: v1
  kind: Service
  metadata:
    labels:
      app: nginx-ingress
      chart: nginx-ingress-1.24.4
      component: "default-backend"
      heritage: Helm
      release: ingx
    name: ingx-nginx-ingress-default-backend
  spec:
+   clusterIP: ""
    ports:
      - name: http
        port: 80
        protocol: TCP
        targetPort: http
    selector:
      app: nginx-ingress
      component: "default-backend"
      release: ingx
    type: "ClusterIP"
default, ingx-nginx-ingress, ServiceAccount (v1) has been removed:
- # Source: nginx-ingress/templates/controller-serviceaccount.yaml
- apiVersion: v1
- kind: ServiceAccount
- metadata:
-   labels:
-     app: nginx-ingress
-     chart: nginx-ingress-1.24.4
-     heritage: Helm
-     release: ingx
-   name: ingx-nginx-ingress
+
default, ingx-nginx-ingress-controller-metrics, Service (v1) has been removed:
- # Source: nginx-ingress/templates/controller-metrics-service.yaml
- apiVersion: v1
- kind: Service
- metadata:
-   labels:
-     app: nginx-ingress
-     chart: nginx-ingress-1.24.4
-     component: "controller"
-     heritage: Helm
-     release: ingx
-   name: ingx-nginx-ingress-controller-metrics
- spec:
-   ports:
-     - name: metrics
-       port: 9913
-       targetPort: metrics
-   selector:
-     app: nginx-ingress
-     component: "controller"
-     release: ingx
-   type: "ClusterIP"
+
default, ingx-nginx-ingress-controller, Service (v1) has changed:
  # Source: nginx-ingress/templates/controller-service.yaml
  apiVersion: v1
  kind: Service
  metadata:
    labels:
      app: nginx-ingress
      chart: nginx-ingress-1.24.4
      component: "controller"
      heritage: Helm
      release: ingx
    name: ingx-nginx-ingress-controller
  spec:
-   loadBalancerIP: "35.223.250.53"
-   externalTrafficPolicy: "Local"
+   clusterIP: ""
    ports:
      - name: http
        port: 80
        protocol: TCP
        targetPort: http
      - name: https
        port: 443
        protocol: TCP
        targetPort: https

@zen4ever
Author

Regarding the exit code: I'm testing it, and helm-diff is returning exit code 1 instead of 2. I'm currently using 3.0.0-rc.2 of helm-diff.

@zen4ever
Author

It appears that helm-diff does return 2; at least both error messages in the upgrade command return 2: https://github.com/databus23/helm-diff/blob/master/cmd/upgrade.go. So I think the issue is on the helm side, but I'm not that familiar with the codebase.

@mumoshu
Collaborator

mumoshu commented Oct 25, 2019

@zen4ever Thanks! Definitely something still seems to be broken in helm v3...

$ helm3b5 diff upgrade foo stable/envoy --set image.tag=foo --detailed-exitcode
...
Error: identified at least one change, exiting with non-zero exit code (detailed-exitcode parameter enabled)
Error: plugin "diff" exited with error

$ echo $?
1
$ HELM_BIN=~/bin/helm3b5 ~/Library/helm/plugins/helm-diff/bin/diff upgrade foo stable/envoy --set image.tag=foo --detailed-exitcode

$ echo $?
2

@mumoshu
Collaborator

mumoshu commented Oct 25, 2019

Okay so this might be a regression introduced via https://github.com/helm/helm/pull/6341/files#diff-1dcf80a24ef0460c3dab1363f093d31bR69

@mumoshu
Collaborator

mumoshu commented Oct 25, 2019

Opened helm/helm#6788

@mumoshu
Collaborator

mumoshu commented Oct 28, 2019

Update: The upstream fix has already been submitted at helm/helm#6790

@thomastaylor312 Thanks as always for your efforts!

Would you mind taking a look at the pull request submitted to helm: helm/helm#6790?

Here we're suffering from a regression that happened between helm 3.0.0-beta.4 and beta.5.

@mumoshu
Collaborator

mumoshu commented Oct 29, 2019

Thanks to the super responsive helm maintainers, helm/helm#6790 has already been merged.

It will mostly work with helm v3 built from the current master and the latest rc of helm-diff.

The remaining TODO is databus23/helm-diff#158.

@zen4ever
Author

Thanks for the update @mumoshu. Yes, I've discovered that limitation of helm template as well: when I was trying to create a Vertical Autoscaler resource, it was complaining about invalid API versions.

@zen4ever
Author

I've just tested with helm v3.0.0-rc.1 and helm-diff 3.0.0-rc.6 and everything works as expected. Thanks a lot @mumoshu!
