
Issue using private GCR registry with _json_key #1610

Closed
Dirbaio opened this issue Apr 3, 2020 · 5 comments

Dirbaio (Contributor) commented Apr 3, 2020

Version:
k3s version v1.17.4+k3s1 (3eee8ac)

K3s arguments:
/usr/local/bin/k3s server --no-deploy traefik --flannel-backend=host-gw --no-deploy servicelb

Describe the bug
I am trying to run pods using images from a private GCR registry. The easiest way to authenticate is to use a service account key as the password. The problem is that the service account key is a JSON blob, which contains double-quote (") characters.

To Reproduce

  • Set this in /etc/rancher/k3s/registries.yaml. (actual key censored)
    configs:
      eu.gcr.io:
        auth:
          username: _json_key
          password: '{  "type": "service_account",  "project_id": "XXXX",  "private_key_id": "XXXX",  ...}'
  • Restart k3s

Expected behavior

It works

Actual behavior

It doesn't work: containerd crashes because /var/lib/rancher/k3s/agent/etc/containerd/config.toml is generated with the JSON password's inner quotes unescaped:

    [plugins.cri.registry.configs."eu.gcr.io".auth]
    username = "_json_key"
    password = "{  "type": "service_account",  "project_id": "xxx",  "private_key_id": "xxx", ... }"

nandeepmannava commented Apr 9, 2020

Hello,

I am having the same issue. I am trying to set up a private Google Container Registry (GCR) with _json_key. I used a similar registries.yaml file and restarted the cluster using systemctl restart k3s.

After that I ran crictl pull gcr.io/project/image:latest and got this error:

failed to connect: failed to connect, make sure you are running as root and the runtime has been started: context deadline exceeded

Running kubectl cluster-info after restarting the service also returns an error:

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
The connection to the server 127.0.0.1:6443 was refused - did you specify the right host or port?

Please help me with this.

@ngerstle-cognite

This is a blocker on my end as well; I'm using k3d with images pulled from a private GCR, and I haven't had luck with alternative mechanisms.

@nandeepmannava

I am having the same issue and have been unable to resolve it; I have been struggling with this for a long time.

Stono commented Oct 8, 2020

We still hit this issue too: the JSON key is being quoted with " rather than ', so we can't use GCR.

We've tried v1.16 and the latest release.

brandond (Member) commented Oct 8, 2020

Have you tried modifying the containerd config.toml template? The documentation is available here: https://rancher.com/docs/k3s/latest/en/advanced/#configuring-containerd

You might try modifying the line at https://github.com/rancher/k3s/blob/master/pkg/agent/templates/templates.go#L61 to read

{{ if $v.Auth.Password }}password = '{{ printf "%s" $v.Auth.Password }}'{{end}}


5 participants