
[Azure] Using Service Principal credentials to pull private image in ACR #192

Closed
jmshal opened this issue May 9, 2018 · 4 comments

@jmshal

jmshal commented May 9, 2018


Environment summary

Provider (e.g. ACI, AWS Fargate, Hyper): ACI

Version (e.g. 0.1, 0.2-beta): microsoft/virtual-kubelet:latest (aka 0.2-beta-12)

K8s Master Info (e.g. AKS, ACS, Bare Metal, EKS): AKS

Install Method (e.g. Helm Chart): az aks install-connector [...]

Issue Details

When specifying a private image hosted on ACR (Azure Container Registry), the Azure AD Service Principal credentials are not passed to ACI. Private images work fine when the pods run on the AKS nodes, but not when they run through virtual-kubelet - I naively expected the same result regardless of where the underlying pod is running.

As a workaround I have created a secret (of type kubernetes.io/dockerconfigjson) which contains the Service Principal credentials needed to pull the private image from ACR (username/appId and password), and I point the pod spec at the secret via imagePullSecrets. I took a look through the provider code and noticed that it supports this feature (see below).

func (p *ACIProvider) getImagePullSecrets(pod *v1.Pod) ([]aci.ImageRegistryCredential, error) {
	ips := make([]aci.ImageRegistryCredential, 0, len(pod.Spec.ImagePullSecrets))
	for _, ref := range pod.Spec.ImagePullSecrets {
		secret, err := p.resourceManager.GetSecret(ref.Name, pod.Namespace)
		if err != nil {
			return ips, err
		}
		if secret == nil {
			return nil, fmt.Errorf("error getting image pull secret")
		}
		// TODO: Check if secret type is v1.SecretTypeDockercfg and use DockerConfigKey instead of hardcoded value
		// TODO: Check if secret type is v1.SecretTypeDockerConfigJson and use DockerConfigJsonKey to determine if it's in json format
		// TODO: Return error if it's not one of these two types
		switch secret.Type {
		case v1.SecretTypeDockercfg:
			ips, err = readDockerCfgSecret(secret, ips)
		case v1.SecretTypeDockerConfigJson:
			ips, err = readDockerConfigJSONSecret(secret, ips)
		default:
			return nil, fmt.Errorf("image pull secret type is not one of kubernetes.io/dockercfg or kubernetes.io/dockerconfigjson")
		}
		if err != nil {
			return ips, err
		}
	}
	return ips, nil
}
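For anyone else wiring this up: the secret's .dockerconfigjson payload is just the standard docker config layout, a map of registry server to credentials. Here's a minimal, self-contained sketch of parsing that payload (the struct and function names are mine, not from the provider code - this just illustrates what the secret has to contain for the code above to recover a server/username/password triple):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// registryAuth mirrors one entry under "auths" in a
// kubernetes.io/dockerconfigjson secret payload.
type registryAuth struct {
	Username string `json:"username"`
	Password string `json:"password"`
}

// parseDockerConfigJSON decodes a .dockerconfigjson payload into a map of
// registry server -> credentials, which is roughly what the provider has to
// recover before building its registry credential values.
func parseDockerConfigJSON(data []byte) (map[string]registryAuth, error) {
	var cfg struct {
		Auths map[string]registryAuth `json:"auths"`
	}
	if err := json.Unmarshal(data, &cfg); err != nil {
		return nil, err
	}
	return cfg.Auths, nil
}

func main() {
	// Hypothetical payload: the SP appId as username, the SP password as
	// the registry password. Placeholders, not real credentials.
	payload := []byte(`{"auths":{"example.azurecr.io":{"username":"<appId>","password":"<password>"}}}`)
	auths, err := parseDockerConfigJSON(payload)
	if err != nil {
		panic(err)
	}
	for server, auth := range auths {
		fmt.Printf("server=%s username=%s\n", server, auth.Username)
	}
	// prints: server=example.azurecr.io username=<appId>
}
```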

I could not find any other references suggesting these credentials could be passed to ACI automatically. For the time being my workaround functions as expected - I'm just not entirely sure it's the best way to do it.

I also noticed that when specifying multiple auths (eg. https://example.azurecr.io and example.azurecr.io) in the docker config json secret, ACI throws an error. So this workaround requires one secret per container registry (which is probably a good idea anyway) if you're pulling from multiple ACRs. I'm not, but it's worth noting, because the code below suggests you can provide multiple auths, and I'm not sure that could ever work.

	for server, authConfig := range auths {
		ips = append(ips, aci.ImageRegistryCredential{
			Password: authConfig.Password,
			Server:   server,
			Username: authConfig.Username,
		})
	}
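If the provider wanted to tolerate both the URL form and the bare-hostname form of a server key, a hypothetical normalization step (none of these names exist in the codebase - this is just a sketch of the idea) could collapse them before building credentials:

```go
package main

import (
	"fmt"
	"strings"
)

// normalizeServer strips a URL scheme and trailing slash from a registry
// server key, so "https://example.azurecr.io" and "example.azurecr.io"
// collapse to the same entry. Hypothetical helper, not provider code.
func normalizeServer(server string) string {
	server = strings.TrimPrefix(server, "https://")
	server = strings.TrimPrefix(server, "http://")
	return strings.TrimSuffix(server, "/")
}

// dedupeServers keeps the first occurrence of each normalized server key.
func dedupeServers(servers []string) []string {
	seen := make(map[string]bool)
	var out []string
	for _, s := range servers {
		n := normalizeServer(s)
		if !seen[n] {
			seen[n] = true
			out = append(out, n)
		}
	}
	return out
}

func main() {
	fmt.Println(dedupeServers([]string{"https://example.azurecr.io", "example.azurecr.io"}))
	// prints: [example.azurecr.io]
}
```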

I hope that's enough info.

Repo Steps

  • Create an AD SP
  • Create an ACR (and assign the Reader role to the SP)
  • Push an image to the ACR
  • Create an AKS cluster (using the SP)
  • Install the ACI connector
  • Create a deployment/pod which targets ACI and uses the ACR image name (eg. example.azurecr.io/nginx:latest).
  • Check the ACI resource group's activity log for the error message (or check the ACI connector pod's logs)
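For reference, here's roughly what the pod spec for the workaround looks like. The secret name acr-secret is hypothetical, and the nodeSelector/toleration keys depend on how your connector was installed - adjust to whatever your virtual-kubelet node advertises:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx-aci
spec:
  containers:
  - name: nginx
    image: example.azurecr.io/nginx:latest
  imagePullSecrets:
  - name: acr-secret            # hypothetical secret name
  nodeSelector:
    type: virtual-kubelet       # selector key varies by install
  tolerations:
  - key: virtual-kubelet.io/provider
    operator: Exists
```

The secret itself can be created with kubectl create secret docker-registry acr-secret --docker-server=example.azurecr.io --docker-username=&lt;appId&gt; --docker-password=&lt;password&gt;.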

Example JSON output from the resource group's activity log:

{
    ...
    "properties": {
        "statusCode": "BadRequest",
        "serviceRequestId": "eastus:36880b71-f747-4192-81d0-1002aa458943",
        "statusMessage": "{\"error\":{\"code\":\"InaccessibleImage\",\"message\":\"The image '[REDACTED].azurecr.io/[REDACTED]:latest' in container group 'default-[REDACTED]-7fb697cdbc-rmqpw' is not accessible. Please check the image and registry credential.\"}}"
    },
    ...
}
@rbitia
Contributor

rbitia commented May 9, 2018

Thank you for all the detail! The AD service principal creds do not get passed down to ACI, so your workaround of creating a secret is the recommended way to pull from ACR. I agree you shouldn't need to think about whether your pod is running on ACI or in the cluster, so in the future we will figure out how to enable that workflow.

@jmshal
Author

jmshal commented May 9, 2018

Thanks @rbitia! It's great to have that clarification.

@CharlesCara

Could I make a request that you add a note to the existing virtual kubelet documentation about needing to add a kubernetes secret to the pod spec? Would have saved me a fair amount of time today.

@lachie83
Contributor

I believe this issue has been addressed. Closing.
