Take care of manifest config blob #10759
Conversation
@liggitt can you help review this?
Force-pushed from 8d7be51 to 626b136 (compare)
The pullthrough part appears to be working now. Verifying a pull of a local schema 2 image tagged into a different namespace next.
Force-pushed from 626b136 to dc82904 (compare)
And the second part (pull of a local schema 2 image tagged into a different namespace) seems to be working as well. Writing tests now.
func (r *pullthroughBlobStore) Get(ctx context.Context, dgst digest.Digest) ([]byte, error) {
	store, ok := r.digestToStore[dgst.String()]
	if !ok {
		data, getErr := r.BlobStore.Get(ctx, dgst)
you can just use err here
> you can just use err here
OK, I'll rename.
this needs tests (but I think you're already working on them)
@@ -130,6 +136,28 @@ func (r *pullthroughBlobStore) ServeBlob(ctx context.Context, w http.ResponseWri
	return nil
}

// Get attempts to fetch the requested blob by digest using a remote proxy store if necessary.
func (r *pullthroughBlobStore) Get(ctx context.Context, dgst digest.Digest) ([]byte, error) {
	store, ok := r.digestToStore[dgst.String()]
return early in the ok case and unindent the rest of the function?
if store, ok := r.digestToStore[dgst.String()]; ok {
return store.Get(ctx, desc.Digest)
}
pullthroughBlobStore instances are per-request, right? just making sure we don't have to worry about locking/races on the digestToStore map, or about long-term growth/retention
> return early in the ok case and unindent the rest of the function?
good suggestion, I'll rewrite
> pullthroughBlobStore instances are per-request, right?
That's right.
Force-pushed from dc82904 to e22f4b2 (compare)
@@ -553,6 +557,10 @@ func (r *repository) rememberLayersOfImage(image *imageapi.Image, cacheName stri
	for _, layer := range image.DockerImageLayers {
		r.cachedLayers.RememberDigest(digest.Digest(layer.Name), r.blobrepositorycachettl, cacheName)
	}
	// remember reference to manifest config as well for schema 2
	if image.DockerImageManifestMediaType == schema2.MediaTypeManifest && len(image.DockerImageMetadata.ID) > 0 {
should we only do this if len(image.DockerImageConfig) > 0?
> should we only do this if len(image.DockerImageConfig) > 0?
To make it consistent with the other conditions, I'll check image.DockerImageConfig instead; it is filled only for manifest schema 2.
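A minimal sketch of the agreed-upon guard, using a stand-in Image struct rather than the real imageapi types (the struct and function names here are hypothetical; only the media type constant matches the upstream value):

```go
package main

import "fmt"

// Image is a stand-in for imageapi.Image with only the fields discussed here.
type Image struct {
	DockerImageManifestMediaType string
	DockerImageConfig            string
}

// The real constant lives in the docker/distribution schema2 package.
const mediaTypeManifestV2S2 = "application/vnd.docker.distribution.manifest.v2+json"

// shouldRememberConfig applies the condition agreed on in the review:
// DockerImageConfig is only filled for manifest schema 2, so its presence
// is used as the guard instead of the metadata ID.
func shouldRememberConfig(img Image) bool {
	return img.DockerImageManifestMediaType == mediaTypeManifestV2S2 &&
		len(img.DockerImageConfig) > 0
}

func main() {
	v2 := Image{DockerImageManifestMediaType: mediaTypeManifestV2S2, DockerImageConfig: "{...}"}
	v1 := Image{DockerImageManifestMediaType: "application/vnd.docker.distribution.manifest.v1+json"}
	fmt.Println(shouldRememberConfig(v2), shouldRememberConfig(v1))
}
```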
Force-pushed from e22f4b2 to 5c2786a (compare)
The changes LGTM, but tests are something I'd like to see before getting this merged.
Make sure to match image having config name equal to the wanted blob digest. Remember the image and allow access to its layers. Signed-off-by: Michal Minář <[email protected]>
Manifest configs are fetched using Get() method from blob store. Pullthrough middleware needs to override it as well to allow for pulling manifest v2 schema 2 images from remote repositories. Signed-off-by: Michal Minář <[email protected]>
Force-pushed from 5c2786a to f8e5a5a (compare)
I've finally got the e2e tests working.
Force-pushed from d5dd631 to 43f6f95 (compare)
Flake #9624, extended conformance flake:
And
In https://ci.openshift.redhat.com/jenkins/job/test_pr_origin/8662/.
exit 1
fi
local podnamejs='{range .subsets[*]}{range .addresses[*]}{.targetRef.name},{end}{end}'
wait_for_command "oc get endpoints/docker-registry -o 'jsonpath=$podnamejs' --config='${ADMIN_KUBECONFIG}' | egrep -q '(^|,)${registrypod},'" $TIME_MIN
why is this necessary?
> why is this necessary?
Not sure, I was just reluctant to remove the existing guards.
In my local test, this passes in no time:
[INFO] Success running command: 'oc get rc/docker-registry-1 --template "{{ index .metadata.annotations \"openshift.io/deployment.phase\" }}" --config='/tmp/openshift/test-end-to-end//openshift.local.config/master/admin.kubeconfig' | grep Complete' after 23 seconds
[INFO] Waiting for command to finish: 'oc get endpoints/docker-registry -o 'jsonpath={range .subsets[*]}{range .addresses[*]}{.targetRef.name},{end}{end}' --config='/tmp/openshift/test-end-to-end//openshift.local.config/master/admin.kubeconfig' | egrep -q '(^|,)docker-registry-1-6mc18,''...
[INFO] Success running command: 'oc get endpoints/docker-registry -o 'jsonpath={range .subsets[*]}{range .addresses[*]}{.targetRef.name},{end}{end}' --config='/tmp/openshift/test-end-to-end//openshift.local.config/master/admin.kubeconfig' | egrep -q '(^|,)docker-registry-1-6mc18,'' after 0 seconds
[INFO] Waiting for command to finish: 'oc get 'pod/docker-registry-1-6mc18' -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}' --config='/tmp/openshift/test-end-to-end//openshift.local.config/master/admin.kubeconfig' | grep -qi true'...
[INFO] Success running command: 'oc get 'pod/docker-registry-1-6mc18' -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}' --config='/tmp/openshift/test-end-to-end//openshift.local.config/master/admin.kubeconfig' | grep -qi true' after 0 seconds
I meant all the additions to wait_for_registry, not just that line
> I meant all the additions to wait_for_registry, not just that line
That's because I'm redeploying the registry. Until now, wait_for_registry expected just rc version 1; now it needs to deal with multiple versions of rcs, pods, etc.
I think it is fine to make sure that we are testing against the latest version of registry (in case the registry is re-deployed).
Here's a log from one of the recent runs:
[INFO] Waiting for command to finish: 'oc get rc/docker-registry-2 --template "{{ index .metadata.annotations \"openshift.io/deployment.phase\" }}" --config='/tmp/openshift/test-end-to-end-docker//openshift.local.config/master/admin.kubeconfig' | grep Complete'...
Complete
[INFO] Success running command: 'oc get rc/docker-registry-2 --template "{{ index .metadata.annotations \"openshift.io/deployment.phase\" }}" --config='/tmp/openshift/test-end-to-end-docker//openshift.local.config/master/admin.kubeconfig' | grep Complete' after 16 seconds
[INFO] Waiting for command to finish: 'oc get endpoints/docker-registry -o 'jsonpath={range .subsets[*]}{range .addresses[*]}{.targetRef.name},{end}{end}' --config='/tmp/openshift/test-end-to-end-docker//openshift.local.config/master/admin.kubeconfig' | egrep -q '(^|,)docker-registry-2-n8o46,''...
[INFO] Success running command: 'oc get endpoints/docker-registry -o 'jsonpath={range .subsets[*]}{range .addresses[*]}{.targetRef.name},{end}{end}' --config='/tmp/openshift/test-end-to-end-docker//openshift.local.config/master/admin.kubeconfig' | egrep -q '(^|,)docker-registry-2-n8o46,'' after 1 seconds
[INFO] Waiting for command to finish: 'oc get pod -l deploymentconfig=docker-registry -o jsonpath='{.items[*].status.conditions[?(@.type=="Ready")].status}' --config='/tmp/openshift/test-end-to-end-docker//openshift.local.config/master/admin.kubeconfig' | grep -qi true'...
[INFO] Success running command: 'oc get pod -l deploymentconfig=docker-registry -o jsonpath='{.items[*].status.conditions[?(@.type=="Ready")].status}' --config='/tmp/openshift/test-end-to-end-docker//openshift.local.config/master/admin.kubeconfig' | grep -qi true' after 0 seconds
The oc get endpoints call needed an additional second to succeed after the deployment of dc-2 completed, which IMHO justifies the first two wait statements. I'm still not sure about the third (wait for a pod to become ready); I'd prefer to keep it, though.
And, finally, here's a justification for the 3rd wait:
[INFO] Success running command: 'oc get endpoints/docker-registry -o 'jsonpath={range .subsets[*]}{range .addresses[*]}{.targetRef.name},{end}{end}' --config='/tmp/openshift/test-end-to-end-docker//openshift.local.config/master/admin.kubeconfig' | egrep -q '(^|,)docker-registry-1-mwn9s,'' after 0 seconds
[INFO] Waiting for command to finish: 'oc get 'pod/docker-registry-1-mwn9s' -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}' --config='/tmp/openshift/test-end-to-end-docker//openshift.local.config/master/admin.kubeconfig' | grep -qi true'...
[INFO] Success running command: 'oc get 'pod/docker-registry-1-mwn9s' -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}' --config='/tmp/openshift/test-end-to-end-docker//openshift.local.config/master/admin.kubeconfig' | grep -qi true' after 1 seconds
Yeah, if we're waiting for the version, the waits make sense, I was referring to all the additions in the helper function
I've simplified the wait function a lot. Kudos to @liggitt for the suggestions.
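The simplified readiness condition can be sketched in plain shell. The values below are hypothetical stand-ins for what `oc get dc/docker-registry -o jsonpath=...` would print; only the comparison logic is the point:

```shell
#!/bin/sh
# Hypothetical stand-ins for the jsonpath outputs of `oc get dc/docker-registry`.
generation=2
status="2,1,1,1"   # observedGeneration,replicas,updatedReplicas,availableReplicas

# The rollout is considered done when the observed generation has caught up
# and exactly one replica is updated and available.
if [ "$status" = "${generation},1,1,1" ]; then
  echo "registry ready"
else
  echo "still waiting"
fi
```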
Evaluated for origin merge up to ebc85ed
[testextended][extended:core(builds)]
Evaluated for origin testextended up to ebc85ed
continuous-integration/openshift-jenkins/testextended Running (https://ci.openshift.redhat.com/jenkins/job/test_pr_origin_extended/459/) (Extended Tests: core(builds), core(images))
@@ -660,7 +660,9 @@ function install_registry() {
readonly -f install_registry

function wait_for_registry() {
wait_for_command "oc get endpoints docker-registry --template='{{ len .subsets }}' --config='${ADMIN_KUBECONFIG}' | grep -q '[1-9][0-9]*'" $((5*TIME_MIN))
local generation=$(oc get dc/docker-registry -o jsonpath='{.metadata.generation}')
local onereplicajs='{.status.observedGeneration},{.status.replicas},{.status.updatedReplicas},{.status.availableReplicas}'
Quote expansions
Sorry, that's for the line above this.
Evaluated for origin test up to ebc85ed
os::cmd::expect_success 'oc login -u schema2-user -p pass'
os::cmd::expect_success "oc new-project schema2tagged"
os::cmd::expect_success "oc tag --source=istag schema2/busybox:latest busybox:latest"
busybox_name=$(oc get -o 'jsonpath={.image.metadata.name}' istag busybox:latest)
Quotes
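A minimal illustration of the quoting concern, with a stub standing in for the `oc get` call (the function name and the printed value are hypothetical):

```shell
#!/bin/sh
# Stub standing in for `oc get -o jsonpath=...`; the value is hypothetical.
fake_oc_get() { printf 'sha256:0123 extra'; }

# Unquoted use of the variable lets the shell word-split it.
name=$(fake_oc_get)
set -- $name
echo "unquoted words: $#"

# Quoting both the command substitution and later uses keeps the value intact.
name="$(fake_oc_get)"
echo "quoted: '$name'"
```

Even when the jsonpath output cannot contain spaces today, quoting the expansion makes the script robust if the output format ever changes.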
Since e2e was supposed to be a "user process" script... Is that the right place to put all these new tests?
See comment about a follow-up to split these into a registry extended test
continuous-integration/openshift-jenkins/merge FAILURE (https://ci.openshift.redhat.com/jenkins/job/test_pr_origin/8710/)
Looks like this picked up a spurious
continuous-integration/openshift-jenkins/test FAILURE (https://ci.openshift.redhat.com/jenkins/job/test_pr_origin/8711/)
superseded by #10805
Resolves #10730
Manifest V2 schema 2 introduces a special blob called config, which used to be embedded in the earlier schema. See upstream's spec for details. The config is stored as a regular blob in the registry's storage, so it needs to be treated similarly to a regular layer:
The PR addresses the first 3 items; I'd like to cover the pruning case in a follow-up.