-
-| Channel | Details |
-| ------- | ------- |
-| IRC | #docker-distribution on FreeNode |
-| Issue Tracker | github.com/docker/distribution/issues |
-| Google Groups | https://groups.google.com/a/dockerproject.org/forum/#!forum/distribution |
-| Mailing List | docker@dockerproject.org |
-
-## License
-
-This project is distributed under the [Apache License, Version 2.0](LICENSE).
diff --git a/vendor/github.com/docker/distribution/RELEASE-CHECKLIST.md b/vendor/github.com/docker/distribution/RELEASE-CHECKLIST.md
deleted file mode 100644
index 73eba5a8..00000000
--- a/vendor/github.com/docker/distribution/RELEASE-CHECKLIST.md
+++ /dev/null
@@ -1,44 +0,0 @@
-## Registry Release Checklist
-
-10. Compile release notes detailing features and bugfixes since the last release.
-
- Update the `CHANGELOG.md` file and create a PR to master with the updates.
-Once that PR has been approved by maintainers, the change may be cherry-picked
-to the release branch (new release branches may be forked from this commit).
-
-20. Update the version file: `https://github.com/docker/distribution/blob/master/version/version.go`
-
-30. Update the `MAINTAINERS` (if necessary), `AUTHORS` and `.mailmap` files.
-
-```
-make AUTHORS
-```
-
-40. Create a signed tag.
-
- Distribution uses semantic versioning. Tags are of the format
-`vx.y.z[-rcn]`. You will need PGP installed and a PGP key that has been added
-to your GitHub account. The comment for the tag should include the release
-notes; use previous tags as a guide for consistent formatting. Run
-`git tag -s vx.y.z[-rcn]` to create the tag and `git tag -v vx.y.z[-rcn]` to
-verify the tag, checking the comment and the commit hash.
-
-50. Push the signed tag.
-
-60. Create a new [release](https://github.com/docker/distribution/releases). In the case of a release candidate, tick the `pre-release` checkbox.
-
-70. Update the registry binary in [distribution library image repo](https://github.com/docker/distribution-library-image) by running the update script and opening a pull request.
-
-80. Update the official image. Add the new version in the [official images repo](https://github.com/docker-library/official-images) by appending a new version to the `registry/registry` file with the git hash pointed to by the signed tag. Update the major version to point to the latest version, and the minor version to point to the new patch release, if necessary.
-e.g. to release `2.3.1`:
-
- `2.3.1` (new)
-
- `2.3.0 -> 2.3.0` (this entry can now be removed)
-
- `2 -> 2.3.1`
-
- `2.3 -> 2.3.1`
-
-90. Build a new distribution/registry image on [Docker Hub](https://hub.docker.com/u/distribution/dashboard) by adding a new automated build with the new tag and re-building the images.
-
diff --git a/vendor/github.com/docker/distribution/ROADMAP.md b/vendor/github.com/docker/distribution/ROADMAP.md
deleted file mode 100644
index 701127af..00000000
--- a/vendor/github.com/docker/distribution/ROADMAP.md
+++ /dev/null
@@ -1,267 +0,0 @@
-# Roadmap
-
-The Distribution Project consists of several components, some of which are
-still being defined. This document outlines the high-level goals of the
-project, identifies the current components, and defines the release
-relationship to the Docker Platform.
-
-* [Distribution Goals](#distribution-goals)
-* [Distribution Components](#distribution-components)
-* [Project Planning](#project-planning): release relationship to the Docker Platform.
-
-This road map is a living document, providing an overview of the goals and
-considerations made with respect to the future of the project.
-
-## Distribution Goals
-
-- Replace the existing [docker registry](https://github.com/docker/docker-registry)
-  implementation as the primary implementation.
-- Replace the existing push and pull code in the docker engine with the
-  distribution package.
-- Define a strong data model for distributing docker images.
-- Provide a flexible distribution tool kit for use in the docker platform.
-- Unlock new distribution models.
-
-## Distribution Components
-
-Components of the Distribution Project are managed via GitHub [milestones](https://github.com/docker/distribution/milestones). Upcoming
-features and bugfixes for a component will be added to the relevant milestone. If a feature or
-bugfix is not part of a milestone, it is currently unscheduled for
-implementation.
-
-* [Registry](#registry)
-* [Distribution Package](#distribution-package)
-
-***
-
-### Registry
-
-The new Docker registry is the main portion of the distribution repository.
-Registry 2.0 is the first release of the next-generation registry. This was
-primarily focused on implementing the [new registry
-API](https://github.com/docker/distribution/blob/master/docs/spec/api.md),
-with a focus on security and performance.
-
-Following from the Distribution project goals above, we have a set of goals
-for registry v2 that we would like to follow in the design. New features
-should be compared against these goals.
-
-#### Data Storage and Distribution First
-
-The registry's first goal is to provide a reliable, consistent storage
-location for Docker images. The registry should only provide the minimal
-amount of indexing required to fetch image data and no more.
-
-This means we should be selective about new features and API additions,
-including those that may require expensive, ever-growing indexes. Requests
-should be servable in "constant time".
-
-#### Content Addressability
-
-All data objects used in the registry API should be content addressable.
-Content identifiers should be secure and verifiable. This provides a secure,
-reliable base from which to build more advanced content distribution systems.
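-
-As a minimal sketch of this property, using the `go-digest` package vendored
-in this repository (the snippet is illustrative only, not part of the
-codebase): a blob's identifier is computed from its bytes, so any holder of
-the content can recompute and verify it.
-
-```
-package main
-
-import (
-	"fmt"
-
-	"github.com/opencontainers/go-digest"
-)
-
-func main() {
-	content := []byte("example blob content")
-
-	// A content address is simply the digest of the bytes.
-	dgst := digest.FromBytes(content)
-	fmt.Println(dgst)
-
-	// Verification recomputes the digest from the bytes.
-	verifier := dgst.Verifier()
-	verifier.Write(content)
-	fmt.Println(verifier.Verified()) // true
-}
-```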
-
-#### Content Agnostic
-
-In the past, changes to the image format would require large changes in Docker
-and the Registry. By decoupling the distribution and image format, we can
-allow the formats to progress without having to coordinate between the two.
-This means that we should be focused on decoupling Docker from the registry
-just as much as decoupling the registry from Docker. Such an approach will
-allow us to unlock new distribution models that haven't been possible before.
-
-We can take this further by saying that the new registry should be content
-agnostic. The registry provides a model of names, tags, manifests and content
-addresses and that model can be used to work with content.
-
-#### Simplicity
-
-The new registry should be closer to a microservice component than its
-predecessor. This means it should have a narrower API and a low number of
-service dependencies. It should be easy to deploy.
-
-This means that other solutions should be explored before changing the API or
-adding extra dependencies. If functionality is required, ask whether it can be
-added as an extension or companion service.
-
-#### Extensibility
-
-The registry should provide extension points for adding functionality,
-keeping the core scope narrow while still allowing new capabilities to be
-layered on.
-
-Features like search, indexing, synchronization and registry explorers fall
-into this category. No such feature should be added unless we've found it
-impossible to do through an extension.
-
-#### Active Feature Discussions
-
-The following are feature discussions that are currently active.
-
-If you don't see your favorite, unimplemented feature, feel free to contact us
-via IRC or the mailing list and we can talk about adding it. The goal here is
-to make sure that new features go through a rigorous design process before
-landing in the registry.
-
-##### Proxying to other Registries
-
-A _pull-through caching_ mode exists for the registry, but the docker client
-restricts it to mirroring only the official Docker Hub. This functionality
-can be expanded once image provenance has been specified and implemented in
-the distribution project.
-
-##### Metadata storage
-
-Metadata for the registry is currently stored with the manifest and layer data on
-the storage backend. While this is a big win for simplicity and for reliably
-maintaining state, it comes at the cost of consistency and higher latency. The
-mutable registry metadata operations should be abstracted behind an API which will
-allow ACID-compliant storage systems to handle metadata.
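-
-A rough sketch of the shape such an abstraction might take (the interface and
-method names below are hypothetical, not part of the codebase):
-
-```
-package metadata
-
-import (
-	"context"
-
-	"github.com/opencontainers/go-digest"
-)
-
-// TagStore is a hypothetical API boundary for mutable registry
-// metadata. An ACID-compliant database could implement it while blob
-// and manifest content stays on the storage backend.
-type TagStore interface {
-	// Get returns the manifest digest the tag currently points to.
-	Get(ctx context.Context, repo, tag string) (digest.Digest, error)
-
-	// Set atomically repoints the tag at a new manifest digest.
-	Set(ctx context.Context, repo, tag string, dgst digest.Digest) error
-
-	// Delete removes the tag without touching the referenced content.
-	Delete(ctx context.Context, repo, tag string) error
-}
-```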
-
-##### Peer to Peer transfer
-
-Discussion has started here: https://docs.google.com/document/d/1rYDpSpJiQWmCQy8Cuiaa3NH-Co33oK_SC9HeXYo87QA/edit
-
-##### Indexing, Search and Discovery
-
-The original registry provided some implementation of search for use with
-private registries. Support has been elided from V2 since we'd like to
-decouple search functionality from the registry. This makes the registry
-simpler to deploy, especially in use cases where search is not needed, and
-lets us decouple the image format from the registry.
-
-There are explorations into using the catalog API and notification system to
-build external indexes. The current line of thought is that we will define a
-common search API to index and query docker images. Such a system could be run
-as a companion to a registry or set of registries to power discovery.
-
-The main issue with search and discovery is that there are so many ways to
-accomplish it. There are two aspects to this project. The first is deciding on
-how it will be done, including an API definition that can work with changing
-data formats. The second is the process of integrating with `docker search`.
-We expect that someone will attempt to address the problem with the existing
-tools and either propose the result as a standard search API or use it to
-inform a standardization process. Once this has been explored, we will
-integrate with the docker client.
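-
-To make the discussion concrete, one hypothetical shape for such a common
-search API (all names below are illustrative only):
-
-```
-package search
-
-import "context"
-
-// Result is a single hit returned by a hypothetical search index.
-type Result struct {
-	Repository  string
-	Description string
-}
-
-// Index is a companion service that could be fed by the catalog API
-// and the notification system; Query answers `docker search`-style
-// lookups without involving the registry itself.
-type Index interface {
-	Query(ctx context.Context, term string, limit int) ([]Result, error)
-}
-```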
-
-Please see the following for more detail:
-
-- https://github.com/docker/distribution/issues/206
-
-##### Deletes
-
-> __NOTE:__ Deletes are a much-requested feature. Before requesting this
-feature or participating in discussion, we ask that you read this section in
-full and understand the problems behind deletes.
-
-While, at first glance, implementing deletes seems simple, there are a number
-of mitigating factors that make many solutions not ideal, or even pathological,
-in the context of a registry. The following paragraphs discuss the background and
-approaches that could be applied to arrive at a solution.
-
-The goal of deletes in any system is to remove unused or unneeded data. Only
-data requested for deletion should be removed and no other data. Removing
-unintended data is worse than _not_ removing data that was requested for
-removal, but ideally both are supported. Generally, according to this rule, we
-err on holding data longer than needed, ensuring that it is only removed when
-we can be certain that it can be removed. With the current behavior, we opt to
-hold onto the data forever, ensuring that data cannot be incorrectly removed.
-
-To understand the problems with implementing deletes, one must understand the
-data model. All registry data is stored in a filesystem layout, implemented on
-a "storage driver", effectively a _virtual file system_ (VFS). The storage
-system must assume that this VFS layer will be eventually consistent and has
-poor read-after-write consistency, since this is the lowest common denominator
-among the storage drivers. This is mitigated by writing values in
-reverse-dependent order, but makes wider transactional operations unsafe.
-
-Layered on the VFS model is a content-addressable _directed, acyclic graph_
-(DAG) made up of blobs. Manifests reference layers. Tags reference manifests.
-Since the same data can be referenced by multiple manifests, we only store
-data once, even if it is in different repositories. Thus, we have a set of
-blobs, referenced by tags and manifests. If we want to delete a blob we need
-to be certain that it is no longer referenced by another manifest or tag. When
-we delete a manifest, we also can try to delete the referenced blobs. Deciding
-whether or not a blob has an active reference is the crux of the problem.
-
-Conceptually, deleting a manifest and its resources is quite simple. Just find
-all the manifests, enumerate the referenced blobs and delete the blobs not in
-that set. An astute observer will recognize this as a garbage collection
-problem. As with garbage collection in programming languages, this is very
-simple when one always has a consistent view. When one adds parallelism and an
-inconsistent view of data, it becomes very challenging.
-
-A simple example can demonstrate this. Let's say we are deleting a manifest
-_A_ in one process. We scan the manifest and decide that all the blobs are
-ready for deletion. Concurrently, we have another process accepting a new
-manifest _B_ referencing one or more blobs from the manifest _A_. Manifest _B_
-is accepted and all the blobs are considered present, so the operation
-proceeds. The original process then deletes the referenced blobs, assuming
-they were unreferenced. The manifest _B_, which we thought had all of its data
-present, can no longer be served by the registry, since the dependent data has
-been deleted.
-
-Deleting data from the registry safely requires some way to coordinate this
-operation. The following approaches are being considered:
-
-- _Reference Counting_ - Maintain a count of references to each blob. This is
-  challenging for a number of reasons: 1. maintaining a consistent consensus
-  of reference counts across a set of registries and 2. building the initial
-  list of reference counts for an existing registry. These challenges can be
-  met with a consensus protocol like Paxos or Raft in the first case and a
-  necessary but simple scan in the second. (A minimal sketch of this approach
-  appears after this list.)
-- _Lock the World GC_ - Halt all writes to the data store. Walk the data store
- and find all blob references. Delete all unreferenced blobs. This approach
- is very simple but requires disabling writes for a period of time while the
- service reads all data. This is slow and expensive but very accurate and
- effective.
-- _Generational GC_ - Do something similar to above but instead of blocking
- writes, writes are sent to another storage backend while reads are broadcast
- to the new and old backends. GC is then performed on the read-only portion.
- Because writes land in the new backend, the data in the read-only section
- can be safely deleted. The main drawbacks of this approach are complexity
- and coordination.
-- _Centralized Oracle_ - Using a centralized, transactional database, we can
-  know exactly which data is referenced at any given time. This avoids the
-  coordination problem by managing this data in a single location. We trade
- off metadata scalability for simplicity and performance. This is a very good
- option for most registry deployments. This would create a bottleneck for
- registry metadata. However, metadata is generally not the main bottleneck
- when serving images.
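-
-As referenced above, a minimal, single-process sketch of the reference
-counting approach (illustrative only; a real deployment would need to keep
-these counts consistent across registry instances):
-
-```
-package gc
-
-import (
-	"sync"
-
-	"github.com/opencontainers/go-digest"
-)
-
-// refCounter tracks how many manifests reference each blob.
-type refCounter struct {
-	mu   sync.Mutex
-	refs map[digest.Digest]int
-}
-
-// newRefCounter returns an empty counter.
-func newRefCounter() *refCounter {
-	return &refCounter{refs: make(map[digest.Digest]int)}
-}
-
-// Inc records a new reference to the blob (e.g. a manifest link).
-func (rc *refCounter) Inc(dgst digest.Digest) {
-	rc.mu.Lock()
-	defer rc.mu.Unlock()
-	rc.refs[dgst]++
-}
-
-// Dec drops a reference and reports whether the blob is now
-// unreferenced and could be deleted from the backing store.
-func (rc *refCounter) Dec(dgst digest.Digest) bool {
-	rc.mu.Lock()
-	defer rc.mu.Unlock()
-	rc.refs[dgst]--
-	return rc.refs[dgst] <= 0
-}
-```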
-
-Please let us know if other solutions exist that we have yet to enumerate.
-Note that for any approach, implementation is a massive consideration. For
-example, a mark-sweep based solution may seem simple, but the amount of work in
-coordination may offset the extra work it might take to build a _Centralized
-Oracle_. We'll accept proposals for any solution but please coordinate with us
-before dropping code.
-
-At this time, we have traded off simplicity and ease of deployment for disk
-space. Simplicity and ease of deployment tend to reduce developer involvement,
-which is currently the most expensive resource in software engineering. Taking
-on any solution for deletes will greatly affect these factors, trading off
-very cheap disk space for a complex deployment and operational story.
-
-Please see the following issues for more detail:
-
-- https://github.com/docker/distribution/issues/422
-- https://github.com/docker/distribution/issues/461
-- https://github.com/docker/distribution/issues/462
-
-### Distribution Package
-
-At its core, the Distribution Project is a set of Go packages that make up
-Distribution Components. At this time, most of these packages make up the
-Registry implementation.
-
-The package itself is considered unstable. If you're using it, please take care to vendor the version you depend on.
-
-For feature additions, please see the Registry section. In the future, we may break out a
-separate Roadmap for distribution-specific features that apply to more than
-just the registry.
-
-***
-
-### Project Planning
-
-An [Open-Source Planning Process](https://github.com/docker/distribution/wiki/Open-Source-Planning-Process) is used to define the Roadmap. [Project Pages](https://github.com/docker/distribution/wiki) define the goals for each Milestone and identify current progress.
-
diff --git a/vendor/github.com/docker/distribution/blobs.go b/vendor/github.com/docker/distribution/blobs.go
deleted file mode 100644
index 145b0785..00000000
--- a/vendor/github.com/docker/distribution/blobs.go
+++ /dev/null
@@ -1,257 +0,0 @@
-package distribution
-
-import (
- "context"
- "errors"
- "fmt"
- "io"
- "net/http"
- "time"
-
- "github.com/docker/distribution/reference"
- "github.com/opencontainers/go-digest"
-)
-
-var (
-	// ErrBlobExists is returned when the blob already exists.
-	ErrBlobExists = errors.New("blob exists")
-
-	// ErrBlobDigestUnsupported is returned when the blob digest is an
-	// unsupported version.
-	ErrBlobDigestUnsupported = errors.New("unsupported blob digest")
-
-	// ErrBlobUnknown is returned when the blob is not found.
-	ErrBlobUnknown = errors.New("unknown blob")
-
-	// ErrBlobUploadUnknown is returned when the upload is not found.
-	ErrBlobUploadUnknown = errors.New("blob upload unknown")
-
-	// ErrBlobInvalidLength is returned when the blob's length on commit
-	// does not match the descriptor or is an invalid value.
-	ErrBlobInvalidLength = errors.New("blob invalid length")
-)
-
-// ErrBlobInvalidDigest is returned when the digest check fails.
-type ErrBlobInvalidDigest struct {
- Digest digest.Digest
- Reason error
-}
-
-func (err ErrBlobInvalidDigest) Error() string {
- return fmt.Sprintf("invalid digest for referenced layer: %v, %v",
- err.Digest, err.Reason)
-}
-
-// ErrBlobMounted is returned when a blob is mounted from another repository
-// instead of initiating an upload session.
-type ErrBlobMounted struct {
- From reference.Canonical
- Descriptor Descriptor
-}
-
-func (err ErrBlobMounted) Error() string {
- return fmt.Sprintf("blob mounted from: %v to: %v",
- err.From, err.Descriptor)
-}
-
-// Descriptor describes targeted content. Used in conjunction with a blob
-// store, a descriptor can be used to fetch, store and target any kind of
-// blob. The struct also describes the wire protocol format. Fields should
-// only be added but never changed.
-type Descriptor struct {
-	// MediaType describes the type of the content. All text-based formats
-	// are encoded as UTF-8.
- MediaType string `json:"mediaType,omitempty"`
-
- // Size in bytes of content.
- Size int64 `json:"size,omitempty"`
-
- // Digest uniquely identifies the content. A byte stream can be verified
-	// against this digest.
- Digest digest.Digest `json:"digest,omitempty"`
-
- // URLs contains the source URLs of this content.
- URLs []string `json:"urls,omitempty"`
-
- // NOTE: Before adding a field here, please ensure that all
- // other options have been exhausted. Much of the type relationships
- // depend on the simplicity of this type.
-}
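-
-// Example (illustrative only, not part of this package's API): a
-// descriptor for a blob held in memory can be built directly:
-//
-//	p := []byte("some blob data")
-//	desc := Descriptor{
-//		MediaType: "application/octet-stream",
-//		Size:      int64(len(p)),
-//		Digest:    digest.FromBytes(p),
-//	}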
-
-// Descriptor returns the descriptor, to make it satisfy the Describable
-// interface. Note that implementations of Describable are generally objects
-// which can be described, not simply descriptors; this exception is in place
-// to make it more convenient to pass actual descriptors to functions that
-// expect Describable objects.
-func (d Descriptor) Descriptor() Descriptor {
- return d
-}
-
-// BlobStatter makes blob descriptors available by digest. The service may
-// provide a descriptor of a different digest if the provided digest is not
-// canonical.
-type BlobStatter interface {
- // Stat provides metadata about a blob identified by the digest. If the
- // blob is unknown to the describer, ErrBlobUnknown will be returned.
- Stat(ctx context.Context, dgst digest.Digest) (Descriptor, error)
-}
-
-// BlobDeleter enables deleting blobs from storage.
-type BlobDeleter interface {
- Delete(ctx context.Context, dgst digest.Digest) error
-}
-
-// BlobEnumerator enables iterating over blobs from storage
-type BlobEnumerator interface {
- Enumerate(ctx context.Context, ingester func(dgst digest.Digest) error) error
-}
-
-// BlobDescriptorService manages metadata about a blob by digest. Most
-// implementations will not expose such an interface explicitly. Such mappings
-// should be maintained by interacting with the BlobIngester. Hence, this is
-// left off of BlobService and BlobStore.
-type BlobDescriptorService interface {
- BlobStatter
-
- // SetDescriptor assigns the descriptor to the digest. The provided digest and
- // the digest in the descriptor must map to identical content but they may
- // differ on their algorithm. The descriptor must have the canonical
- // digest of the content and the digest algorithm must match the
-	// annotator's canonical algorithm.
- //
- // Such a facility can be used to map blobs between digest domains, with
- // the restriction that the algorithm of the descriptor must match the
-	// canonical algorithm (i.e. sha256) of the annotator.
- SetDescriptor(ctx context.Context, dgst digest.Digest, desc Descriptor) error
-
- // Clear enables descriptors to be unlinked
- Clear(ctx context.Context, dgst digest.Digest) error
-}
-
-// BlobDescriptorServiceFactory creates middleware for BlobDescriptorService.
-type BlobDescriptorServiceFactory interface {
- BlobAccessController(svc BlobDescriptorService) BlobDescriptorService
-}
-
-// ReadSeekCloser is the primary reader type for blob data, combining
-// io.ReadSeeker with io.Closer.
-type ReadSeekCloser interface {
- io.ReadSeeker
- io.Closer
-}
-
-// BlobProvider describes operations for getting blob data.
-type BlobProvider interface {
- // Get returns the entire blob identified by digest along with the descriptor.
- Get(ctx context.Context, dgst digest.Digest) ([]byte, error)
-
- // Open provides a ReadSeekCloser to the blob identified by the provided
- // descriptor. If the blob is not known to the service, an error will be
- // returned.
- Open(ctx context.Context, dgst digest.Digest) (ReadSeekCloser, error)
-}
-
-// BlobServer can serve blobs via http.
-type BlobServer interface {
- // ServeBlob attempts to serve the blob, identified by dgst, via http. The
- // service may decide to redirect the client elsewhere or serve the data
- // directly.
- //
- // This handler only issues successful responses, such as 2xx or 3xx,
- // meaning it serves data or issues a redirect. If the blob is not
- // available, an error will be returned and the caller may still issue a
- // response.
- //
- // The implementation may serve the same blob from a different digest
- // domain. The appropriate headers will be set for the blob, unless they
- // have already been set by the caller.
- ServeBlob(ctx context.Context, w http.ResponseWriter, r *http.Request, dgst digest.Digest) error
-}
-
-// BlobIngester ingests blob data.
-type BlobIngester interface {
- // Put inserts the content p into the blob service, returning a descriptor
- // or an error.
- Put(ctx context.Context, mediaType string, p []byte) (Descriptor, error)
-
- // Create allocates a new blob writer to add a blob to this service. The
- // returned handle can be written to and later resumed using an opaque
- // identifier. With this approach, one can Close and Resume a BlobWriter
- // multiple times until the BlobWriter is committed or cancelled.
- Create(ctx context.Context, options ...BlobCreateOption) (BlobWriter, error)
-
- // Resume attempts to resume a write to a blob, identified by an id.
- Resume(ctx context.Context, id string) (BlobWriter, error)
-}
-
-// BlobCreateOption is a general extensible function argument for blob creation
-// methods. A BlobIngester may choose to honor any or none of the given
-// BlobCreateOptions, which can be specific to the implementation of the
-// BlobIngester receiving them.
-// TODO (brianbland): unify this with ManifestServiceOption in the future
-type BlobCreateOption interface {
- Apply(interface{}) error
-}
-
-// CreateOptions is a collection of blob creation modifiers relevant to general
-// blob storage intended to be configured by the BlobCreateOption.Apply method.
-type CreateOptions struct {
- Mount struct {
- ShouldMount bool
- From reference.Canonical
- // Stat allows to pass precalculated descriptor to link and return.
- // Blob access check will be skipped if set.
- Stat *Descriptor
- }
-}
-
-// BlobWriter provides a handle for inserting data into a blob store.
-// Instances should be obtained from BlobWriteService.Writer and
-// BlobWriteService.Resume. If supported by the store, a writer can be
-// recovered with the id.
-type BlobWriter interface {
- io.WriteCloser
- io.ReaderFrom
-
- // Size returns the number of bytes written to this blob.
- Size() int64
-
- // ID returns the identifier for this writer. The ID can be used with the
- // Blob service to later resume the write.
- ID() string
-
- // StartedAt returns the time this blob write was started.
- StartedAt() time.Time
-
- // Commit completes the blob writer process. The content is verified
- // against the provided provisional descriptor, which may result in an
- // error. Depending on the implementation, written data may be validated
- // against the provisional descriptor fields. If MediaType is not present,
-	// the implementation may reject the commit or assign
-	// "application/octet-stream" to the blob. The returned descriptor may
-	// have a different digest depending on the blob store, referred to as
-	// the canonical descriptor.
- Commit(ctx context.Context, provisional Descriptor) (canonical Descriptor, err error)
-
- // Cancel ends the blob write without storing any data and frees any
- // associated resources. Any data written thus far will be lost. Cancel
-	// implementations should tolerate multiple calls, even after a commit,
-	// treating them as a no-op. This allows use of Cancel in a defer
-	// statement, increasing the assurance that it is correctly called.
- // increasing the assurance that it is correctly called.
- Cancel(ctx context.Context) error
-}
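-
-// Example (illustrative only, not part of this package's API): a
-// typical upload creates a writer, streams the data, and commits
-// against a provisional descriptor, deferring Cancel for cleanup:
-//
-//	bw, err := ingester.Create(ctx)
-//	if err != nil {
-//		return err
-//	}
-//	defer bw.Cancel(ctx)
-//	if _, err := io.Copy(bw, rd); err != nil {
-//		return err
-//	}
-//	desc, err := bw.Commit(ctx, provisional)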
-
-// BlobService combines the operations to access, read and write blobs. This
-// can be used to describe remote blob services.
-type BlobService interface {
- BlobStatter
- BlobProvider
- BlobIngester
-}
-
-// BlobStore represents the entire suite of blob-related operations. Such an
-// implementation can access, read, write, delete and serve blobs.
-type BlobStore interface {
- BlobService
- BlobServer
- BlobDeleter
-}
diff --git a/vendor/github.com/docker/distribution/circle.yml b/vendor/github.com/docker/distribution/circle.yml
deleted file mode 100644
index ddc76c86..00000000
--- a/vendor/github.com/docker/distribution/circle.yml
+++ /dev/null
@@ -1,94 +0,0 @@
-# Pony-up!
-machine:
- pre:
- # Install gvm
- - bash < <(curl -s -S -L https://raw.githubusercontent.com/moovweb/gvm/1.0.22/binscripts/gvm-installer)
- # Install codecov for coverage
- - pip install --user codecov
-
- post:
- # go
- - gvm install go1.8 --prefer-binary --name=stable
-
- environment:
- # Convenient shortcuts to "common" locations
- CHECKOUT: /home/ubuntu/$CIRCLE_PROJECT_REPONAME
- BASE_DIR: src/github.com/$CIRCLE_PROJECT_USERNAME/$CIRCLE_PROJECT_REPONAME
- # Trick circle brainflat "no absolute path" behavior
- BASE_STABLE: ../../../$HOME/.gvm/pkgsets/stable/global/$BASE_DIR
- DOCKER_BUILDTAGS: "include_oss include_gcs"
-    # Work around Circle parsing bugs and/or YAML wonkiness
- CIRCLE_PAIN: "mode: set"
-
- hosts:
- # Not used yet
- fancy: 127.0.0.1
-
-dependencies:
- pre:
- # Copy the code to the gopath of all go versions
- - >
- gvm use stable &&
- mkdir -p "$(dirname $BASE_STABLE)" &&
- cp -R "$CHECKOUT" "$BASE_STABLE"
-
- override:
- # Install dependencies for every copied clone/go version
- - gvm use stable && go get github.com/lk4d4/vndr:
- pwd: $BASE_STABLE
-
- post:
- # For the stable go version, additionally install linting tools
- - >
- gvm use stable &&
- go get github.com/axw/gocov/gocov github.com/golang/lint/golint
-
-test:
- pre:
- # Output the go versions we are going to test
- # - gvm use old && go version
- - gvm use stable && go version
-
- # Ensure validation of dependencies
- - git fetch origin:
- pwd: $BASE_STABLE
- - gvm use stable && if test -n "`git diff --stat=1000 origin/master | grep -E \"^[[:space:]]*vendor\"`"; then make dep-validate; fi:
- pwd: $BASE_STABLE
-
- # First thing: build everything. This will catch compile errors, and it's
- # also necessary for go vet to work properly (see #807).
- - gvm use stable && go install $(go list ./... | grep -v "/vendor/"):
- pwd: $BASE_STABLE
-
- # FMT
- - gvm use stable && make fmt:
- pwd: $BASE_STABLE
-
- # VET
- - gvm use stable && make vet:
- pwd: $BASE_STABLE
-
- # LINT
- - gvm use stable && make lint:
- pwd: $BASE_STABLE
-
- override:
- # Test stable, and report
- - gvm use stable; export ROOT_PACKAGE=$(go list .); go list -tags "$DOCKER_BUILDTAGS" ./... | grep -v "/vendor/" | xargs -L 1 -I{} bash -c 'export PACKAGE={}; go test -tags "$DOCKER_BUILDTAGS" -test.short -coverprofile=$GOPATH/src/$PACKAGE/coverage.out -coverpkg=$(./coverpkg.sh $PACKAGE $ROOT_PACKAGE) $PACKAGE':
- timeout: 1000
- pwd: $BASE_STABLE
-
- # Test stable with race
- - gvm use stable; export ROOT_PACKAGE=$(go list .); go list -tags "$DOCKER_BUILDTAGS" ./... | grep -v "/vendor/" | grep -v "registry/handlers" | grep -v "registry/storage/driver" | xargs -L 1 -I{} bash -c 'export PACKAGE={}; go test -race -tags "$DOCKER_BUILDTAGS" -test.short $PACKAGE':
- timeout: 1000
- pwd: $BASE_STABLE
- post:
- # Report to codecov
- - bash <(curl -s https://codecov.io/bash):
- pwd: $BASE_STABLE
-
- ## Notes
- # Do we want these as well?
- # - go get code.google.com/p/go.tools/cmd/goimports
- # - test -z "$(goimports -l -w ./... | tee /dev/stderr)"
- # http://labix.org/gocheck
diff --git a/vendor/github.com/docker/distribution/coverpkg.sh b/vendor/github.com/docker/distribution/coverpkg.sh
deleted file mode 100755
index 25d419ae..00000000
--- a/vendor/github.com/docker/distribution/coverpkg.sh
+++ /dev/null
@@ -1,7 +0,0 @@
-#!/usr/bin/env bash
-# Given a subpackage and the containing package, figures out which packages
-# need to be passed to `go test -coverpkg`: this includes all of the
-# subpackage's dependencies within the containing package, as well as the
-# subpackage itself.
-DEPENDENCIES="$(go list -f $'{{range $f := .Deps}}{{$f}}\n{{end}}' ${1} | grep ${2} | grep -v github.com/docker/distribution/vendor)"
-echo "${1} ${DEPENDENCIES}" | xargs echo -n | tr ' ' ','
diff --git a/vendor/github.com/docker/distribution/digestset/set.go b/vendor/github.com/docker/distribution/digestset/set.go
deleted file mode 100644
index 71327dca..00000000
--- a/vendor/github.com/docker/distribution/digestset/set.go
+++ /dev/null
@@ -1,247 +0,0 @@
-package digestset
-
-import (
- "errors"
- "sort"
- "strings"
- "sync"
-
- digest "github.com/opencontainers/go-digest"
-)
-
-var (
- // ErrDigestNotFound is used when a matching digest
- // could not be found in a set.
- ErrDigestNotFound = errors.New("digest not found")
-
- // ErrDigestAmbiguous is used when multiple digests
- // are found in a set. None of the matching digests
- // should be considered valid matches.
- ErrDigestAmbiguous = errors.New("ambiguous digest string")
-)
-
-// Set is used to hold a unique set of digests which
-// may be easily referenced by a string representation
-// of the digest as well as a short representation.
-// The uniqueness of the short representation is based on other
-// digests in the set. If digests are omitted from this set,
-// collisions in a larger set may not be detected, therefore it
-// is important to always do short representation lookups on
-// the complete set of digests. To mitigate collisions, an
-// appropriately long short code should be used.
-type Set struct {
- mutex sync.RWMutex
- entries digestEntries
-}
-
-// NewSet creates an empty set of digests
-// which may have digests added.
-func NewSet() *Set {
- return &Set{
- entries: digestEntries{},
- }
-}
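-
-// Example (illustrative): digests can be added and later resolved by an
-// unambiguous short prefix:
-//
-//	s := NewSet()
-//	_ = s.Add(dgst) // dgst is a previously parsed digest.Digest
-//	d, err := s.Lookup("ab1234") // short form; ErrDigestAmbiguous if not unique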
-
-// checkShortMatch checks whether two digests match as either whole
-// values or short values. This function does not test equality,
-// rather whether the second value could match against the first
-// value.
-func checkShortMatch(alg digest.Algorithm, hex, shortAlg, shortHex string) bool {
- if len(hex) == len(shortHex) {
- if hex != shortHex {
- return false
- }
- if len(shortAlg) > 0 && string(alg) != shortAlg {
- return false
- }
- } else if !strings.HasPrefix(hex, shortHex) {
- return false
- } else if len(shortAlg) > 0 && string(alg) != shortAlg {
- return false
- }
- return true
-}
-
-// Lookup looks for a digest matching the given string representation.
-// If no digests could be found ErrDigestNotFound will be returned
-// with an empty digest value. If multiple matches are found
-// ErrDigestAmbiguous will be returned with an empty digest value.
-func (dst *Set) Lookup(d string) (digest.Digest, error) {
- dst.mutex.RLock()
- defer dst.mutex.RUnlock()
- if len(dst.entries) == 0 {
- return "", ErrDigestNotFound
- }
- var (
- searchFunc func(int) bool
- alg digest.Algorithm
- hex string
- )
- dgst, err := digest.Parse(d)
- if err == digest.ErrDigestInvalidFormat {
- hex = d
- searchFunc = func(i int) bool {
- return dst.entries[i].val >= d
- }
- } else {
- hex = dgst.Hex()
- alg = dgst.Algorithm()
- searchFunc = func(i int) bool {
- if dst.entries[i].val == hex {
- return dst.entries[i].alg >= alg
- }
- return dst.entries[i].val >= hex
- }
- }
- idx := sort.Search(len(dst.entries), searchFunc)
- if idx == len(dst.entries) || !checkShortMatch(dst.entries[idx].alg, dst.entries[idx].val, string(alg), hex) {
- return "", ErrDigestNotFound
- }
- if dst.entries[idx].alg == alg && dst.entries[idx].val == hex {
- return dst.entries[idx].digest, nil
- }
- if idx+1 < len(dst.entries) && checkShortMatch(dst.entries[idx+1].alg, dst.entries[idx+1].val, string(alg), hex) {
- return "", ErrDigestAmbiguous
- }
-
- return dst.entries[idx].digest, nil
-}
-
-// Add adds the given digest to the set. An error will be returned
-// if the given digest is invalid. If the digest already exists in the
-// set, this operation will be a no-op.
-func (dst *Set) Add(d digest.Digest) error {
- if err := d.Validate(); err != nil {
- return err
- }
- dst.mutex.Lock()
- defer dst.mutex.Unlock()
- entry := &digestEntry{alg: d.Algorithm(), val: d.Hex(), digest: d}
- searchFunc := func(i int) bool {
- if dst.entries[i].val == entry.val {
- return dst.entries[i].alg >= entry.alg
- }
- return dst.entries[i].val >= entry.val
- }
- idx := sort.Search(len(dst.entries), searchFunc)
- if idx == len(dst.entries) {
- dst.entries = append(dst.entries, entry)
- return nil
- } else if dst.entries[idx].digest == d {
- return nil
- }
-
- entries := append(dst.entries, nil)
- copy(entries[idx+1:], entries[idx:len(entries)-1])
- entries[idx] = entry
- dst.entries = entries
- return nil
-}
-
-// Remove removes the given digest from the set. An error will be
-// returned if the given digest is invalid. If the digest does
-// not exist in the set, this operation will be a no-op.
-func (dst *Set) Remove(d digest.Digest) error {
- if err := d.Validate(); err != nil {
- return err
- }
- dst.mutex.Lock()
- defer dst.mutex.Unlock()
- entry := &digestEntry{alg: d.Algorithm(), val: d.Hex(), digest: d}
- searchFunc := func(i int) bool {
- if dst.entries[i].val == entry.val {
- return dst.entries[i].alg >= entry.alg
- }
- return dst.entries[i].val >= entry.val
- }
- idx := sort.Search(len(dst.entries), searchFunc)
- // Not found if idx is after or value at idx is not digest
- if idx == len(dst.entries) || dst.entries[idx].digest != d {
- return nil
- }
-
- entries := dst.entries
- copy(entries[idx:], entries[idx+1:])
- entries = entries[:len(entries)-1]
- dst.entries = entries
-
- return nil
-}
-
-// All returns all the digests in the set
-func (dst *Set) All() []digest.Digest {
- dst.mutex.RLock()
- defer dst.mutex.RUnlock()
- retValues := make([]digest.Digest, len(dst.entries))
- for i := range dst.entries {
- retValues[i] = dst.entries[i].digest
- }
-
- return retValues
-}
-
-// ShortCodeTable returns a map of Digest to unique short codes. The
-// length parameter is the minimum length; the maximum length may be the
-// entire value of the digest if uniqueness cannot be achieved without
-// the full value. This function will attempt to make short codes as
-// short as possible while remaining unique.
-func ShortCodeTable(dst *Set, length int) map[digest.Digest]string {
- dst.mutex.RLock()
- defer dst.mutex.RUnlock()
- m := make(map[digest.Digest]string, len(dst.entries))
- l := length
- resetIdx := 0
- for i := 0; i < len(dst.entries); i++ {
- var short string
- extended := true
- for extended {
- extended = false
- if len(dst.entries[i].val) <= l {
- short = dst.entries[i].digest.String()
- } else {
- short = dst.entries[i].val[:l]
- for j := i + 1; j < len(dst.entries); j++ {
- if checkShortMatch(dst.entries[j].alg, dst.entries[j].val, "", short) {
- if j > resetIdx {
- resetIdx = j
- }
- extended = true
- } else {
- break
- }
- }
- if extended {
- l++
- }
- }
- }
- m[dst.entries[i].digest] = short
- if i >= resetIdx {
- l = length
- }
- }
- return m
-}
-
-type digestEntry struct {
- alg digest.Algorithm
- val string
- digest digest.Digest
-}
-
-type digestEntries []*digestEntry
-
-func (d digestEntries) Len() int {
- return len(d)
-}
-
-func (d digestEntries) Less(i, j int) bool {
- if d[i].val != d[j].val {
- return d[i].val < d[j].val
- }
- return d[i].alg < d[j].alg
-}
-
-func (d digestEntries) Swap(i, j int) {
- d[i], d[j] = d[j], d[i]
-}
diff --git a/vendor/github.com/docker/distribution/doc.go b/vendor/github.com/docker/distribution/doc.go
deleted file mode 100644
index bdd8cb70..00000000
--- a/vendor/github.com/docker/distribution/doc.go
+++ /dev/null
@@ -1,7 +0,0 @@
-// Package distribution will define the interfaces for the components of
-// docker distribution. The goal is to allow users to reliably package, ship
-// and store content related to docker images.
-//
-// This is currently a work in progress. More details are available in the
-// README.md.
-package distribution
diff --git a/vendor/github.com/docker/distribution/errors.go b/vendor/github.com/docker/distribution/errors.go
deleted file mode 100644
index 020d3325..00000000
--- a/vendor/github.com/docker/distribution/errors.go
+++ /dev/null
@@ -1,115 +0,0 @@
-package distribution
-
-import (
- "errors"
- "fmt"
- "strings"
-
- "github.com/opencontainers/go-digest"
-)
-
-// ErrAccessDenied is returned when an access to a requested resource is
-// denied.
-var ErrAccessDenied = errors.New("access denied")
-
-// ErrManifestNotModified is returned when a conditional manifest GetByTag
-// returns nil due to the client indicating it has the latest version.
-var ErrManifestNotModified = errors.New("manifest not modified")
-
-// ErrUnsupported is returned when an unimplemented or unsupported action is
-// performed.
-var ErrUnsupported = errors.New("operation unsupported")
-
-// ErrTagUnknown is returned if the given tag is not known by the tag service
-type ErrTagUnknown struct {
- Tag string
-}
-
-func (err ErrTagUnknown) Error() string {
- return fmt.Sprintf("unknown tag=%s", err.Tag)
-}
-
-// ErrRepositoryUnknown is returned if the named repository is not known by
-// the registry.
-type ErrRepositoryUnknown struct {
- Name string
-}
-
-func (err ErrRepositoryUnknown) Error() string {
- return fmt.Sprintf("unknown repository name=%s", err.Name)
-}
-
-// ErrRepositoryNameInvalid should be used to denote an invalid repository
-// name. Reason may be set, indicating the cause of invalidity.
-type ErrRepositoryNameInvalid struct {
- Name string
- Reason error
-}
-
-func (err ErrRepositoryNameInvalid) Error() string {
- return fmt.Sprintf("repository name %q invalid: %v", err.Name, err.Reason)
-}
-
-// ErrManifestUnknown is returned if the manifest is not known by the
-// registry.
-type ErrManifestUnknown struct {
- Name string
- Tag string
-}
-
-func (err ErrManifestUnknown) Error() string {
- return fmt.Sprintf("unknown manifest name=%s tag=%s", err.Name, err.Tag)
-}
-
-// ErrManifestUnknownRevision is returned when a manifest cannot be found by
-// revision within a repository.
-type ErrManifestUnknownRevision struct {
- Name string
- Revision digest.Digest
-}
-
-func (err ErrManifestUnknownRevision) Error() string {
- return fmt.Sprintf("unknown manifest name=%s revision=%s", err.Name, err.Revision)
-}
-
-// ErrManifestUnverified is returned when the registry is unable to verify
-// the manifest.
-type ErrManifestUnverified struct{}
-
-func (ErrManifestUnverified) Error() string {
- return "unverified manifest"
-}
-
-// ErrManifestVerification provides a type to collect errors encountered
-// during manifest verification. Currently, it accepts errors of all types,
-// but it may be narrowed to those involving manifest verification.
-type ErrManifestVerification []error
-
-func (errs ErrManifestVerification) Error() string {
- var parts []string
- for _, err := range errs {
- parts = append(parts, err.Error())
- }
-
- return fmt.Sprintf("errors verifying manifest: %v", strings.Join(parts, ","))
-}
-
-// ErrManifestBlobUnknown is returned when a referenced blob cannot be found.
-type ErrManifestBlobUnknown struct {
- Digest digest.Digest
-}
-
-func (err ErrManifestBlobUnknown) Error() string {
- return fmt.Sprintf("unknown blob %v on manifest", err.Digest)
-}
-
-// ErrManifestNameInvalid should be used to denote an invalid manifest
-// name. Reason may be set, indicating the cause of invalidity.
-type ErrManifestNameInvalid struct {
- Name string
- Reason error
-}
-
-func (err ErrManifestNameInvalid) Error() string {
- return fmt.Sprintf("manifest name %q invalid: %v", err.Name, err.Reason)
-}
diff --git a/vendor/github.com/docker/distribution/manifest/doc.go b/vendor/github.com/docker/distribution/manifest/doc.go
deleted file mode 100644
index 88367b0a..00000000
--- a/vendor/github.com/docker/distribution/manifest/doc.go
+++ /dev/null
@@ -1 +0,0 @@
-package manifest
diff --git a/vendor/github.com/docker/distribution/manifest/schema1/config_builder.go b/vendor/github.com/docker/distribution/manifest/schema1/config_builder.go
deleted file mode 100644
index a96dc3d2..00000000
--- a/vendor/github.com/docker/distribution/manifest/schema1/config_builder.go
+++ /dev/null
@@ -1,287 +0,0 @@
-package schema1
-
-import (
- "context"
- "crypto/sha512"
- "encoding/json"
- "errors"
- "fmt"
- "time"
-
- "github.com/docker/distribution"
- "github.com/docker/distribution/manifest"
- "github.com/docker/distribution/reference"
- "github.com/docker/libtrust"
- "github.com/opencontainers/go-digest"
-)
-
-type diffID digest.Digest
-
-// gzippedEmptyTar is a gzip-compressed version of an empty tar file
-// (1024 NULL bytes)
-var gzippedEmptyTar = []byte{
- 31, 139, 8, 0, 0, 9, 110, 136, 0, 255, 98, 24, 5, 163, 96, 20, 140, 88,
- 0, 8, 0, 0, 255, 255, 46, 175, 181, 239, 0, 4, 0, 0,
-}
-
-// digestSHA256GzippedEmptyTar is the canonical sha256 digest of
-// gzippedEmptyTar
-const digestSHA256GzippedEmptyTar = digest.Digest("sha256:a3ed95caeb02ffe68cdd9fd84406680ae93d633cb16422d00e8a7c22955b46d4")
-
-// configManifestBuilder is a type for constructing manifests from an image
-// configuration and generic descriptors.
-type configManifestBuilder struct {
- // bs is a BlobService used to create empty layer tars in the
- // blob store if necessary.
- bs distribution.BlobService
- // pk is the libtrust private key used to sign the final manifest.
- pk libtrust.PrivateKey
- // configJSON is configuration supplied when the ManifestBuilder was
- // created.
- configJSON []byte
- // ref contains the name and optional tag provided to NewConfigManifestBuilder.
- ref reference.Named
- // descriptors is the set of descriptors referencing the layers.
- descriptors []distribution.Descriptor
- // emptyTarDigest is set to a valid digest if an empty tar has been
- // put in the blob store; otherwise it is empty.
- emptyTarDigest digest.Digest
-}
-
-// NewConfigManifestBuilder is used to build new manifests for the current
-// schema version from an image configuration and a set of descriptors.
-// It takes a BlobService so that it can add an empty tar to the blob store
-// if the resulting manifest needs empty layers.
-func NewConfigManifestBuilder(bs distribution.BlobService, pk libtrust.PrivateKey, ref reference.Named, configJSON []byte) distribution.ManifestBuilder {
- return &configManifestBuilder{
- bs: bs,
- pk: pk,
- configJSON: configJSON,
- ref: ref,
- }
-}
-
-// Build produces a final manifest from the given references
-func (mb *configManifestBuilder) Build(ctx context.Context) (m distribution.Manifest, err error) {
- type imageRootFS struct {
- Type string `json:"type"`
- DiffIDs []diffID `json:"diff_ids,omitempty"`
- BaseLayer string `json:"base_layer,omitempty"`
- }
-
- type imageHistory struct {
- Created time.Time `json:"created"`
- Author string `json:"author,omitempty"`
- CreatedBy string `json:"created_by,omitempty"`
- Comment string `json:"comment,omitempty"`
- EmptyLayer bool `json:"empty_layer,omitempty"`
- }
-
- type imageConfig struct {
- RootFS *imageRootFS `json:"rootfs,omitempty"`
- History []imageHistory `json:"history,omitempty"`
- Architecture string `json:"architecture,omitempty"`
- }
-
- var img imageConfig
-
- if err := json.Unmarshal(mb.configJSON, &img); err != nil {
- return nil, err
- }
-
- if len(img.History) == 0 {
- return nil, errors.New("empty history when trying to create schema1 manifest")
- }
-
- if len(img.RootFS.DiffIDs) != len(mb.descriptors) {
- return nil, fmt.Errorf("number of descriptors and number of layers in rootfs must match: len(%v) != len(%v)", img.RootFS.DiffIDs, mb.descriptors)
- }
-
- // Generate IDs for each layer
- // For non-top-level layers, create fake V1Compatibility strings that
- // fit the format and don't collide with anything else, but don't
- // result in runnable images on their own.
- type v1Compatibility struct {
- ID string `json:"id"`
- Parent string `json:"parent,omitempty"`
- Comment string `json:"comment,omitempty"`
- Created time.Time `json:"created"`
- ContainerConfig struct {
- Cmd []string
- } `json:"container_config,omitempty"`
- Author string `json:"author,omitempty"`
- ThrowAway bool `json:"throwaway,omitempty"`
- }
-
- fsLayerList := make([]FSLayer, len(img.History))
- history := make([]History, len(img.History))
-
- parent := ""
- layerCounter := 0
- for i, h := range img.History[:len(img.History)-1] {
- var blobsum digest.Digest
- if h.EmptyLayer {
- if blobsum, err = mb.emptyTar(ctx); err != nil {
- return nil, err
- }
- } else {
- if len(img.RootFS.DiffIDs) <= layerCounter {
- return nil, errors.New("too many non-empty layers in History section")
- }
- blobsum = mb.descriptors[layerCounter].Digest
- layerCounter++
- }
-
- v1ID := digest.FromBytes([]byte(blobsum.Hex() + " " + parent)).Hex()
-
- if i == 0 && img.RootFS.BaseLayer != "" {
- // windows-only baselayer setup
- baseID := sha512.Sum384([]byte(img.RootFS.BaseLayer))
- parent = fmt.Sprintf("%x", baseID[:32])
- }
-
- v1Compatibility := v1Compatibility{
- ID: v1ID,
- Parent: parent,
- Comment: h.Comment,
- Created: h.Created,
- Author: h.Author,
- }
- v1Compatibility.ContainerConfig.Cmd = []string{img.History[i].CreatedBy}
- if h.EmptyLayer {
- v1Compatibility.ThrowAway = true
- }
- jsonBytes, err := json.Marshal(&v1Compatibility)
- if err != nil {
- return nil, err
- }
-
- reversedIndex := len(img.History) - i - 1
- history[reversedIndex].V1Compatibility = string(jsonBytes)
- fsLayerList[reversedIndex] = FSLayer{BlobSum: blobsum}
-
- parent = v1ID
- }
-
- latestHistory := img.History[len(img.History)-1]
-
- var blobsum digest.Digest
- if latestHistory.EmptyLayer {
- if blobsum, err = mb.emptyTar(ctx); err != nil {
- return nil, err
- }
- } else {
- if len(img.RootFS.DiffIDs) <= layerCounter {
- return nil, errors.New("too many non-empty layers in History section")
- }
- blobsum = mb.descriptors[layerCounter].Digest
- }
-
- fsLayerList[0] = FSLayer{BlobSum: blobsum}
- dgst := digest.FromBytes([]byte(blobsum.Hex() + " " + parent + " " + string(mb.configJSON)))
-
- // Top-level v1compatibility string should be a modified version of the
- // image config.
- transformedConfig, err := MakeV1ConfigFromConfig(mb.configJSON, dgst.Hex(), parent, latestHistory.EmptyLayer)
- if err != nil {
- return nil, err
- }
-
- history[0].V1Compatibility = string(transformedConfig)
-
- tag := ""
- if tagged, isTagged := mb.ref.(reference.Tagged); isTagged {
- tag = tagged.Tag()
- }
-
- mfst := Manifest{
- Versioned: manifest.Versioned{
- SchemaVersion: 1,
- },
- Name: mb.ref.Name(),
- Tag: tag,
- Architecture: img.Architecture,
- FSLayers: fsLayerList,
- History: history,
- }
-
- return Sign(&mfst, mb.pk)
-}
-
-// emptyTar pushes a compressed empty tar to the blob store if one doesn't
-// already exist, and returns its blobsum.
-func (mb *configManifestBuilder) emptyTar(ctx context.Context) (digest.Digest, error) {
- if mb.emptyTarDigest != "" {
- // Already put an empty tar
- return mb.emptyTarDigest, nil
- }
-
- descriptor, err := mb.bs.Stat(ctx, digestSHA256GzippedEmptyTar)
- switch err {
- case nil:
- mb.emptyTarDigest = descriptor.Digest
- return descriptor.Digest, nil
- case distribution.ErrBlobUnknown:
- // nop
- default:
- return "", err
- }
-
- // Add gzipped empty tar to the blob store
- descriptor, err = mb.bs.Put(ctx, "", gzippedEmptyTar)
- if err != nil {
- return "", err
- }
-
- mb.emptyTarDigest = descriptor.Digest
-
- return descriptor.Digest, nil
-}
-
-// AppendReference adds a reference to the current ManifestBuilder
-func (mb *configManifestBuilder) AppendReference(d distribution.Describable) error {
- descriptor := d.Descriptor()
-
- if err := descriptor.Digest.Validate(); err != nil {
- return err
- }
-
- mb.descriptors = append(mb.descriptors, descriptor)
- return nil
-}
-
-// References returns the current references added to this builder
-func (mb *configManifestBuilder) References() []distribution.Descriptor {
- return mb.descriptors
-}
-
-// MakeV1ConfigFromConfig creates a legacy V1 image config from image config JSON.
-func MakeV1ConfigFromConfig(configJSON []byte, v1ID, parentV1ID string, throwaway bool) ([]byte, error) {
- // Top-level v1compatibility string should be a modified version of the
- // image config.
- var configAsMap map[string]*json.RawMessage
- if err := json.Unmarshal(configJSON, &configAsMap); err != nil {
- return nil, err
- }
-
- // Delete fields that didn't exist in old manifest
- delete(configAsMap, "rootfs")
- delete(configAsMap, "history")
- configAsMap["id"] = rawJSON(v1ID)
- if parentV1ID != "" {
- configAsMap["parent"] = rawJSON(parentV1ID)
- }
- if throwaway {
- configAsMap["throwaway"] = rawJSON(true)
- }
-
- return json.Marshal(configAsMap)
-}
-
-func rawJSON(value interface{}) *json.RawMessage {
- jsonval, err := json.Marshal(value)
- if err != nil {
- return nil
- }
- return (*json.RawMessage)(&jsonval)
-}
diff --git a/vendor/github.com/docker/distribution/manifest/schema1/manifest.go b/vendor/github.com/docker/distribution/manifest/schema1/manifest.go
deleted file mode 100644
index 65042a75..00000000
--- a/vendor/github.com/docker/distribution/manifest/schema1/manifest.go
+++ /dev/null
@@ -1,184 +0,0 @@
-package schema1
-
-import (
- "encoding/json"
- "fmt"
-
- "github.com/docker/distribution"
- "github.com/docker/distribution/manifest"
- "github.com/docker/libtrust"
- "github.com/opencontainers/go-digest"
-)
-
-const (
-	// MediaTypeManifest specifies the mediaType for the current version. Note
-	// that for schema version 1, the media type is optionally "application/json".
- MediaTypeManifest = "application/vnd.docker.distribution.manifest.v1+json"
- // MediaTypeSignedManifest specifies the mediatype for current SignedManifest version
- MediaTypeSignedManifest = "application/vnd.docker.distribution.manifest.v1+prettyjws"
- // MediaTypeManifestLayer specifies the media type for manifest layers
- MediaTypeManifestLayer = "application/vnd.docker.container.image.rootfs.diff+x-gtar"
-)
-
-var (
- // SchemaVersion provides a pre-initialized version structure for this
- // packages version of the manifest.
- SchemaVersion = manifest.Versioned{
- SchemaVersion: 1,
- }
-)
-
-func init() {
- schema1Func := func(b []byte) (distribution.Manifest, distribution.Descriptor, error) {
- sm := new(SignedManifest)
- err := sm.UnmarshalJSON(b)
- if err != nil {
- return nil, distribution.Descriptor{}, err
- }
-
- desc := distribution.Descriptor{
- Digest: digest.FromBytes(sm.Canonical),
- Size: int64(len(sm.Canonical)),
- MediaType: MediaTypeSignedManifest,
- }
- return sm, desc, err
- }
- err := distribution.RegisterManifestSchema(MediaTypeSignedManifest, schema1Func)
- if err != nil {
- panic(fmt.Sprintf("Unable to register manifest: %s", err))
- }
- err = distribution.RegisterManifestSchema("", schema1Func)
- if err != nil {
- panic(fmt.Sprintf("Unable to register manifest: %s", err))
- }
- err = distribution.RegisterManifestSchema("application/json", schema1Func)
- if err != nil {
- panic(fmt.Sprintf("Unable to register manifest: %s", err))
- }
-}
-
-// FSLayer is a container struct for BlobSums defined in an image manifest
-type FSLayer struct {
- // BlobSum is the tarsum of the referenced filesystem image layer
- BlobSum digest.Digest `json:"blobSum"`
-}
-
-// History stores unstructured v1 compatibility information
-type History struct {
- // V1Compatibility is the raw v1 compatibility information
- V1Compatibility string `json:"v1Compatibility"`
-}
-
-// Manifest provides the base accessible fields for working with V2 image
-// format in the registry.
-type Manifest struct {
- manifest.Versioned
-
- // Name is the name of the image's repository
- Name string `json:"name"`
-
- // Tag is the tag of the image specified by this manifest
- Tag string `json:"tag"`
-
- // Architecture is the host architecture on which this image is intended to
- // run
- Architecture string `json:"architecture"`
-
- // FSLayers is a list of filesystem layer blobSums contained in this image
- FSLayers []FSLayer `json:"fsLayers"`
-
- // History is a list of unstructured historical data for v1 compatibility
- History []History `json:"history"`
-}
-
-// SignedManifest provides an envelope for a signed image manifest, including
-// the format sensitive raw bytes.
-type SignedManifest struct {
- Manifest
-
- // Canonical is the canonical byte representation of the ImageManifest,
- // without any attached signatures. The manifest byte
- // representation cannot change or it will have to be re-signed.
- Canonical []byte `json:"-"`
-
- // all contains the byte representation of the Manifest including signatures
- // and is returned by Payload()
- all []byte
-}
-
-// UnmarshalJSON populates a new SignedManifest struct from JSON data.
-func (sm *SignedManifest) UnmarshalJSON(b []byte) error {
- sm.all = make([]byte, len(b), len(b))
- // store manifest and signatures in all
- copy(sm.all, b)
-
- jsig, err := libtrust.ParsePrettySignature(b, "signatures")
- if err != nil {
- return err
- }
-
- // Resolve the payload in the manifest.
- bytes, err := jsig.Payload()
- if err != nil {
- return err
- }
-
- // sm.Canonical stores the canonical manifest JSON
- sm.Canonical = make([]byte, len(bytes), len(bytes))
- copy(sm.Canonical, bytes)
-
- // Unmarshal canonical JSON into Manifest object
- var manifest Manifest
- if err := json.Unmarshal(sm.Canonical, &manifest); err != nil {
- return err
- }
-
- sm.Manifest = manifest
-
- return nil
-}
-
-// References returns the descriptors of this manifest's references.
-func (sm SignedManifest) References() []distribution.Descriptor {
- dependencies := make([]distribution.Descriptor, len(sm.FSLayers))
- for i, fsLayer := range sm.FSLayers {
- dependencies[i] = distribution.Descriptor{
- MediaType: "application/vnd.docker.container.image.rootfs.diff+x-gtar",
- Digest: fsLayer.BlobSum,
- }
- }
-
-	return dependencies
-}
-
-// MarshalJSON returns the raw byte content of the signed manifest if it is
-// available; otherwise, it marshals the inner contents. Applications
-// requiring a marshaled signed manifest should use the raw payload directly,
-// since the content produced by json.Marshal will be compacted and will fail
-// signature checks.
-func (sm *SignedManifest) MarshalJSON() ([]byte, error) {
- if len(sm.all) > 0 {
- return sm.all, nil
- }
-
- // If the raw data is not available, just dump the inner content.
- return json.Marshal(&sm.Manifest)
-}
-
-// Payload returns the signed content of the signed manifest.
-func (sm SignedManifest) Payload() (string, []byte, error) {
- return MediaTypeSignedManifest, sm.all, nil
-}
-
-// Signatures returns the signatures as provided by
-// (*libtrust.JSONSignature).Signatures. The byte slices are opaque JWS
-// signatures.
-func (sm *SignedManifest) Signatures() ([][]byte, error) {
- jsig, err := libtrust.ParsePrettySignature(sm.all, "signatures")
- if err != nil {
- return nil, err
- }
-
- // Resolve the payload in the manifest.
- return jsig.Signatures()
-}
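
For orientation, here is a minimal sketch of how the schema1 types above fit together: build a Manifest, sign it with a throwaway libtrust key (Sign is defined in sign.go further down in this diff), then read back the payload and layer references. The repository name, tag, and layer digest are placeholder values, not anything from this codebase.

```go
package main

import (
	"fmt"

	"github.com/docker/distribution/manifest"
	"github.com/docker/distribution/manifest/schema1"
	"github.com/docker/libtrust"
	"github.com/opencontainers/go-digest"
)

func main() {
	// Throwaway signing key; schema1 manifests are JWS-signed.
	pk, err := libtrust.GenerateECP256PrivateKey()
	if err != nil {
		panic(err)
	}

	m := schema1.Manifest{
		Versioned:    manifest.Versioned{SchemaVersion: 1},
		Name:         "library/hello", // placeholder repository
		Tag:          "latest",
		Architecture: "amd64",
		FSLayers:     []schema1.FSLayer{{BlobSum: digest.FromString("layer-0")}},
		History:      []schema1.History{{V1Compatibility: "{}"}},
	}

	// Sign fills Canonical and the raw signed bytes held in all.
	sm, err := schema1.Sign(&m, pk)
	if err != nil {
		panic(err)
	}

	// Payload returns the signed bytes with the schema1 media type;
	// References exposes each fsLayer as a distribution.Descriptor.
	mediaType, payload, _ := sm.Payload()
	fmt.Println(mediaType, len(payload))
	for _, desc := range sm.References() {
		fmt.Println(desc.MediaType, desc.Digest)
	}
}
```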
diff --git a/vendor/github.com/docker/distribution/manifest/schema1/reference_builder.go b/vendor/github.com/docker/distribution/manifest/schema1/reference_builder.go
deleted file mode 100644
index a4f6032c..00000000
--- a/vendor/github.com/docker/distribution/manifest/schema1/reference_builder.go
+++ /dev/null
@@ -1,98 +0,0 @@
-package schema1
-
-import (
- "context"
- "errors"
- "fmt"
-
- "github.com/docker/distribution"
- "github.com/docker/distribution/manifest"
- "github.com/docker/distribution/reference"
- "github.com/docker/libtrust"
- "github.com/opencontainers/go-digest"
-)
-
-// referenceManifestBuilder is a type for constructing manifests from schema1
-// dependencies.
-type referenceManifestBuilder struct {
- Manifest
- pk libtrust.PrivateKey
-}
-
-// NewReferenceManifestBuilder is used to build new manifests for the current
-// schema version using schema1 dependencies.
-func NewReferenceManifestBuilder(pk libtrust.PrivateKey, ref reference.Named, architecture string) distribution.ManifestBuilder {
- tag := ""
- if tagged, isTagged := ref.(reference.Tagged); isTagged {
- tag = tagged.Tag()
- }
-
- return &referenceManifestBuilder{
- Manifest: Manifest{
- Versioned: manifest.Versioned{
- SchemaVersion: 1,
- },
- Name: ref.Name(),
- Tag: tag,
- Architecture: architecture,
- },
- pk: pk,
- }
-}
-
-func (mb *referenceManifestBuilder) Build(ctx context.Context) (distribution.Manifest, error) {
- m := mb.Manifest
- if len(m.FSLayers) == 0 {
- return nil, errors.New("cannot build manifest with zero layers or history")
- }
-
- m.FSLayers = make([]FSLayer, len(mb.Manifest.FSLayers))
- m.History = make([]History, len(mb.Manifest.History))
- copy(m.FSLayers, mb.Manifest.FSLayers)
- copy(m.History, mb.Manifest.History)
-
- return Sign(&m, mb.pk)
-}
-
-// AppendReference adds a reference to the current ManifestBuilder
-func (mb *referenceManifestBuilder) AppendReference(d distribution.Describable) error {
- r, ok := d.(Reference)
- if !ok {
- return errors.New("unable to add non-reference type to v1 builder")
- }
-
- // Entries need to be prepended
- mb.Manifest.FSLayers = append([]FSLayer{{BlobSum: r.Digest}}, mb.Manifest.FSLayers...)
- mb.Manifest.History = append([]History{r.History}, mb.Manifest.History...)
- return nil
-}
-
-// References returns the current references added to this builder
-func (mb *referenceManifestBuilder) References() []distribution.Descriptor {
- refs := make([]distribution.Descriptor, len(mb.Manifest.FSLayers))
- for i := range mb.Manifest.FSLayers {
- layerDigest := mb.Manifest.FSLayers[i].BlobSum
- history := mb.Manifest.History[i]
- ref := Reference{layerDigest, 0, history}
- refs[i] = ref.Descriptor()
- }
- return refs
-}
-
-// Reference describes a manifest v2, schema version 1 dependency: an FSLayer
-// associated with a history entry.
-type Reference struct {
- Digest digest.Digest
- Size int64 // if we know it, set it for the descriptor.
- History History
-}
-
-// Descriptor describes a reference
-func (r Reference) Descriptor() distribution.Descriptor {
- return distribution.Descriptor{
- MediaType: MediaTypeManifestLayer,
- Digest: r.Digest,
- Size: r.Size,
- }
-}
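
A hypothetical use of the reference builder above might look like the following; the repository name, architecture, and layer digests are illustrative only. Note the prepend behavior of AppendReference: the most recently appended layer ends up at fsLayers[0].

```go
package main

import (
	"context"
	"fmt"

	"github.com/docker/distribution/manifest/schema1"
	"github.com/docker/distribution/reference"
	"github.com/docker/libtrust"
	"github.com/opencontainers/go-digest"
)

func main() {
	pk, err := libtrust.GenerateECP256PrivateKey()
	if err != nil {
		panic(err)
	}

	named, err := reference.ParseNormalizedNamed("example/repo:latest")
	if err != nil {
		panic(err)
	}

	mb := schema1.NewReferenceManifestBuilder(pk, named, "amd64")

	// AppendReference prepends, so appending base-first leaves the most
	// recently added layer at fsLayers[0].
	for _, layer := range []string{"base-layer", "top-layer"} {
		err := mb.AppendReference(schema1.Reference{
			Digest:  digest.FromString(layer),
			History: schema1.History{V1Compatibility: "{}"},
		})
		if err != nil {
			panic(err)
		}
	}

	sm, err := mb.Build(context.Background())
	if err != nil {
		panic(err)
	}
	for _, desc := range sm.References() {
		fmt.Println(desc.Digest)
	}
}
```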
diff --git a/vendor/github.com/docker/distribution/manifest/schema1/sign.go b/vendor/github.com/docker/distribution/manifest/schema1/sign.go
deleted file mode 100644
index c862dd81..00000000
--- a/vendor/github.com/docker/distribution/manifest/schema1/sign.go
+++ /dev/null
@@ -1,68 +0,0 @@
-package schema1
-
-import (
- "crypto/x509"
- "encoding/json"
-
- "github.com/docker/libtrust"
-)
-
-// Sign signs the manifest with the provided private key, returning a
-// SignedManifest. This typically won't be used within the registry, except
-// for testing.
-func Sign(m *Manifest, pk libtrust.PrivateKey) (*SignedManifest, error) {
- p, err := json.MarshalIndent(m, "", " ")
- if err != nil {
- return nil, err
- }
-
- js, err := libtrust.NewJSONSignature(p)
- if err != nil {
- return nil, err
- }
-
- if err := js.Sign(pk); err != nil {
- return nil, err
- }
-
- pretty, err := js.PrettySignature("signatures")
- if err != nil {
- return nil, err
- }
-
- return &SignedManifest{
- Manifest: *m,
- all: pretty,
- Canonical: p,
- }, nil
-}
-
-// SignWithChain signs the manifest with the given private key and x509 chain.
-// The public key of the first element in the chain must be the public key
-// corresponding with the sign key.
-func SignWithChain(m *Manifest, key libtrust.PrivateKey, chain []*x509.Certificate) (*SignedManifest, error) {
- p, err := json.MarshalIndent(m, "", " ")
- if err != nil {
- return nil, err
- }
-
- js, err := libtrust.NewJSONSignature(p)
- if err != nil {
- return nil, err
- }
-
- if err := js.SignWithChain(key, chain); err != nil {
- return nil, err
- }
-
- pretty, err := js.PrettySignature("signatures")
- if err != nil {
- return nil, err
- }
-
- return &SignedManifest{
- Manifest: *m,
- all: pretty,
- Canonical: p,
- }, nil
-}
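
Since Sign serializes the manifest with json.MarshalIndent before signing, the Canonical field is exactly that indented payload. A small sketch under that assumption (placeholder names; the indent string must match what Sign passes to MarshalIndent):

```go
package main

import (
	"bytes"
	"encoding/json"
	"fmt"

	"github.com/docker/distribution/manifest"
	"github.com/docker/distribution/manifest/schema1"
	"github.com/docker/libtrust"
)

func main() {
	pk, err := libtrust.GenerateECP256PrivateKey()
	if err != nil {
		panic(err)
	}

	m := schema1.Manifest{
		Versioned: manifest.Versioned{SchemaVersion: 1},
		Name:      "example/repo", // placeholder
		Tag:       "latest",
	}

	sm, err := schema1.Sign(&m, pk)
	if err != nil {
		panic(err)
	}

	// Canonical is the indented payload that the JWS signature covers;
	// compacting it (as plain json.Marshal would) breaks verification.
	indented, err := json.MarshalIndent(&m, "", " ")
	if err != nil {
		panic(err)
	}
	fmt.Println(bytes.Equal(sm.Canonical, indented)) // true
}
```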
diff --git a/vendor/github.com/docker/distribution/manifest/schema1/verify.go b/vendor/github.com/docker/distribution/manifest/schema1/verify.go
deleted file mode 100644
index ef59065c..00000000
--- a/vendor/github.com/docker/distribution/manifest/schema1/verify.go
+++ /dev/null
@@ -1,32 +0,0 @@
-package schema1
-
-import (
- "crypto/x509"
-
- "github.com/docker/libtrust"
- "github.com/sirupsen/logrus"
-)
-
-// Verify verifies the signature of the signed manifest returning the public
-// keys used during signing.
-func Verify(sm *SignedManifest) ([]libtrust.PublicKey, error) {
- js, err := libtrust.ParsePrettySignature(sm.all, "signatures")
- if err != nil {
- logrus.WithField("err", err).Debugf("(*SignedManifest).Verify")
- return nil, err
- }
-
- return js.Verify()
-}
-
-// VerifyChains verifies the signature of the signed manifest against the
-// certificate pool returning the list of verified chains. Signatures without
-// an x509 chain are not checked.
-func VerifyChains(sm *SignedManifest, ca *x509.CertPool) ([][]*x509.Certificate, error) {
- js, err := libtrust.ParsePrettySignature(sm.all, "signatures")
- if err != nil {
- return nil, err
- }
-
- return js.VerifyChains(ca)
-}
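
Verify pairs naturally with Sign: a manifest signed with a freshly generated key should verify against the public half of that key. A hedged sketch with placeholder values:

```go
package main

import (
	"fmt"

	"github.com/docker/distribution/manifest"
	"github.com/docker/distribution/manifest/schema1"
	"github.com/docker/libtrust"
)

func main() {
	pk, err := libtrust.GenerateECP256PrivateKey()
	if err != nil {
		panic(err)
	}

	m := schema1.Manifest{
		Versioned: manifest.Versioned{SchemaVersion: 1},
		Name:      "example/repo", // placeholder
		Tag:       "latest",
	}

	sm, err := schema1.Sign(&m, pk)
	if err != nil {
		panic(err)
	}

	// Verify re-parses the JWS envelope and returns the public keys that
	// produced valid signatures; for a self-signed manifest that is the
	// public half of pk.
	keys, err := schema1.Verify(sm)
	if err != nil {
		panic(err)
	}
	fmt.Println(len(keys), keys[0].KeyID() == pk.PublicKey().KeyID())
}
```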
diff --git a/vendor/github.com/docker/distribution/manifest/schema2/builder.go b/vendor/github.com/docker/distribution/manifest/schema2/builder.go
deleted file mode 100644
index 3facaae6..00000000
--- a/vendor/github.com/docker/distribution/manifest/schema2/builder.go
+++ /dev/null
@@ -1,85 +0,0 @@
-package schema2
-
-import (
- "context"
-
- "github.com/docker/distribution"
- "github.com/opencontainers/go-digest"
-)
-
-// builder is a type for constructing manifests.
-type builder struct {
- // bs is a BlobService used to publish the configuration blob.
- bs distribution.BlobService
-
- // configMediaType is the media type used to describe the configuration blob.
- configMediaType string
-
- // configJSON contains the raw bytes of the configuration blob.
- configJSON []byte
-
- // dependencies is a list of descriptors that gets built by successive
- // calls to AppendReference. In case of image configuration these are layers.
- dependencies []distribution.Descriptor
-}
-
-// NewManifestBuilder is used to build new manifests for the current schema
-// version. It takes a BlobService so it can publish the configuration blob
-// as part of the Build process.
-func NewManifestBuilder(bs distribution.BlobService, configMediaType string, configJSON []byte) distribution.ManifestBuilder {
- mb := &builder{
- bs: bs,
- configMediaType: configMediaType,
- configJSON: make([]byte, len(configJSON)),
- }
- copy(mb.configJSON, configJSON)
-
- return mb
-}
-
-// Build produces a final manifest from the given references.
-func (mb *builder) Build(ctx context.Context) (distribution.Manifest, error) {
- m := Manifest{
- Versioned: SchemaVersion,
- Layers: make([]distribution.Descriptor, len(mb.dependencies)),
- }
- copy(m.Layers, mb.dependencies)
-
- configDigest := digest.FromBytes(mb.configJSON)
-
- var err error
- m.Config, err = mb.bs.Stat(ctx, configDigest)
- switch err {
- case nil:
- // Override MediaType, since Put always replaces the specified media
- // type with application/octet-stream in the descriptor it returns.
- m.Config.MediaType = mb.configMediaType
- return FromStruct(m)
- case distribution.ErrBlobUnknown:
- // nop
- default:
- return nil, err
- }
-
- // Add config to the blob store
- m.Config, err = mb.bs.Put(ctx, mb.configMediaType, mb.configJSON)
- if err != nil {
- return nil, err
- }
- // Override MediaType, since Put always replaces the specified media
- // type with application/octet-stream in the descriptor it returns.
- m.Config.MediaType = mb.configMediaType
-
- return FromStruct(m)
-}
-
-// AppendReference adds a reference to the current ManifestBuilder.
-func (mb *builder) AppendReference(d distribution.Describable) error {
- mb.dependencies = append(mb.dependencies, d.Descriptor())
- return nil
-}
-
-// References returns the current references added to this builder.
-func (mb *builder) References() []distribution.Descriptor {
- return mb.dependencies
-}
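
Exercising this builder requires a distribution.BlobService to hold the config blob. The sketch below stubs one in memory (memBlobs is invented for illustration; only Stat and Put matter to the builder, so the streaming methods simply return ErrUnsupported):

```go
package main

import (
	"context"
	"fmt"

	"github.com/docker/distribution"
	"github.com/docker/distribution/manifest/schema2"
	"github.com/opencontainers/go-digest"
)

// memBlobs is a toy in-memory BlobService, invented for this sketch.
type memBlobs struct{ blobs map[digest.Digest][]byte }

func (m *memBlobs) Stat(ctx context.Context, dgst digest.Digest) (distribution.Descriptor, error) {
	b, ok := m.blobs[dgst]
	if !ok {
		return distribution.Descriptor{}, distribution.ErrBlobUnknown
	}
	return distribution.Descriptor{Digest: dgst, Size: int64(len(b)), MediaType: "application/octet-stream"}, nil
}

func (m *memBlobs) Get(ctx context.Context, dgst digest.Digest) ([]byte, error) {
	b, ok := m.blobs[dgst]
	if !ok {
		return nil, distribution.ErrBlobUnknown
	}
	return b, nil
}

func (m *memBlobs) Open(ctx context.Context, dgst digest.Digest) (distribution.ReadSeekCloser, error) {
	return nil, distribution.ErrUnsupported
}

func (m *memBlobs) Put(ctx context.Context, mediaType string, p []byte) (distribution.Descriptor, error) {
	d := digest.FromBytes(p)
	m.blobs[d] = p
	// Like a real blob store, report application/octet-stream; the
	// builder overrides this with the config media type afterwards.
	return distribution.Descriptor{Digest: d, Size: int64(len(p)), MediaType: "application/octet-stream"}, nil
}

func (m *memBlobs) Create(ctx context.Context, options ...distribution.BlobCreateOption) (distribution.BlobWriter, error) {
	return nil, distribution.ErrUnsupported
}

func (m *memBlobs) Resume(ctx context.Context, id string) (distribution.BlobWriter, error) {
	return nil, distribution.ErrUnsupported
}

func main() {
	bs := &memBlobs{blobs: map[digest.Digest][]byte{}}
	configJSON := []byte(`{"architecture":"amd64"}`) // placeholder config

	mb := schema2.NewManifestBuilder(bs, schema2.MediaTypeImageConfig, configJSON)

	// distribution.Descriptor satisfies Describable, so layer descriptors
	// can be appended directly.
	if err := mb.AppendReference(distribution.Descriptor{
		MediaType: schema2.MediaTypeLayer,
		Digest:    digest.FromString("layer-0"), // placeholder
		Size:      7,
	}); err != nil {
		panic(err)
	}

	m, err := mb.Build(context.Background())
	if err != nil {
		panic(err)
	}
	_, payload, _ := m.Payload()
	fmt.Println(digest.FromBytes(payload))
}
```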
diff --git a/vendor/github.com/docker/distribution/manifest/schema2/manifest.go b/vendor/github.com/docker/distribution/manifest/schema2/manifest.go
deleted file mode 100644
index a2708c75..00000000
--- a/vendor/github.com/docker/distribution/manifest/schema2/manifest.go
+++ /dev/null
@@ -1,138 +0,0 @@
-package schema2
-
-import (
- "encoding/json"
- "errors"
- "fmt"
-
- "github.com/docker/distribution"
- "github.com/docker/distribution/manifest"
- "github.com/opencontainers/go-digest"
-)
-
-const (
- // MediaTypeManifest specifies the mediaType for the current version.
- MediaTypeManifest = "application/vnd.docker.distribution.manifest.v2+json"
-
- // MediaTypeImageConfig specifies the mediaType for the image configuration.
- MediaTypeImageConfig = "application/vnd.docker.container.image.v1+json"
-
- // MediaTypePluginConfig specifies the mediaType for plugin configuration.
- MediaTypePluginConfig = "application/vnd.docker.plugin.v1+json"
-
- // MediaTypeLayer is the mediaType used for layers referenced by the
- // manifest.
- MediaTypeLayer = "application/vnd.docker.image.rootfs.diff.tar.gzip"
-
- // MediaTypeForeignLayer is the mediaType used for layers that must be
- // downloaded from foreign URLs.
- MediaTypeForeignLayer = "application/vnd.docker.image.rootfs.foreign.diff.tar.gzip"
-
- // MediaTypeUncompressedLayer is the mediaType used for layers which
- // are not compressed.
- MediaTypeUncompressedLayer = "application/vnd.docker.image.rootfs.diff.tar"
-)
-
-var (
- // SchemaVersion provides a pre-initialized version structure for this
- // package's version of the manifest.
- SchemaVersion = manifest.Versioned{
- SchemaVersion: 2,
- MediaType: MediaTypeManifest,
- }
-)
-
-func init() {
- schema2Func := func(b []byte) (distribution.Manifest, distribution.Descriptor, error) {
- m := new(DeserializedManifest)
- err := m.UnmarshalJSON(b)
- if err != nil {
- return nil, distribution.Descriptor{}, err
- }
-
- dgst := digest.FromBytes(b)
- return m, distribution.Descriptor{Digest: dgst, Size: int64(len(b)), MediaType: MediaTypeManifest}, err
- }
- err := distribution.RegisterManifestSchema(MediaTypeManifest, schema2Func)
- if err != nil {
- panic(fmt.Sprintf("Unable to register manifest: %s", err))
- }
-}
-
-// Manifest defines a schema2 manifest.
-type Manifest struct {
- manifest.Versioned
-
- // Config references the image configuration as a blob.
- Config distribution.Descriptor `json:"config"`
-
- // Layers lists descriptors for the layers referenced by the
- // configuration.
- Layers []distribution.Descriptor `json:"layers"`
-}
-
-// References returns the descriptors of this manifest's references.
-func (m Manifest) References() []distribution.Descriptor {
- references := make([]distribution.Descriptor, 0, 1+len(m.Layers))
- references = append(references, m.Config)
- references = append(references, m.Layers...)
- return references
-}
-
-// Target returns the target of this manifest.
-func (m Manifest) Target() distribution.Descriptor {
- return m.Config
-}
-
-// DeserializedManifest wraps Manifest with a copy of the original JSON.
-// It satisfies the distribution.Manifest interface.
-type DeserializedManifest struct {
- Manifest
-
- // canonical is the canonical byte representation of the Manifest.
- canonical []byte
-}
-
-// FromStruct takes a Manifest structure, marshals it to JSON, and returns a
-// DeserializedManifest which contains the manifest and its JSON representation.
-func FromStruct(m Manifest) (*DeserializedManifest, error) {
- var deserialized DeserializedManifest
- deserialized.Manifest = m
-
- var err error
- deserialized.canonical, err = json.MarshalIndent(&m, "", " ")
- return &deserialized, err
-}
-
-// UnmarshalJSON populates a new Manifest struct from JSON data.
-func (m *DeserializedManifest) UnmarshalJSON(b []byte) error {
- m.canonical = make([]byte, len(b))
- // store manifest in canonical
- copy(m.canonical, b)
-
- // Unmarshal canonical JSON into Manifest object
- var manifest Manifest
- if err := json.Unmarshal(m.canonical, &manifest); err != nil {
- return err
- }
-
- m.Manifest = manifest
-
- return nil
-}
-
-// MarshalJSON returns the contents of canonical. If canonical is empty,
-// an error is returned.
-func (m *DeserializedManifest) MarshalJSON() ([]byte, error) {
- if len(m.canonical) > 0 {
- return m.canonical, nil
- }
-
- return nil, errors.New("JSON representation not initialized in DeserializedManifest")
-}
-
-// Payload returns the raw content of the manifest. The contents can be used to
-// calculate the content identifier.
-func (m DeserializedManifest) Payload() (string, []byte, error) {
- return m.MediaType, m.canonical, nil
-}
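
A short sketch of the schema2 round-trip guarantees described above, using placeholder digests: FromStruct fixes the canonical bytes, Payload exposes them for content addressing, and UnmarshalJSON preserves them exactly.

```go
package main

import (
	"bytes"
	"encoding/json"
	"fmt"

	"github.com/docker/distribution"
	"github.com/docker/distribution/manifest/schema2"
	"github.com/opencontainers/go-digest"
)

func main() {
	m := schema2.Manifest{
		Versioned: schema2.SchemaVersion,
		Config: distribution.Descriptor{
			MediaType: schema2.MediaTypeImageConfig,
			Digest:    digest.FromString("config"), // placeholder
			Size:      6,
		},
		Layers: []distribution.Descriptor{{
			MediaType: schema2.MediaTypeLayer,
			Digest:    digest.FromString("layer"), // placeholder
			Size:      5,
		}},
	}

	dm, err := schema2.FromStruct(m)
	if err != nil {
		panic(err)
	}

	// The digest of the canonical bytes is the manifest's content address.
	_, canonical, _ := dm.Payload()
	fmt.Println(digest.FromBytes(canonical))

	// Round-tripping through UnmarshalJSON preserves the exact bytes.
	var dm2 schema2.DeserializedManifest
	if err := json.Unmarshal(canonical, &dm2); err != nil {
		panic(err)
	}
	_, canonical2, _ := dm2.Payload()
	fmt.Println(bytes.Equal(canonical, canonical2)) // true
}
```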
diff --git a/vendor/github.com/docker/distribution/manifest/versioned.go b/vendor/github.com/docker/distribution/manifest/versioned.go
deleted file mode 100644
index caa6b14e..00000000
--- a/vendor/github.com/docker/distribution/manifest/versioned.go
+++ /dev/null
@@ -1,12 +0,0 @@
-package manifest
-
-// Versioned provides a struct with the manifest schemaVersion and mediaType.
-// Incoming content with unknown schema version can be decoded against this
-// struct to check the version.
-type Versioned struct {
- // SchemaVersion is the image manifest schema that this image follows
- SchemaVersion int `json:"schemaVersion"`
-
- // MediaType is the media type of this schema.
- MediaType string `json:"mediaType,omitempty"`
-}
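
Versioned is typically used to sniff the schema version before committing to a schema-specific parse; a minimal sketch with an inline sample document:

```go
package main

import (
	"encoding/json"
	"fmt"

	"github.com/docker/distribution/manifest"
)

func main() {
	raw := []byte(`{"schemaVersion": 2, "mediaType": "application/vnd.docker.distribution.manifest.v2+json"}`)

	// Decode only the version fields; unknown fields are ignored, so this
	// is safe to run against any incoming manifest body.
	var v manifest.Versioned
	if err := json.Unmarshal(raw, &v); err != nil {
		panic(err)
	}
	fmt.Println(v.SchemaVersion, v.MediaType)
}
```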
diff --git a/vendor/github.com/docker/distribution/manifests.go b/vendor/github.com/docker/distribution/manifests.go
deleted file mode 100644
index 1816baea..00000000
--- a/vendor/github.com/docker/distribution/manifests.go
+++ /dev/null
@@ -1,125 +0,0 @@
-package distribution
-
-import (
- "context"
- "fmt"
- "mime"
-
- "github.com/opencontainers/go-digest"
-)
-
-// Manifest represents a registry object specifying a set of
-// references and an optional target
-type Manifest interface {
- // References returns a list of objects which make up this manifest.
- // A reference is anything which can be represented by a
- // distribution.Descriptor. These can consist of layers, resources or other
- // manifests.
- //
- // While no particular order is required, implementations should return
- // them from highest to lowest priority. For example, one might want to
- // return the base layer before the top layer.
- References() []Descriptor
-
- // Payload provides the serialized format of the manifest, in addition to
- // the media type.
- Payload() (mediaType string, payload []byte, err error)
-}
-
-// ManifestBuilder creates a manifest allowing one to include dependencies.
-// Instances can be obtained from a version-specific manifest package.
-// Manifest-specific data is passed into the function which creates the builder.
-type ManifestBuilder interface {
- // Build creates the manifest from this builder.
- Build(ctx context.Context) (Manifest, error)
-
- // References returns a list of objects which have been added to this
- // builder. The dependencies are returned in the order they were added,
- // which should be from base to head.
- References() []Descriptor
-
- // AppendReference includes the given object in the manifest after any
- // existing dependencies. If the add fails, such as when adding an
- // unsupported dependency, an error may be returned.
- //
- // The destination of the reference is dependent on the manifest type and
- // the dependency type.
- AppendReference(dependency Describable) error
-}
-
-// ManifestService describes operations on image manifests.
-type ManifestService interface {
- // Exists returns true if the manifest exists.
- Exists(ctx context.Context, dgst digest.Digest) (bool, error)
-
- // Get retrieves the manifest specified by the given digest.
- Get(ctx context.Context, dgst digest.Digest, options ...ManifestServiceOption) (Manifest, error)
-
- // Put creates or updates the given manifest, returning the manifest digest.
- Put(ctx context.Context, manifest Manifest, options ...ManifestServiceOption) (digest.Digest, error)
-
- // Delete removes the manifest specified by the given digest. Deleting
- // a manifest that doesn't exist will return ErrManifestNotFound
- Delete(ctx context.Context, dgst digest.Digest) error
-}
-
-// ManifestEnumerator enables iterating over manifests
-type ManifestEnumerator interface {
- // Enumerate calls ingester for each manifest.
- Enumerate(ctx context.Context, ingester func(digest.Digest) error) error
-}
-
-// Describable is an interface for descriptors
-type Describable interface {
- Descriptor() Descriptor
-}
-
-// ManifestMediaTypes returns the supported media types for manifests.
-func ManifestMediaTypes() (mediaTypes []string) {
- for t := range mappings {
- if t != "" {
- mediaTypes = append(mediaTypes, t)
- }
- }
- return
-}
-
-// UnmarshalFunc implements manifest unmarshalling for a given MediaType.
-type UnmarshalFunc func([]byte) (Manifest, Descriptor, error)
-
-var mappings = make(map[string]UnmarshalFunc)
-
-// UnmarshalManifest looks up manifest unmarshal functions based on
-// MediaType
-func UnmarshalManifest(ctHeader string, p []byte) (Manifest, Descriptor, error) {
- // Need to look up by the actual media type, not the raw contents of
- // the header. Strip semicolons and anything following them.
- var mediaType string
- if ctHeader != "" {
- var err error
- mediaType, _, err = mime.ParseMediaType(ctHeader)
- if err != nil {
- return nil, Descriptor{}, err
- }
- }
-
- unmarshalFunc, ok := mappings[mediaType]
- if !ok {
- unmarshalFunc, ok = mappings[""]
- if !ok {
- return nil, Descriptor{}, fmt.Errorf("unsupported manifest media type and no default available: %s", mediaType)
- }
- }
-
- return unmarshalFunc(p)
-}
-
-// RegisterManifestSchema registers an UnmarshalFunc for a given schema type.
-// This should be called from the init function of a specific manifest
-// package.
-func RegisterManifestSchema(mediaType string, u UnmarshalFunc) error {
- if _, ok := mappings[mediaType]; ok {
- return fmt.Errorf("manifest media type registration would overwrite existing: %s", mediaType)
- }
- mappings[mediaType] = u
- return nil
-}
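
Putting the pieces together: importing a schema package runs its init, which registers the package's UnmarshalFunc through RegisterManifestSchema, and UnmarshalManifest then dispatches on the parsed media type, ignoring parameters such as charset. A sketch with a hand-written sample body:

```go
package main

import (
	"fmt"

	"github.com/docker/distribution"
	// Importing schema2 runs its init, which registers the schema2
	// UnmarshalFunc through RegisterManifestSchema.
	"github.com/docker/distribution/manifest/schema2"
)

func main() {
	// In practice these come from an HTTP response's Content-Type header
	// and body; this sample body is hand-written for illustration.
	contentType := schema2.MediaTypeManifest + "; charset=utf-8"
	body := []byte(`{"schemaVersion":2,"mediaType":"` + schema2.MediaTypeManifest + `","config":{},"layers":[]}`)

	// UnmarshalManifest parses the media type (dropping parameters such
	// as charset) and dispatches to the registered UnmarshalFunc.
	m, desc, err := distribution.UnmarshalManifest(contentType, body)
	if err != nil {
		panic(err)
	}
	fmt.Println(desc.MediaType, desc.Digest, len(m.References()))
}
```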
diff --git a/vendor/github.com/docker/distribution/reference/helpers.go b/vendor/github.com/docker/distribution/reference/helpers.go
deleted file mode 100644
index 978df7ea..00000000
--- a/vendor/github.com/docker/distribution/reference/helpers.go
+++ /dev/null
@@ -1,42 +0,0 @@
-package reference
-
-import "path"
-
-// IsNameOnly returns true if the reference contains only a repo name.
-func IsNameOnly(ref Named) bool {
- if _, ok := ref.(NamedTagged); ok {
- return false
- }
- if _, ok := ref.(Canonical); ok {
- return false
- }
- return true
-}
-
-// FamiliarName returns the familiar name string
-// for the given named reference, familiarizing if needed.
-func FamiliarName(ref Named) string {
- if nn, ok := ref.(normalizedNamed); ok {
- return nn.Familiar().Name()
- }
- return ref.Name()
-}
-
-// FamiliarString returns the familiar string representation
-// for the given reference, familiarizing if needed.
-func FamiliarString(ref Reference) string {
- if nn, ok := ref.(normalizedNamed); ok {
- return nn.Familiar().String()
- }
- return ref.String()
-}
-
-// FamiliarMatch reports whether ref matches the specified pattern.
-// See https://godoc.org/path#Match for supported patterns.
-func FamiliarMatch(pattern string, ref Reference) (bool, error) {
- matched, err := path.Match(pattern, FamiliarString(ref))
- if namedRef, isNamed := ref.(Named); isNamed && !matched {
- matched, _ = path.Match(pattern, FamiliarName(namedRef))
- }
- return matched, err
-}
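
These helpers are easiest to see against a normalized reference (ParseNormalizedNamed is defined in normalize.go below); the image name and pattern here are arbitrary examples:

```go
package main

import (
	"fmt"

	"github.com/docker/distribution/reference"
)

func main() {
	ref, err := reference.ParseNormalizedNamed("redis:7")
	if err != nil {
		panic(err)
	}

	// The normalized form is fully qualified; the familiar form is what
	// users type.
	fmt.Println(ref.String())                  // docker.io/library/redis:7
	fmt.Println(reference.FamiliarString(ref)) // redis:7

	// FamiliarMatch applies path.Match patterns to the familiar form.
	ok, err := reference.FamiliarMatch("redis*", ref)
	if err != nil {
		panic(err)
	}
	fmt.Println(ok) // true
}
```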
diff --git a/vendor/github.com/docker/distribution/reference/normalize.go b/vendor/github.com/docker/distribution/reference/normalize.go
deleted file mode 100644
index 2d71fc5e..00000000
--- a/vendor/github.com/docker/distribution/reference/normalize.go
+++ /dev/null
@@ -1,170 +0,0 @@
-package reference
-
-import (
- "errors"
- "fmt"
- "strings"
-
- "github.com/docker/distribution/digestset"
- "github.com/opencontainers/go-digest"
-)
-
-var (
- legacyDefaultDomain = "index.docker.io"
- defaultDomain = "docker.io"
- officialRepoName = "library"
- defaultTag = "latest"
-)
-
-// normalizedNamed represents a name which has been
-// normalized and has a familiar form. A familiar name
-// is what is used in the Docker UI. An example normalized
-// name is "docker.io/library/ubuntu", with the corresponding
-// familiar name "ubuntu".
-type normalizedNamed interface {
- Named
- Familiar() Named
-}
-
-// ParseNormalizedNamed parses a string into a named reference
-// transforming a familiar name from Docker UI to a fully
-// qualified reference. If the value may be an identifier,
-// use ParseAnyReference.
-func ParseNormalizedNamed(s string) (Named, error) {
- if ok := anchoredIdentifierRegexp.MatchString(s); ok {
- return nil, fmt.Errorf("invalid repository name (%s), cannot specify 64-byte hexadecimal strings", s)
- }
- domain, remainder := splitDockerDomain(s)
- var remoteName string
- if tagSep := strings.IndexRune(remainder, ':'); tagSep > -1 {
- remoteName = remainder[:tagSep]
- } else {
- remoteName = remainder
- }
- if strings.ToLower(remoteName) != remoteName {
- return nil, errors.New("invalid reference format: repository name must be lowercase")
- }
-
- ref, err := Parse(domain + "/" + remainder)
- if err != nil {
- return nil, err
- }
- named, isNamed := ref.(Named)
- if !isNamed {
- return nil, fmt.Errorf("reference %s has no name", ref.String())
- }
- return named, nil
-}
-
-// splitDockerDomain splits a repository name into domain and remote-name
-// strings. If no valid domain is found, the default domain is used. The
-// repository name must already be validated.
-func splitDockerDomain(name string) (domain, remainder string) {
- i := strings.IndexRune(name, '/')
- if i == -1 || (!strings.ContainsAny(name[:i], ".:") && name[:i] != "localhost") {
- domain, remainder = defaultDomain, name
- } else {
- domain, remainder = name[:i], name[i+1:]
- }
- if domain == legacyDefaultDomain {
- domain = defaultDomain
- }
- if domain == defaultDomain && !strings.ContainsRune(remainder, '/') {
- remainder = officialRepoName + "/" + remainder
- }
- return
-}
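
Although splitDockerDomain is unexported, its defaulting rules are observable through ParseNormalizedNamed. A sketch over a few representative inputs (the example registry host is a placeholder):

```go
package main

import (
	"fmt"

	"github.com/docker/distribution/reference"
)

func main() {
	for _, s := range []string{
		"ubuntu",                        // -> docker.io/library/ubuntu
		"dmcgowan/myapp",                // -> docker.io/dmcgowan/myapp
		"index.docker.io/app",           // legacy domain -> docker.io/library/app
		"localhost/app",                 // localhost counts as a domain -> localhost/app
		"registry.example.com:5000/app", // explicit domain is kept as-is
	} {
		ref, err := reference.ParseNormalizedNamed(s)
		if err != nil {
			panic(err)
		}
		fmt.Printf("%-32s -> %s\n", s, ref.Name())
	}
}
```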
-
-// familiarizeName returns a shortened version of the name familiar
-// to the Docker UI. Familiar names have the default domain
-// "docker.io" and the "library/" repository prefix removed.
-// For example, "docker.io/library/redis" will have the familiar
-// name "redis" and "docker.io/dmcgowan/myapp" will be "dmcgowan/myapp".
-// Returns a familiarized name-only reference.
-func familiarizeName(named namedRepository) repository {
- repo := repository{
- domain: named.Domain(),
- path: named.Path(),
- }
-
- if repo.domain == defaultDomain {
- repo.domain = ""
- // Handle official repositories which have the pattern "library/