Skip to content
New issue

Have a question about this project? Sign up for a free GitHub account to open an issue and contact its maintainers and the community.

By clicking “Sign up for GitHub”, you agree to our terms of service and privacy statement. We’ll occasionally send you account related emails.

Already on GitHub? Sign in to your account

tuf: improve TUF client concurrency and caching #1953

Merged
merged 5 commits into from
Jun 7, 2022
Merged
Show file tree
Hide file tree
Changes from 1 commit
Commits
File filter

Filter by extension

Filter by extension

Conversations
Failed to load comments.
Loading
Jump to
Jump to file
Failed to load files.
Loading
Diff view
Diff view
117 changes: 55 additions & 62 deletions pkg/cosign/tuf/client.go
Original file line number Diff line number Diff line change
Expand Up @@ -50,8 +50,8 @@ const (
var (
// singletonTUF holds a single instance of TUF that will get reused on
// subsequent invocations of initializeTUF
singletonTUF *TUF
singletonTUFMu sync.Mutex
singletonTUF *TUF
singletonTUFOnce = new(sync.Once)
Copy link
Contributor

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

Wow, I actually didn't know that go had allocation via new.

I don't mind this approach, but I found that someone else had the same problem and made sync but with Reset - https://github.com/matryer/resync

Copy link
Contributor Author

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

yeah i saw this as well and ideally wanted to avoid a totally new dependency just for testing -- I'm not sure if there's a big downside to this approach though.

)

var GetRemoteRoot = func() string {
Expand Down Expand Up @@ -109,9 +109,7 @@ type remoteCache struct {
}

func resetForTests() {
singletonTUFMu.Lock()
singletonTUF = nil
singletonTUFMu.Unlock()
singletonTUFOnce = new(sync.Once)
}

func getExpiration(metadata []byte) (*time.Time, error) {
Expand Down Expand Up @@ -235,66 +233,62 @@ func GetRootStatus(ctx context.Context) (*RootStatus, error) {
// targets in a targets/ subfolder.
// * forceUpdate: indicates checking the remote for an update, even when the local
// timestamp.json is up to date.
func initializeTUF(ctx context.Context, mirror string, root []byte, embedded fs.FS, forceUpdate bool) (*TUF, error) {
singletonTUFMu.Lock()
defer singletonTUFMu.Unlock()
if singletonTUF != nil {
return singletonTUF, nil
}
func initializeTUF(ctx context.Context, mirror string, root []byte, embedded fs.FS, forceUpdate bool) *TUF {
singletonTUFOnce.Do(func() {
t := &TUF{
mirror: mirror,
embedded: embedded,
}

t := &TUF{
mirror: mirror,
embedded: embedded,
}
t.targets = newFileImpl()
var err error
t.local, err = newLocalStore()
if err != nil {
panic(err)
Copy link
Contributor

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

Darn, is this the consequence of sync.Once? If so, this isn't viable imo, we shouldn't add a panic into the client because if a server starts depending on it, it opens up an opportunity for query of deaths. Is there a way to return an error? If not, what you had before would be fine.

Copy link
Contributor

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

+1 on not panicing.

Copy link
Contributor

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

Could do a global variable - https://stackoverflow.com/questions/42069615/too-many-arguments-to-return-error

Just ran into this same issue in fulcioroots

Copy link
Contributor Author

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

Using the global!

}

t.targets = newFileImpl()
var err error
t.local, err = newLocalStore()
if err != nil {
return nil, err
}
t.remote, err = remoteFromMirror(ctx, t.mirror)
if err != nil {
panic(err)
}

t.remote, err = remoteFromMirror(ctx, t.mirror)
if err != nil {
return nil, err
}
t.client = client.NewClient(t.local, t.remote)

t.client = client.NewClient(t.local, t.remote)
trustedMeta, err := t.local.GetMeta()
if err != nil {
panic(fmt.Errorf("getting trusted meta: %w", err))
}

trustedMeta, err := t.local.GetMeta()
if err != nil {
return nil, fmt.Errorf("getting trusted meta: %w", err)
}
// If the caller does not supply a root, then either use the root in the local store
// or default to the embedded one.
if root == nil {
root, err = getRoot(trustedMeta, t.embedded)
if err != nil {
panic(fmt.Errorf("getting trusted root: %w", err))
}
}

// If the caller does not supply a root, then either use the root in the local store
// or default to the embedded one.
if root == nil {
root, err = getRoot(trustedMeta, t.embedded)
if err != nil {
return nil, fmt.Errorf("getting trusted root: %w", err)
if err := t.client.InitLocal(root); err != nil {
panic(fmt.Errorf("unable to initialize client, local cache may be corrupt: %w", err))
}
}

if err := t.client.InitLocal(root); err != nil {
return nil, fmt.Errorf("unable to initialize client, local cache may be corrupt: %w", err)
}
// We may already have an up-to-date local store! Check to see if it needs to be updated.
trustedTimestamp, ok := trustedMeta["timestamp.json"]
if ok && !isExpiredTimestamp(trustedTimestamp) && !forceUpdate {
// We're golden so stash the TUF object for later use
singletonTUF = t
return
Copy link
Contributor

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

Is this returning nil?

Copy link
Contributor Author

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

It's returning out of the sync.Once func, but we still return singletonTUF from initializeTUF

}

// Update if local is not populated or out of date.
if err := t.updateMetadataAndDownloadTargets(); err != nil {
panic(fmt.Errorf("updating local metadata and targets: %w", err))
}

// We may already have an up-to-date local store! Check to see if it needs to be updated.
trustedTimestamp, ok := trustedMeta["timestamp.json"]
if ok && !isExpiredTimestamp(trustedTimestamp) && !forceUpdate {
// We're golden so stash the TUF object for later use
singletonTUF = t
return t, nil
}

// Update if local is not populated or out of date.
if err := t.updateMetadataAndDownloadTargets(); err != nil {
return nil, fmt.Errorf("updating local metadata and targets: %w", err)
}

// We're golden so stash the TUF object for later use
singletonTUF = t
return t, err
})
return singletonTUF
}

func NewFromEnv(ctx context.Context) (*TUF, error) {
Expand All @@ -309,14 +303,12 @@ func NewFromEnv(ctx context.Context) (*TUF, error) {
}

// Initializes a new TUF object from the local cache or defaults.
return initializeTUF(ctx, mirror, nil, GetEmbedded(), false)
return initializeTUF(ctx, mirror, nil, GetEmbedded(), false), nil
}

func Initialize(ctx context.Context, mirror string, root []byte) error {
asraa marked this conversation as resolved.
Show resolved Hide resolved
// Initialize the client. Force an update.
if _, err := initializeTUF(ctx, mirror, root, GetEmbedded(), true); err != nil {
return err
}
// Initialize the client. Force an update with remote.
_ = initializeTUF(ctx, mirror, root, GetEmbedded(), true)

// Store the remote for later if we are caching.
if !noCache() {
Expand Down Expand Up @@ -445,7 +437,7 @@ func (t *TUF) updateClient() (data.TargetFiles, error) {
if noCache() {
return targets, nil
}
// Sync the in-memory local store to the on-disk cache.
// Sync the on-disk cache with the metadata from the in-memory store.
tufDB := filepath.FromSlash(filepath.Join(rootCacheDir(), "tuf.db"))
diskLocal, err := tuf_leveldbstore.FileLocalStore(tufDB)
Copy link
Contributor

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

Is this API expected to be used, or are we reaching into private APIs? I just wanna make sure this is maintainable.

Copy link
Contributor Author

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

Yeah, FileLocalStore is meant to be used!

defer func() {
Expand Down Expand Up @@ -563,7 +555,7 @@ func cachedTargetsDir(cacheRoot string) string {
}

func syncLocalMeta(from client.LocalStore, to client.LocalStore) error {
// Otherwise populate the in-memory local store with data fetched from the cache.
// Copy trusted metadata in the from LocalStore into the to LocalStore.
tufLocalStoreMeta, err := from.GetMeta()
if err != nil {
return fmt.Errorf("getting metadata to sync: %w", err)
Expand Down Expand Up @@ -593,6 +585,7 @@ func newLocalStore() (client.LocalStore, error) {
if err != nil {
return nil, fmt.Errorf("creating cached local store: %w", err)
}
// Populate the in-memory local store with data fetched from the cache.
if err := syncLocalMeta(diskLocal, local); err != nil {
return nil, err
}
Expand Down Expand Up @@ -652,7 +645,7 @@ func (m *memoryCache) Get(p string) ([]byte, error) {
type diskCache struct {
// Base directory for accessing targets.
base string
// An in-memory map of targets that are kept in synced.
// An in-memory map of targets that are kept in sync.
memory *memoryCache
}

Expand Down
17 changes: 14 additions & 3 deletions pkg/cosign/tuf/client_test.go
Original file line number Diff line number Diff line change
Expand Up @@ -216,16 +216,27 @@ func TestCustomRoot(t *testing.T) {
return true
}

if _, err = NewFromEnv(ctx); err == nil {
t.Errorf("expected expired timestamp from the remote")
}
// Force a panic error that the remote metadata is expired.
func() {
defer func() {
if r := recover(); r == nil {
t.Errorf("NewFromEnv with expired remote metadata should have panicked!")
}
}()
// This should cause a panic
if _, err = NewFromEnv(ctx); err == nil {
t.Errorf("expected expired timestamp from the remote")
}
}()

// Let internal TUF verification succeed normally now.
verify.IsExpired = oldIsExpired

// Update remote targets, issue a timestamp v2.
updateTufRepo(t, td, r, "foo1")

// Use newTuf and successfully get updated metadata using the cached remote location.
resetForTests()
tufObj, err = NewFromEnv(ctx)
if err != nil {
t.Fatal(err)
Expand Down
16 changes: 6 additions & 10 deletions pkg/cosign/verify_test.go
Original file line number Diff line number Diff line change
Expand Up @@ -42,7 +42,6 @@ import (
"github.com/secure-systems-lab/go-securesystemslib/dsse"
"github.com/sigstore/cosign/internal/pkg/cosign/rekor/mock"
"github.com/sigstore/cosign/pkg/cosign/bundle"
ctuf "github.com/sigstore/cosign/pkg/cosign/tuf"
"github.com/sigstore/cosign/pkg/oci/static"
"github.com/sigstore/cosign/pkg/types"
"github.com/sigstore/cosign/test"
Expand Down Expand Up @@ -225,11 +224,6 @@ func TestVerifyImageSignatureWithNoChain(t *testing.T) {
if err != nil {
t.Fatalf("creating signer: %v", err)
}
testSigstoreRoot := ctuf.TestSigstoreRoot{
Rekor: sv,
FulcioCertificate: rootCert,
}
_, _ = ctuf.NewSigstoreTufRepo(t, testSigstoreRoot)

leafCert, privKey, _ := test.GenerateLeafCert("subject", "oidc-issuer", rootCert, rootKey)
pemLeaf := pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: leafCert.Raw})
Expand All @@ -250,12 +244,14 @@ func TestVerifyImageSignatureWithNoChain(t *testing.T) {
opts := []static.Option{static.WithCertChain(pemLeaf, []byte{}), static.WithBundle(rekorBundle)}
ociSig, _ := static.NewSignature(payload, base64.StdEncoding.EncodeToString(signature), opts...)

// TODO(asraa): Re-enable passing test when Rekor public keys can be set in CheckOpts,
// instead of relying on the singleton TUF instance.
verified, err := VerifyImageSignature(context.TODO(), ociSig, v1.Hash{}, &CheckOpts{RootCerts: rootPool})
if err != nil {
t.Fatalf("unexpected error while verifying signature, expected no error, got %v", err)
if err == nil {
t.Fatalf("expected error due to custom Rekor public key")
}
if verified == false {
t.Fatalf("expected verified=true, got verified=false")
if verified == true {
t.Fatalf("expected verified=false, got verified=true")
}
}

Expand Down