feat: Add mapping table for encoding subject into UUIDs #809
Conversation
@zepatrik the Cockroach migration test failed with …
Thanks, this is a good starting point but I think some requirements were not clearly defined in the issue. Now that I see the first implementation iteration, I clarified some parts 😉
@@ -0,0 +1,10 @@
CREATE TABLE keto_uuid_mappings
We use https://github.com/gobuffalo/fizz as a DDL tool to create migrations. See other example create-table definitions: https://github.com/ory/kratos/blob/c455fbf2f4792cb3fd2f846a5e699e837d8c98a4/persistence/sql/migrations/templates/20191100000001_identities.up.fizz
You can then run `make migrations-render-replace` to render the fizz templates to SQL files.
Further, we will need to migrate the existing relation tuple table as well. If the ID in there is already a UUID (maybe use a regex to find those?), nothing has to be done. If it is not, we will have to add an entry to this mapping table. That migration process needs good tests to ensure everything works. Migration tests are here: https://github.com/ory/keto/blob/fabf1a06d90139f6678fc7bc1703ee92a5fc9d69/internal/persistence/sql/migrations/migratest/migration_test.go
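As a rough illustration of that detection step, a minimal sketch in Go (the regex, package, and helper name are my own, not taken from this PR):

```go
package migrations

import "regexp"

// uuidPattern matches the canonical 8-4-4-4-12 hex form. This is only a
// sketch of the "regex to find those" idea from the comment above, not the
// PR's actual migration code.
var uuidPattern = regexp.MustCompile(
	`^[0-9a-fA-F]{8}-[0-9a-fA-F]{4}-[0-9a-fA-F]{4}-[0-9a-fA-F]{4}-[0-9a-fA-F]{12}$`)

// needsMapping reports whether an existing subject/object ID would have to
// be rewritten through the new keto_uuid_mappings table.
func needsMapping(id string) bool {
	return !uuidPattern.MatchString(id)
}
```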
Force-pushed from 8966431 to 17c8489
Thanks for the review. I addressed most of the points (I think), but I am still not sure where the best place is to actually do the migration, as well as to add the business logic that inserts a UUID into the mapping table whenever a non-UUID subject ID should be inserted.
Force-pushed from c3ad030 to f519a5c
@zepatrik friendly ping :)
@hperl thanks for addressing the feedback 👍 and sorry for the late reply 🙈
I have moved the mapping manager test to the right location; I think it is easier to show by commit than to explain 😉
This is now pretty much what I expected as the first draft.
For the migration, we would ideally implement that in SQL. If it gets too complicated there (especially for cross-DB compatibility), we can also do it in Go. There is already one migration process that runs in Go, the `SingleTableMigrator`. You can give it a try in SQL if you want 😉
In all the `handlers/...` we would then change the subject and object fields to be one of `subject_uuid` or `subject_string`. In the latter case, we would apply the mapping. But I would do that in a separate follow-up PR if you don't want to invest all that work.
Oh, and btw: for that we will also need a string-to-UUID function, and therefore also a unique index on the string column. It would be great if you could add that in this PR.
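A hypothetical shape for that string-to-UUID function, just to make the request concrete (interface and method names are mine, not the PR's):

```go
package uuidmapping

import (
	"context"

	"github.com/gofrs/uuid"
)

// Manager is a sketch of the lookup asked for above; the actual PR may use
// different names and signatures.
type Manager interface {
	// ToUUID returns the UUID for s, inserting a row into keto_uuid_mappings
	// (relying on the unique index on the string column) if none exists yet.
	ToUUID(ctx context.Context, s string) (uuid.UUID, error)
	// FromUUID is the reverse lookup, used when returning tuples to clients.
	FromUUID(ctx context.Context, id uuid.UUID) (string, error)
}
```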
@zepatrik, thanks for moving stuff to the correct place. It feels a little strange to have the test function defined outside of a … As for the SQL migration, that would still go into 20220110200400_create-uuid-mapping-table.up.sql, correct? (Btw, I quickly added the index in the migration.)
A new table was added that stores a mapping between an arbitrary string and a UUIDv5 of that string, namespaced by a namespace UUID that the user can set in the namespace configuration.
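That UUIDv5 derivation can be sketched with github.com/gofrs/uuid, which this PR discussion already references; the namespace value and subject string below are made up for illustration:

```go
package main

import (
	"fmt"

	"github.com/gofrs/uuid"
)

func main() {
	// Hypothetical namespace UUID; in Keto this would come from the
	// namespace configuration, as described above.
	namespace := uuid.Must(uuid.FromString("6ba7b810-9dad-11d1-80b4-00c04fd430c8"))

	// UUIDv5 is deterministic: the same namespace and string always yield
	// the same UUID, so the mapping can be recomputed from the string.
	fmt.Println(uuid.NewV5(namespace, "user:alice@example.com"))
}
```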
Force-pushed from e915f87 to bd9b2e0
We use this pattern for the persister tests only, the main reason being that we can avoid circular dependencies with it. Also, we don't usually spin up all databases for all tests, but only have this one persister test that uses ALL supported databases. And yes, you can try to add the migration in that file 😉
Having thought about writing the migration in SQL, I think Go is the better choice. The migration would need to parse a UUID that conforms to the Go implementation (https://pkg.go.dev/github.com/gofrs/uuid?utm_source=gopls#UUID.UnmarshalText) as well as generate UUIDs (and at least MySQL generates only v1 UUIDs, which are not random, see https://dev.mysql.com/doc/refman/8.0/en/miscellaneous-functions.html#function_uuid). With both generation and parsing being hard to do consistently across SQL dialects, I'd go with Go. @zepatrik, WDYT?
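For the parsing half of that argument, the Go-side check could be as small as this sketch (the helper name is mine; uuid.FromString applies the same parsing rules as UUID.UnmarshalText):

```go
package migrations

import "github.com/gofrs/uuid"

// isUUID reports whether s parses as a UUID under the same rules the Go
// code base uses (gofrs/uuid). Only a sketch of the check discussed above.
func isUUID(s string) bool {
	_, err := uuid.FromString(s)
	return err == nil
}
```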
Force-pushed from 6d3254d to 78d153e
If we're mapping subjects to UUIDs, why not increase the supported subject length as well? We're looking at using somewhat complex subject strings which would benefit from being longer than 64 characters.
@rdurell sure, I can increase it to 1024. Do you have more information on typical subject ID sizes?
Force-pushed from c66f85b to 2af3d91
Force-pushed from 2af3d91 to fb7af04
Sounds reasonable 👍
Definitely, maybe we can even make the strings unbounded? I know that at least Postgres and CockroachDB don't really care. If you happen to store big strings, you will face the performance implications of that, but we should not prevent you from doing so.
Thank you, just some remarks on the migrator.
SubjectSetRelation: dbsql.NullString{String: "rel", Valid: true},
CommitTime:         time.Now(),
},
expectMapping: false,
Suggested change:
- expectMapping: false,
+ expectMapping: true,
We can either use a … For storing arbitrarily large strings, I would not want to put an index on them or even look up by the string representation then, but either: …
Maybe we can quickly go over the last open points of the design in-band; I think we could resolve them quickly.
Force-pushed from afb5c7d to 7438c28
Force-pushed from 4dcdfda to 7438c28
I am really sorry for the long review time. This looks very good now.
IMO we could merge it as-is and open new PRs to finish the feature? That would, however, block us from creating releases...
Alternatively, we can create a feature branch that we then open more PRs against, and in the end rebase master onto it.
assert.Equal(t, tc.rt, newRt)
}
})
}
I would like to add a test case here that ensures the pagination works. There is a wrapper for the relation tuple manager where you can override the default pagination:
keto/internal/relationtuple/definitions.go
Lines 644 to 660 in 1e15176
type ManagerWrapper struct {
	Reg            ManagerProvider
	PageOpts       []x.PaginationOptionSetter
	RequestedPages []string
}

var (
	_ Manager         = (*ManagerWrapper)(nil)
	_ ManagerProvider = (*ManagerWrapper)(nil)
)

func NewManagerWrapper(_ *testing.T, reg ManagerProvider, options ...x.PaginationOptionSetter) *ManagerWrapper {
	return &ManagerWrapper{
		Reg:      reg,
		PageOpts: options,
	}
}
You can initialize and use it in the test similar to
keto/internal/check/engine_test.go
Lines 34 to 42 in 1e15176
reg := driver.NewSqliteTestRegistry(t, false)
require.NoError(t, reg.Config().Set(config.KeyNamespaces, namespaces))
mr := relationtuple.NewManagerWrapper(t, reg, pageOpts...)

return &deps{
	ManagerWrapper: mr,
	configProvider: reg,
	loggerProvider: reg,
}
keto/internal/check/engine_test.go
Line 441 in 1e15176
t.Run("case=paginates", func(t *testing.T) { |
Thanks, I added a test for pagination. I did not use the `ManagerWrapper`, because I was not retrieving the tuples via the manager anyway, but I used that code as inspiration nonetheless 😉
string_representation TEXT NOT NULL CHECK (string_representation <> ''),

PRIMARY KEY (id)
);
I just realized that we of course have to change the column types in the relation tuple table to UUID as well. I am afraid that we have to create a new table and insert into that instead of the old one. Or do you have other ideas?
Btw, the migrator we use can theoretically be extended to support Go migrations; I think that would definitely make sense. Then we also don't need the command 🤔
What do you think about that, @aeneasr?
Creating a new table might not be necessary. Assuming that `object` and `subject_id` already hold UUIDs, we can, in Postgres, do:
`ALTER TABLE keto_relation_tuples ALTER object TYPE UUID USING object::uuid;`
(and similarly for `subject_id`). MySQL does not have a UUID type, so we will stick with `VARCHAR(64)`. The same statement works in CockroachDB when using `SET enable_experimental_alter_column_type_general = true`.
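If this ends up as a Go migration after all, the per-dialect handling described here could look roughly like the following sketch on top of gobuffalo/pop (function name, error handling, and the v6 import path are assumptions, not the PR's code):

```go
package migrations

import "github.com/gobuffalo/pop/v6"

// alterObjectColumnToUUID sketches the dialect switch discussed above; a
// real migration would also handle subject_id and run inside the migrator.
func alterObjectColumnToUUID(c *pop.Connection) error {
	switch c.Dialect.Name() {
	case "cockroach":
		if err := c.RawQuery(
			"SET enable_experimental_alter_column_type_general = true").Exec(); err != nil {
			return err
		}
		fallthrough
	case "postgres":
		return c.RawQuery(
			"ALTER TABLE keto_relation_tuples ALTER object TYPE UUID USING object::uuid").Exec()
	default:
		// MySQL has no UUID type, so the column stays VARCHAR(64).
		return nil
	}
}
```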
I'm not a big fan of long-lived feature branches, but I don't know the release process well enough. If we could prevent the migration from running, nothing user-visible would change. Is that an option?
Agreed, but we can only easily release master as-is, so this would block us. You are right that it currently does not affect the previous code, but it will once we start integrating the mapping into the handlers, ...
Co-authored-by: Patrik <[email protected]>
With this change Keto now maps strings to UUIDv5 on the storage layer. This allows unlimited strings to be used while maintaining good performance. Further, it reduces the likelihood of database hot-spots. The migration that applies this mapping might take some time, so please confirm that your migration strategy works for you.

BREAKING CHANGE: `keto namespace migrate ...` commands were removed. To migrate from v0.6.0-alpha.1, please first migrate the legacy namespaces using v0.8.0-alpha.2.

BREAKING CHANGE: The protobuf API was bumped to `v1alpha2`. Please upgrade your client dependency to that version. `v1alpha1` is still supported for now, but might be dropped soon.

BREAKING CHANGE: Some payload keys are now required that were not before, and others are no longer required. The generated SDKs will likely have breaking changes.

Co-authored-by: Patrik <[email protected]>
Co-authored-by: hperl <[email protected]>
This PR contains the first work towards supporting automatic subject/object encoding into UUIDs, as discussed in #792.
Related issue(s)
#792
Checklist
- I have referenced an issue containing the design document if my change introduces a new feature.
- I am following the contributing code guidelines.
- I confirm that this pull request does not address a security vulnerability. If this pull request addresses a security vulnerability, I confirm that I got green light (please contact [email protected]) from the maintainers to push the changes.
- I have added tests that prove my fix is effective or that my feature works.
Future work
In order to complete the work for #792, the following items still need to be addressed (in this or future PRs):