Is your feature request related to a problem? Please describe.
We need to make sure that checks that expand subject sets do not get stuck in an infinite loop when a tuple closes a cycle through a SubjectSet pointing to an object#relation that is already known in the same request context.
I think we should not try to analyze and prevent such cycles upfront at tuple write time, since that analysis could add very high latency: multiple database lookups would be required for each new tuple insertion.
Instead we can propagate a context that includes the already traversed object#relation subjects, so that nested checks are deduplicated and cyclic traversal is cut short.
...except that the “Leopard” secondary indexing system will already help prevent such loops in the first place.
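To make the propagated-context idea concrete, here is a minimal sketch in Go, assuming a hypothetical in-memory tuple store and a simplified `check` function (the real Keto engine differs): the set of already traversed object#relation pairs travels with the request, so a tuple that closes a cycle ends the recursion instead of looping forever.

```go
package main

import (
	"fmt"
	"strings"
)

// tuples is hypothetical in-memory relation-tuple data: each object#relation
// maps to its direct subjects, which are either plain users or further
// "object#relation" subject sets.
var tuples = map[string][]string{
	"doc:readme#viewer": {"group:devs#member"},
	"group:devs#member": {"doc:readme#viewer", "user:alice"}, // closes a cycle
}

// check reports whether subject is reachable from objectRelation. The visited
// set carries the already traversed object#relation pairs through the request,
// so a tuple that closes a cycle ends the recursion instead of looping forever.
func check(objectRelation, subject string, visited map[string]struct{}) bool {
	if _, seen := visited[objectRelation]; seen {
		return false // already expanded in this request: the cycle is cut here
	}
	visited[objectRelation] = struct{}{}

	for _, s := range tuples[objectRelation] {
		if s == subject {
			return true
		}
		if strings.Contains(s, "#") && check(s, subject, visited) {
			return true
		}
	}
	return false
}

func main() {
	// Without the visited set this would recurse forever through the
	// doc:readme#viewer <-> group:devs#member cycle.
	fmt.Println(check("doc:readme#viewer", "user:alice", map[string]struct{}{})) // true
	fmt.Println(check("doc:readme#viewer", "user:bob", map[string]struct{}{}))   // false
}
```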
I decided that the expand API has a required depth parameter that defines the maximum depth of the constructed tree. I think all clients can determine the depth they require. In fact, you can interpret the trees like lists as well, so you can apply the same concepts as with list pagination. The only difference is that there is more than one "next page", i.e. one for every subject set leaf.
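For illustration, a hedged sketch of what a depth-limited expand could look like, again against a hypothetical in-memory tuple store (the `Node` type and the `Truncated` flag are assumptions, not the actual API): subject set leaves that are cut off by the depth limit act as the "next pages" a client can expand with follow-up requests.

```go
package main

import (
	"fmt"
	"strings"
)

// Node is a hypothetical expand-tree node; a leaf that still holds a subject
// set when maxDepth is reached is marked Truncated and can be expanded in a
// follow-up request, analogous to fetching the next page of a list.
type Node struct {
	Subject   string
	Children  []*Node
	Truncated bool
}

// tuples is hypothetical in-memory relation-tuple data.
var tuples = map[string][]string{
	"doc:readme#viewer":   {"group:devs#member", "user:alice"},
	"group:devs#member":   {"user:bob", "group:admins#member"},
	"group:admins#member": {"user:carol"},
}

// expand builds the tree for objectRelation down to maxDepth levels.
func expand(objectRelation string, maxDepth int) *Node {
	n := &Node{Subject: objectRelation}
	if maxDepth <= 0 {
		n.Truncated = true // client expands this node in a follow-up call
		return n
	}
	for _, s := range tuples[objectRelation] {
		if strings.Contains(s, "#") {
			n.Children = append(n.Children, expand(s, maxDepth-1))
		} else {
			n.Children = append(n.Children, &Node{Subject: s})
		}
	}
	return n
}

func main() {
	tree := expand("doc:readme#viewer", 2)
	// group:admins#member sits at depth 2 and comes back truncated; it is
	// one of the "next pages" the client can request separately.
	fmt.Printf("%+v\n", tree.Children[0].Children[1])
}
```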