
BUG: policy reload can trigger transient userspace failures with ENOMEM errors #38

Closed
stephensmalley opened this issue Feb 13, 2018 · 11 comments

Comments

@stephensmalley (Member)

As reported by Li Kun on the selinux list, a policy reload can cause a transient failure when allocating a new context/SID, thereby breaking userspace; e.g. docker calls setexeccon(3), and if this interleaves with a policy reload, the call fails with ENOMEM. This is due to sidtab_context_to_sid() returning ENOMEM when the sidtab has been shut down. During policy reload, we shut down the sidtab to prevent allocation of any new SIDs/contexts before we copy and remap all of the existing sidtab entries for the new policy. A short-term fix would be to return EINTR in this case instead, which is already handled by libselinux and will cause the operation to be retried, allowing it to proceed once the policy switch has completed. A longer-term fix is to rework the way policy reload occurs to avoid any need to shut down the sidtab.
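For illustration, here is a minimal sketch of the retry pattern the short-term fix would rely on. The wrapper name is hypothetical; the loop just mirrors the retry-on-EINTR behavior that the description above attributes to libselinux:

```c
#include <errno.h>
#include <selinux/selinux.h>

/* Hypothetical caller-side wrapper: retry setexeccon(3) for as long as
 * the kernel reports EINTR, i.e. while a policy reload is in flight.
 * With the short-term fix, a sidtab shutdown would surface as the
 * retryable EINTR rather than a hard ENOMEM. */
static int set_exec_context_retry(const char *context)
{
	int rc;

	do {
		rc = setexeccon(context);
	} while (rc < 0 && errno == EINTR);

	return rc;  /* 0 on success, -1 with errno set on real failure */
}
```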

@pcmoore changed the title from "policy reload can trigger transient userspace failures with ENOMEM errors" to "BUG: policy reload can trigger transient userspace failures with ENOMEM errors" on Feb 13, 2018
@opoplawski

Possible downstream report - https://bugzilla.redhat.com/show_bug.cgi?id=1335986

@stephensmalley (Member, Author)

The bug isn't open (at least not to me), so that isn't very helpful.

@pcmoore (Member)

pcmoore commented Oct 5, 2018

It's not open to me either.

It's also worth noting that downstream bugs don't always mean there is a problem upstream, especially when the downstream kernel has diverged significantly from upstream.

@opoplawski

Ah, sorry. If you have Red Hat Bugzilla logins I could probably add you to the cc list if desired. I can't make it public, though.

@pcmoore (Member)

pcmoore commented Oct 5, 2018

I have a Red Hat bugzilla account for Fedora work, but this is something you should probably bring up with @WOnder93 if you haven't already.

@WOnder93 (Member)

WOnder93 commented Oct 8, 2018

I found a simple reproducer for this bug:

```
while true; do load_policy; echo -n .; sleep 0.1; done &

for (( i = 0; i < 1024; i++ )); do
	runcon -l s0:c$i echo -n x || break
	# or:
	# chcon -l s0:c$i <some_file> || break
done
```

Unfortunately, the EINTR workaround suggested in the description won't work, because the error can also be triggered via setxattr(2), which cannot return EINTR.

@WOnder93 (Member)

For the record, here is my initial take on fixing this problem:
https://lore.kernel.org/selinux/[email protected]/

Today, I posted another patchset with a cleaner (but more complex) solution that also improves performance of some sidtab operations as a bonus:
https://lore.kernel.org/selinux/[email protected]/

pcmoore pushed a commit that referenced this issue Dec 5, 2018
Before this patch, during a policy reload the sidtab would become frozen
and trying to map a new context to a SID would fail with -ENOMEM, since
no new entry could be added to the sidtab.

Such failures are usually propagated into userspace, which has no way of
distinguishing them from actual allocation failures and thus doesn't
handle them gracefully. Such a situation can be triggered e.g. by the
following reproducer:

    while true; do load_policy; echo -n .; sleep 0.1; done &
    for (( i = 0; i < 1024; i++ )); do
        runcon -l s0:c$i echo -n x || break
        # or:
        # chcon -l s0:c$i <some_file> || break
    done

This patch overhauls the sidtab so it doesn't need to be frozen during
policy reload, thus solving the above problem.

The new SID table leverages the fact that SIDs are allocated
sequentially and are never invalidated and stores them in linear buckets
indexed by a tree structure. This brings several advantages:
  1. Fast SID -> context lookup - this lookup can now be done in
     logarithmic time complexity (usually in less than 4 array lookups)
     and can still be done safely without locking.
  2. No need to re-search the whole table on reverse lookup miss - after
     acquiring the spinlock only the newly added entries need to be
     searched, which means that reverse lookups that end up inserting a
     new entry are now about twice as fast.
  3. No need to freeze sidtab during policy reload - it is now possible
     to handle insertion of new entries even during sidtab conversion.

The tree structure of the new sidtab is able to grow automatically up
to about 2^31 entries (at which point it should not have more than about
4 tree levels). The old sidtab had a theoretical capacity of almost 2^32
entries, but half of that is still more than enough, since by that point
the reverse table lookups would become unusably slow anyway...

The number of entries per tree node is selected automatically so that
each node fits into a single page, which should be the easiest size for
kmalloc() to handle.

Note that the cache for reverse lookup is preserved with equivalent
logic. The only difference is that instead of storing pointers to the
hash table nodes it stores just the indices of the cached entries.

The new cache ensures that the indices are loaded/stored atomically, but
it still has the drawback that concurrent cache updates may mess up the
contents of the cache. Such a situation, however, only reduces its
effectiveness, not the correctness of lookups.

Tested by selinux-testsuite and thoroughly tortured by this simple
stress test:
```
function rand_cat() {
	echo $(( $RANDOM % 1024 ))
}

function do_work() {
	while true; do
		echo -n "system_u:system_r:kernel_t:s0:c$(rand_cat),c$(rand_cat)" \
			>/sys/fs/selinux/context 2>/dev/null || true
	done
}

do_work >/dev/null &
do_work >/dev/null &
do_work >/dev/null &

while load_policy; do echo -n .; sleep 0.1; done

kill %1
kill %2
kill %3
```

Link: #38

Reported-by: Orion Poplawski <[email protected]>
Reported-by: Li Kun <[email protected]>
Signed-off-by: Ondrej Mosnacek <[email protected]>
Reviewed-by: Stephen Smalley <[email protected]>
[PM: most of sidtab.c merged by hand due to conflicts]
[PM: checkpatch fixes in mls.c, services.c, sidtab.c]
Signed-off-by: Paul Moore <[email protected]>
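To make the commit message's description concrete, here is a rough sketch of the tree-of-buckets idea in C. This is not the kernel's actual sidtab code; the node layout, constants, and function names are all hypothetical. With page-sized nodes (512 eight-byte child pointers is exactly 4 KiB), three inner levels already address 512^3 leaf buckets, which is why a lookup needs only a handful of array accesses:

```c
#include <stddef.h>

#define LEAF_ENTRIES 128UL   /* entries per leaf bucket (made-up value) */
#define INNER_SLOTS  512UL   /* child pointers per inner node (made-up value) */

struct context { const char *str; };     /* stand-in for a security context */

struct sidtab_node {
	union {
		struct context *entries;                /* leaf: linear bucket */
		struct sidtab_node *child[INNER_SLOTS]; /* inner: page-sized index */
	} u;
};

/* Walk `levels` inner levels from the root down to the leaf holding
 * index `idx`. Because SIDs are allocated sequentially and never
 * invalidated, published subtrees are immutable, so this read path
 * needs no locking and costs one array access per tree level. */
static struct context *sidtab_lookup(struct sidtab_node *node,
				     unsigned long idx, int levels)
{
	while (node && levels-- > 0) {
		unsigned long per_child = LEAF_ENTRIES;
		for (int i = 0; i < levels; i++)
			per_child *= INNER_SLOTS;   /* capacity of one subtree */
		node = node->u.child[idx / per_child];
		idx %= per_child;
	}
	return node ? &node->u.entries[idx] : NULL;
}
```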
@boazsapir

I have a problem with logrotate and SELinux which looks like it is caused by the same bug.
In short, if the first run of logrotate after a reboot coincides with an SELinux command like setsebool or semodule, there is a chance that the file /var/lib/logrotate/logrotate.status will be created as unlabeled_t instead of logrotate_var_lib_t. As a result, logrotate always fails because it cannot access logrotate.status. I also get this error in the log (error 12 is ENOMEM):
Jan 30 10:48:02 ip-10-40-1-233 kernel: SELinux: inode_doinit_with_dentry: context_to_sid(system_u:object_r:logrotate_var_lib_t:s0) returned 12 for dev=xvda2 ino=3335
In which version of Red Hat can I expect the fix?

@WOnder93 (Member)

@boazsapir: Yes, the symptoms definitely check out, but for Red Hat specific questions please use official RH channels (there is a RHEL-7 bugzilla open [1] or you might want to open a customer case). I can't give you any official commitment, although I can say that work is ongoing to backport the fix to the next minor version of RHEL-7, so cross your fingers ;)

[1] https://bugzilla.redhat.com/show_bug.cgi?id=1335986

@boazsapir

@WOnder93 thank you for the update, I will open a case with Red Hat.

@pcmoore (Member)

pcmoore commented Jan 31, 2019

Since I think we've fixed this upstream - @WOnder93? - I'm going to close this issue, feel free to reopen if needed.

@pcmoore pcmoore closed this as completed Jan 31, 2019
UtsavBalar1231 pushed a commit to UtsavBalar1231/kernel_xiaomi_raphael that referenced this issue Jan 9, 2020
UtsavBalar1231 pushed a commit to UtsavBalar1231/kernel_xiaomi_raphael that referenced this issue Jan 9, 2020
rokibhasansagar pushed a commit to rokibhasansagar/aosp_common_kernels that referenced this issue Jan 9, 2020
rokibhasansagar pushed a commit to rokibhasansagar/aosp_common_kernels that referenced this issue Jan 9, 2020
UtsavBalar1231 pushed a commit to UtsavBalar1231/kernel_xiaomi_raphael that referenced this issue Jan 9, 2020
XeonDead pushed a commit to F1xy-kernels/VIOLENT_Kernel_Ten that referenced this issue Jan 13, 2020
vantoman pushed a commit to vantoman/kernel_xiaomi_sm6150 that referenced this issue Jan 16, 2020
vantoman pushed a commit to vantoman/kernel_xiaomi_sm6150 that referenced this issue Jan 17, 2020
najahiiii pushed a commit to chips-project/msm-4.14 that referenced this issue Jan 21, 2020
najahiiii pushed a commit to chips-project/msm-4.14 that referenced this issue Feb 14, 2020
najahiiii pushed a commit to chips-project/msm-4.14 that referenced this issue Feb 21, 2020
Sorayukii pushed a commit to Sorayukii/kernel_sony_tama that referenced this issue Sep 20, 2024
Sorayukii pushed a commit to Sorayukii/kernel_sony_tama that referenced this issue Sep 20, 2024
Sorayukii pushed a commit to Sorayukii/kernel_sony_tama that referenced this issue Sep 20, 2024
Sorayukii pushed a commit to Sorayukii/kernel_sony_tama that referenced this issue Sep 22, 2024
Before this patch, during a policy reload the sidtab would become frozen
and trying to map a new context to SID would be unable to add a new
entry to sidtab and fail with -ENOMEM.

Such failures are usually propagated into userspace, which has no way of
distignuishing them from actual allocation failures and thus doesn't
handle them gracefully. Such situation can be triggered e.g. by the
following reproducer:

    while true; do load_policy; echo -n .; sleep 0.1; done &
    for (( i = 0; i < 1024; i++ )); do
        runcon -l s0:c$i echo -n x || break
        # or:
        # chcon -l s0:c$i <some_file> || break
    done

This patch overhauls the sidtab so it doesn't need to be frozen during
policy reload, thus solving the above problem.

The new SID table leverages the fact that SIDs are allocated
sequentially and are never invalidated and stores them in linear buckets
indexed by a tree structure. This brings several advantages:
  1. Fast SID -> context lookup - this lookup can now be done in
     logarithmic time complexity (usually in less than 4 array lookups)
     and can still be done safely without locking.
  2. No need to re-search the whole table on reverse lookup miss - after
     acquiring the spinlock only the newly added entries need to be
     searched, which means that reverse lookups that end up inserting a
     new entry are now about twice as fast.
  3. No need to freeze sidtab during policy reload - it is now possible
     to handle insertion of new entries even during sidtab conversion.

The tree structure of the new sidtab is able to grow automatically to up
to about 2^31 entries (at which point it should not have more than about
4 tree levels). The old sidtab had a theoretical capacity of almost 2^32
entries, but half of that is still more than enough since by that point
the reverse table lookups would become unusably slow anyway...

The number of entries per tree node is selected automatically so that
each node fits into a single page, which should be the easiest size for
kmalloc() to handle.

Note that the cache for reverse lookup is preserved with equivalent
logic. The only difference is that instead of storing pointers to the
hash table nodes it stores just the indices of the cached entries.

The new cache ensures that the indices are loaded/stored atomically, but
it still has the drawback that concurrent cache updates may mess up the
contents of the cache. Such situation however only reduces its
effectivity, not the correctness of lookups.

Tested by selinux-testsuite and thoroughly tortured by this simple
stress test:
```
function rand_cat() {
	echo $(( $RANDOM % 1024 ))
}

function do_work() {
	while true; do
		echo -n "system_u:system_r:kernel_t:s0:c$(rand_cat),c$(rand_cat)" \
			>/sys/fs/selinux/context 2>/dev/null || true
	done
}

do_work >/dev/null &
do_work >/dev/null &
do_work >/dev/null &

while load_policy; do echo -n .; sleep 0.1; done

kill %1
kill %2
kill %3
```

Link: SELinuxProject/selinux-kernel#38

Reported-by: Orion Poplawski <[email protected]>
Reported-by: Li Kun <[email protected]>
Signed-off-by: Ondrej Mosnacek <[email protected]>
Reviewed-by: Stephen Smalley <[email protected]>
[PM: most of sidtab.c merged by hand due to conflicts]
[PM: checkpatch fixes in mls.c, services.c, sidtab.c]
Signed-off-by: Paul Moore <[email protected]>

(cherry picked from commit ee1a84fdfeedfd7362e9a8a8f15fedc3482ade2d)
Resolved conflict with b244131 which modified licences statements
and another commit which changed some selinux printk statements.
Change-Id: I793aa6e07e1672a2068e1995f2f45ae85d9d730c
Bug: 140252993
Signed-off-by: Jeff Vander Stoep <[email protected]>
Signed-off-by: Rapherion Rollerscaperers <[email protected]>
Signed-off-by: Chenyang Zhong <[email protected]>
Sorayukii pushed a commit to Sorayukii/kernel_sony_tama that referenced this issue Sep 22, 2024
Before this patch, during a policy reload the sidtab would become frozen
and trying to map a new context to SID would be unable to add a new
entry to sidtab and fail with -ENOMEM.

Such failures are usually propagated into userspace, which has no way of
distignuishing them from actual allocation failures and thus doesn't
handle them gracefully. Such situation can be triggered e.g. by the
following reproducer:

    while true; do load_policy; echo -n .; sleep 0.1; done &
    for (( i = 0; i < 1024; i++ )); do
        runcon -l s0:c$i echo -n x || break
        # or:
        # chcon -l s0:c$i <some_file> || break
    done

This patch overhauls the sidtab so it doesn't need to be frozen during
policy reload, thus solving the above problem.

The new SID table leverages the fact that SIDs are allocated
sequentially and are never invalidated and stores them in linear buckets
indexed by a tree structure. This brings several advantages:
  1. Fast SID -> context lookup - this lookup can now be done in
     logarithmic time complexity (usually in less than 4 array lookups)
     and can still be done safely without locking.
  2. No need to re-search the whole table on reverse lookup miss - after
     acquiring the spinlock only the newly added entries need to be
     searched, which means that reverse lookups that end up inserting a
     new entry are now about twice as fast.
  3. No need to freeze sidtab during policy reload - it is now possible
     to handle insertion of new entries even during sidtab conversion.

The tree structure of the new sidtab is able to grow automatically to up
to about 2^31 entries (at which point it should not have more than about
4 tree levels). The old sidtab had a theoretical capacity of almost 2^32
entries, but half of that is still more than enough since by that point
the reverse table lookups would become unusably slow anyway...

The number of entries per tree node is selected automatically so that
each node fits into a single page, which should be the easiest size for
kmalloc() to handle.

Note that the cache for reverse lookup is preserved with equivalent
logic. The only difference is that instead of storing pointers to the
hash table nodes it stores just the indices of the cached entries.

The new cache ensures that the indices are loaded/stored atomically, but
it still has the drawback that concurrent cache updates may mess up the
contents of the cache. Such situation however only reduces its
effectivity, not the correctness of lookups.

Tested by selinux-testsuite and thoroughly tortured by this simple
stress test:
```
function rand_cat() {
	echo $(( $RANDOM % 1024 ))
}

function do_work() {
	while true; do
		echo -n "system_u:system_r:kernel_t:s0:c$(rand_cat),c$(rand_cat)" \
			>/sys/fs/selinux/context 2>/dev/null || true
	done
}

do_work >/dev/null &
do_work >/dev/null &
do_work >/dev/null &

while load_policy; do echo -n .; sleep 0.1; done

kill %1
kill %2
kill %3
```

Link: SELinuxProject/selinux-kernel#38

Reported-by: Orion Poplawski <[email protected]>
Reported-by: Li Kun <[email protected]>
Signed-off-by: Ondrej Mosnacek <[email protected]>
Reviewed-by: Stephen Smalley <[email protected]>
[PM: most of sidtab.c merged by hand due to conflicts]
[PM: checkpatch fixes in mls.c, services.c, sidtab.c]
Signed-off-by: Paul Moore <[email protected]>

(cherry picked from commit ee1a84fdfeedfd7362e9a8a8f15fedc3482ade2d)
Resolved conflict with b244131 which modified licences statements
and another commit which changed some selinux printk statements.
Change-Id: I793aa6e07e1672a2068e1995f2f45ae85d9d730c
Bug: 140252993
Signed-off-by: Jeff Vander Stoep <[email protected]>
Signed-off-by: Rapherion Rollerscaperers <[email protected]>
Signed-off-by: Chenyang Zhong <[email protected]>
Sorayukii pushed a commit to Sorayukii/kernel_sony_tama that referenced this issue Sep 22, 2024
Before this patch, during a policy reload the sidtab would become frozen
and trying to map a new context to SID would be unable to add a new
entry to sidtab and fail with -ENOMEM.

Such failures are usually propagated into userspace, which has no way of
distignuishing them from actual allocation failures and thus doesn't
handle them gracefully. Such situation can be triggered e.g. by the
following reproducer:

    while true; do load_policy; echo -n .; sleep 0.1; done &
    for (( i = 0; i < 1024; i++ )); do
        runcon -l s0:c$i echo -n x || break
        # or:
        # chcon -l s0:c$i <some_file> || break
    done

This patch overhauls the sidtab so it doesn't need to be frozen during
policy reload, thus solving the above problem.

The new SID table leverages the fact that SIDs are allocated
sequentially and are never invalidated and stores them in linear buckets
indexed by a tree structure. This brings several advantages:
  1. Fast SID -> context lookup - this lookup can now be done in
     logarithmic time complexity (usually in less than 4 array lookups)
     and can still be done safely without locking.
  2. No need to re-search the whole table on reverse lookup miss - after
     acquiring the spinlock only the newly added entries need to be
     searched, which means that reverse lookups that end up inserting a
     new entry are now about twice as fast.
  3. No need to freeze sidtab during policy reload - it is now possible
     to handle insertion of new entries even during sidtab conversion.

The tree structure of the new sidtab is able to grow automatically to up
to about 2^31 entries (at which point it should not have more than about
4 tree levels). The old sidtab had a theoretical capacity of almost 2^32
entries, but half of that is still more than enough since by that point
the reverse table lookups would become unusably slow anyway...

The number of entries per tree node is selected automatically so that
each node fits into a single page, which should be the easiest size for
kmalloc() to handle.

Note that the cache for reverse lookup is preserved with equivalent
logic. The only difference is that instead of storing pointers to the
hash table nodes it stores just the indices of the cached entries.

The new cache ensures that the indices are loaded/stored atomically, but
it still has the drawback that concurrent cache updates may mess up the
contents of the cache. Such situation however only reduces its
effectivity, not the correctness of lookups.

Tested by selinux-testsuite and thoroughly tortured by this simple
stress test:
```
function rand_cat() {
	echo $(( $RANDOM % 1024 ))
}

function do_work() {
	while true; do
		echo -n "system_u:system_r:kernel_t:s0:c$(rand_cat),c$(rand_cat)" \
			>/sys/fs/selinux/context 2>/dev/null || true
	done
}

do_work >/dev/null &
do_work >/dev/null &
do_work >/dev/null &

while load_policy; do echo -n .; sleep 0.1; done

kill %1
kill %2
kill %3
```

Link: SELinuxProject/selinux-kernel#38

Reported-by: Orion Poplawski <[email protected]>
Reported-by: Li Kun <[email protected]>
Signed-off-by: Ondrej Mosnacek <[email protected]>
Reviewed-by: Stephen Smalley <[email protected]>
[PM: most of sidtab.c merged by hand due to conflicts]
[PM: checkpatch fixes in mls.c, services.c, sidtab.c]
Signed-off-by: Paul Moore <[email protected]>

(cherry picked from commit ee1a84fdfeedfd7362e9a8a8f15fedc3482ade2d)
Resolved conflict with b244131 which modified licences statements
and another commit which changed some selinux printk statements.
Change-Id: I793aa6e07e1672a2068e1995f2f45ae85d9d730c
Bug: 140252993
Signed-off-by: Jeff Vander Stoep <[email protected]>
Signed-off-by: Rapherion Rollerscaperers <[email protected]>
Signed-off-by: Chenyang Zhong <[email protected]>
Sorayukii pushed a commit to Sorayukii/kernel_sony_tama that referenced this issue Sep 22, 2024
Before this patch, during a policy reload the sidtab would become frozen
and trying to map a new context to SID would be unable to add a new
entry to sidtab and fail with -ENOMEM.

Such failures are usually propagated into userspace, which has no way of
distignuishing them from actual allocation failures and thus doesn't
handle them gracefully. Such situation can be triggered e.g. by the
following reproducer:

    while true; do load_policy; echo -n .; sleep 0.1; done &
    for (( i = 0; i < 1024; i++ )); do
        runcon -l s0:c$i echo -n x || break
        # or:
        # chcon -l s0:c$i <some_file> || break
    done

This patch overhauls the sidtab so it doesn't need to be frozen during
policy reload, thus solving the above problem.

The new SID table leverages the fact that SIDs are allocated
sequentially and are never invalidated and stores them in linear buckets
indexed by a tree structure. This brings several advantages:
  1. Fast SID -> context lookup - this lookup can now be done in
     logarithmic time complexity (usually in less than 4 array lookups)
     and can still be done safely without locking.
  2. No need to re-search the whole table on reverse lookup miss - after
     acquiring the spinlock only the newly added entries need to be
     searched, which means that reverse lookups that end up inserting a
     new entry are now about twice as fast.
  3. No need to freeze sidtab during policy reload - it is now possible
     to handle insertion of new entries even during sidtab conversion.

The tree structure of the new sidtab is able to grow automatically to up
to about 2^31 entries (at which point it should not have more than about
4 tree levels). The old sidtab had a theoretical capacity of almost 2^32
entries, but half of that is still more than enough since by that point
the reverse table lookups would become unusably slow anyway...

The number of entries per tree node is selected automatically so that
each node fits into a single page, which should be the easiest size for
kmalloc() to handle.

Note that the cache for reverse lookup is preserved with equivalent
logic. The only difference is that instead of storing pointers to the
hash table nodes it stores just the indices of the cached entries.

The new cache ensures that the indices are loaded/stored atomically, but
it still has the drawback that concurrent cache updates may mess up the
contents of the cache. Such situation however only reduces its
effectivity, not the correctness of lookups.

Tested by selinux-testsuite and thoroughly tortured by this simple
stress test:
```
function rand_cat() {
	echo $(( $RANDOM % 1024 ))
}

function do_work() {
	while true; do
		echo -n "system_u:system_r:kernel_t:s0:c$(rand_cat),c$(rand_cat)" \
			>/sys/fs/selinux/context 2>/dev/null || true
	done
}

do_work >/dev/null &
do_work >/dev/null &
do_work >/dev/null &

while load_policy; do echo -n .; sleep 0.1; done

kill %1
kill %2
kill %3
```

Link: SELinuxProject/selinux-kernel#38

Reported-by: Orion Poplawski <[email protected]>
Reported-by: Li Kun <[email protected]>
Signed-off-by: Ondrej Mosnacek <[email protected]>
Reviewed-by: Stephen Smalley <[email protected]>
[PM: most of sidtab.c merged by hand due to conflicts]
[PM: checkpatch fixes in mls.c, services.c, sidtab.c]
Signed-off-by: Paul Moore <[email protected]>

(cherry picked from commit ee1a84fdfeedfd7362e9a8a8f15fedc3482ade2d)
Resolved conflict with b244131 which modified licences statements
and another commit which changed some selinux printk statements.
Change-Id: I793aa6e07e1672a2068e1995f2f45ae85d9d730c
Bug: 140252993
Signed-off-by: Jeff Vander Stoep <[email protected]>
Signed-off-by: Rapherion Rollerscaperers <[email protected]>
Signed-off-by: Chenyang Zhong <[email protected]>
Sorayukii pushed a commit to Sorayukii/kernel_sony_tama that referenced this issue Sep 30, 2024
Before this patch, during a policy reload the sidtab would become frozen
and trying to map a new context to SID would be unable to add a new
entry to sidtab and fail with -ENOMEM.

Such failures are usually propagated into userspace, which has no way of
distignuishing them from actual allocation failures and thus doesn't
handle them gracefully. Such situation can be triggered e.g. by the
following reproducer:

    while true; do load_policy; echo -n .; sleep 0.1; done &
    for (( i = 0; i < 1024; i++ )); do
        runcon -l s0:c$i echo -n x || break
        # or:
        # chcon -l s0:c$i <some_file> || break
    done

This patch overhauls the sidtab so it doesn't need to be frozen during
policy reload, thus solving the above problem.

The new SID table leverages the fact that SIDs are allocated
sequentially and are never invalidated and stores them in linear buckets
indexed by a tree structure. This brings several advantages:
  1. Fast SID -> context lookup - this lookup can now be done in
     logarithmic time complexity (usually in less than 4 array lookups)
     and can still be done safely without locking.
  2. No need to re-search the whole table on reverse lookup miss - after
     acquiring the spinlock only the newly added entries need to be
     searched, which means that reverse lookups that end up inserting a
     new entry are now about twice as fast.
  3. No need to freeze sidtab during policy reload - it is now possible
     to handle insertion of new entries even during sidtab conversion.

The tree structure of the new sidtab is able to grow automatically to up
to about 2^31 entries (at which point it should not have more than about
4 tree levels). The old sidtab had a theoretical capacity of almost 2^32
entries, but half of that is still more than enough since by that point
the reverse table lookups would become unusably slow anyway...

The number of entries per tree node is selected automatically so that
each node fits into a single page, which should be the easiest size for
kmalloc() to handle.

Note that the cache for reverse lookup is preserved with equivalent
logic. The only difference is that instead of storing pointers to the
hash table nodes it stores just the indices of the cached entries.

The new cache ensures that the indices are loaded/stored atomically, but
it still has the drawback that concurrent cache updates may mess up the
contents of the cache. Such situation however only reduces its
effectivity, not the correctness of lookups.

Tested by selinux-testsuite and thoroughly tortured by this simple
stress test:
```
function rand_cat() {
	echo $(( $RANDOM % 1024 ))
}

function do_work() {
	while true; do
		echo -n "system_u:system_r:kernel_t:s0:c$(rand_cat),c$(rand_cat)" \
			>/sys/fs/selinux/context 2>/dev/null || true
	done
}

do_work >/dev/null &
do_work >/dev/null &
do_work >/dev/null &

while load_policy; do echo -n .; sleep 0.1; done

kill %1
kill %2
kill %3
```

Link: SELinuxProject/selinux-kernel#38

Reported-by: Orion Poplawski <[email protected]>
Reported-by: Li Kun <[email protected]>
Signed-off-by: Ondrej Mosnacek <[email protected]>
Reviewed-by: Stephen Smalley <[email protected]>
[PM: most of sidtab.c merged by hand due to conflicts]
[PM: checkpatch fixes in mls.c, services.c, sidtab.c]
Signed-off-by: Paul Moore <[email protected]>

(cherry picked from commit ee1a84fdfeedfd7362e9a8a8f15fedc3482ade2d)
Resolved conflict with b244131 which modified licences statements
and another commit which changed some selinux printk statements.
Change-Id: I793aa6e07e1672a2068e1995f2f45ae85d9d730c
Bug: 140252993
Signed-off-by: Jeff Vander Stoep <[email protected]>
Signed-off-by: Rapherion Rollerscaperers <[email protected]>
Signed-off-by: Chenyang Zhong <[email protected]>
Sorayukii pushed a commit to Sorayukii/kernel_sony_tama that referenced this issue Oct 1, 2024
Before this patch, during a policy reload the sidtab would become frozen
and trying to map a new context to SID would be unable to add a new
entry to sidtab and fail with -ENOMEM.

Such failures are usually propagated into userspace, which has no way of
distignuishing them from actual allocation failures and thus doesn't
handle them gracefully. Such situation can be triggered e.g. by the
following reproducer:

    while true; do load_policy; echo -n .; sleep 0.1; done &
    for (( i = 0; i < 1024; i++ )); do
        runcon -l s0:c$i echo -n x || break
        # or:
        # chcon -l s0:c$i <some_file> || break
    done

This patch overhauls the sidtab so it doesn't need to be frozen during
policy reload, thus solving the above problem.

The new SID table leverages the fact that SIDs are allocated
sequentially and are never invalidated and stores them in linear buckets
indexed by a tree structure. This brings several advantages:
  1. Fast SID -> context lookup - this lookup can now be done in
     logarithmic time complexity (usually in less than 4 array lookups)
     and can still be done safely without locking.
  2. No need to re-search the whole table on reverse lookup miss - after
     acquiring the spinlock only the newly added entries need to be
     searched, which means that reverse lookups that end up inserting a
     new entry are now about twice as fast.
  3. No need to freeze sidtab during policy reload - it is now possible
     to handle insertion of new entries even during sidtab conversion.

The tree structure of the new sidtab is able to grow automatically to up
to about 2^31 entries (at which point it should not have more than about
4 tree levels). The old sidtab had a theoretical capacity of almost 2^32
entries, but half of that is still more than enough since by that point
the reverse table lookups would become unusably slow anyway...

The number of entries per tree node is selected automatically so that
each node fits into a single page, which should be the easiest size for
kmalloc() to handle.

Note that the cache for reverse lookup is preserved with equivalent
logic. The only difference is that instead of storing pointers to the
hash table nodes it stores just the indices of the cached entries.

The new cache ensures that the indices are loaded/stored atomically, but
it still has the drawback that concurrent cache updates may mess up the
contents of the cache. Such situation however only reduces its
effectivity, not the correctness of lookups.

Tested by selinux-testsuite and thoroughly tortured by this simple
stress test:
```
function rand_cat() {
	echo $(( $RANDOM % 1024 ))
}

function do_work() {
	while true; do
		echo -n "system_u:system_r:kernel_t:s0:c$(rand_cat),c$(rand_cat)" \
			>/sys/fs/selinux/context 2>/dev/null || true
	done
}

do_work >/dev/null &
do_work >/dev/null &
do_work >/dev/null &

while load_policy; do echo -n .; sleep 0.1; done

kill %1
kill %2
kill %3
```

Link: SELinuxProject/selinux-kernel#38

Reported-by: Orion Poplawski <[email protected]>
Reported-by: Li Kun <[email protected]>
Signed-off-by: Ondrej Mosnacek <[email protected]>
Reviewed-by: Stephen Smalley <[email protected]>
[PM: most of sidtab.c merged by hand due to conflicts]
[PM: checkpatch fixes in mls.c, services.c, sidtab.c]
Signed-off-by: Paul Moore <[email protected]>

(cherry picked from commit ee1a84fdfeedfd7362e9a8a8f15fedc3482ade2d)
Resolved conflict with b244131 which modified licences statements
and another commit which changed some selinux printk statements.
Change-Id: I793aa6e07e1672a2068e1995f2f45ae85d9d730c
Bug: 140252993
Signed-off-by: Jeff Vander Stoep <[email protected]>
Signed-off-by: Rapherion Rollerscaperers <[email protected]>
Signed-off-by: Chenyang Zhong <[email protected]>
Sorayukii pushed a commit to Sorayukii/kernel_sony_tama that referenced this issue Oct 1, 2024
Before this patch, during a policy reload the sidtab would become frozen
and trying to map a new context to SID would be unable to add a new
entry to sidtab and fail with -ENOMEM.

Such failures are usually propagated into userspace, which has no way of
distignuishing them from actual allocation failures and thus doesn't
handle them gracefully. Such situation can be triggered e.g. by the
following reproducer:

    while true; do load_policy; echo -n .; sleep 0.1; done &
    for (( i = 0; i < 1024; i++ )); do
        runcon -l s0:c$i echo -n x || break
        # or:
        # chcon -l s0:c$i <some_file> || break
    done

This patch overhauls the sidtab so it doesn't need to be frozen during
policy reload, thus solving the above problem.

The new SID table leverages the fact that SIDs are allocated
sequentially and are never invalidated and stores them in linear buckets
indexed by a tree structure. This brings several advantages:
  1. Fast SID -> context lookup - this lookup can now be done in
     logarithmic time complexity (usually in less than 4 array lookups)
     and can still be done safely without locking.
  2. No need to re-search the whole table on reverse lookup miss - after
     acquiring the spinlock only the newly added entries need to be
     searched, which means that reverse lookups that end up inserting a
     new entry are now about twice as fast.
  3. No need to freeze sidtab during policy reload - it is now possible
     to handle insertion of new entries even during sidtab conversion.

The tree structure of the new sidtab is able to grow automatically to up
to about 2^31 entries (at which point it should not have more than about
4 tree levels). The old sidtab had a theoretical capacity of almost 2^32
entries, but half of that is still more than enough since by that point
the reverse table lookups would become unusably slow anyway...

The number of entries per tree node is selected automatically so that
each node fits into a single page, which should be the easiest size for
kmalloc() to handle.

Note that the cache for reverse lookup is preserved with equivalent
logic. The only difference is that instead of storing pointers to the
hash table nodes it stores just the indices of the cached entries.

The new cache ensures that the indices are loaded/stored atomically, but
it still has the drawback that concurrent cache updates may mess up the
contents of the cache. Such situation however only reduces its
effectivity, not the correctness of lookups.

Tested by selinux-testsuite and thoroughly tortured by this simple
stress test:
```
function rand_cat() {
	echo $(( $RANDOM % 1024 ))
}

function do_work() {
	while true; do
		echo -n "system_u:system_r:kernel_t:s0:c$(rand_cat),c$(rand_cat)" \
			>/sys/fs/selinux/context 2>/dev/null || true
	done
}

do_work >/dev/null &
do_work >/dev/null &
do_work >/dev/null &

while load_policy; do echo -n .; sleep 0.1; done

kill %1
kill %2
kill %3
```

Link: SELinuxProject/selinux-kernel#38

Reported-by: Orion Poplawski <[email protected]>
Reported-by: Li Kun <[email protected]>
Signed-off-by: Ondrej Mosnacek <[email protected]>
Reviewed-by: Stephen Smalley <[email protected]>
[PM: most of sidtab.c merged by hand due to conflicts]
[PM: checkpatch fixes in mls.c, services.c, sidtab.c]
Signed-off-by: Paul Moore <[email protected]>

(cherry picked from commit ee1a84fdfeedfd7362e9a8a8f15fedc3482ade2d)
Resolved conflict with b244131 which modified licences statements
and another commit which changed some selinux printk statements.
Change-Id: I793aa6e07e1672a2068e1995f2f45ae85d9d730c
Bug: 140252993
Signed-off-by: Jeff Vander Stoep <[email protected]>
Signed-off-by: Rapherion Rollerscaperers <[email protected]>
Signed-off-by: Chenyang Zhong <[email protected]>
Sorayukii pushed a commit to Sorayukii/kernel_sony_tama that referenced this issue Oct 2, 2024
Before this patch, during a policy reload the sidtab would become frozen
and trying to map a new context to SID would be unable to add a new
entry to sidtab and fail with -ENOMEM.

Such failures are usually propagated into userspace, which has no way of
distignuishing them from actual allocation failures and thus doesn't
handle them gracefully. Such situation can be triggered e.g. by the
following reproducer:

    while true; do load_policy; echo -n .; sleep 0.1; done &
    for (( i = 0; i < 1024; i++ )); do
        runcon -l s0:c$i echo -n x || break
        # or:
        # chcon -l s0:c$i <some_file> || break
    done

This patch overhauls the sidtab so it doesn't need to be frozen during
policy reload, thus solving the above problem.

The new SID table leverages the fact that SIDs are allocated
sequentially and are never invalidated and stores them in linear buckets
indexed by a tree structure. This brings several advantages:
  1. Fast SID -> context lookup - this lookup can now be done in
     logarithmic time complexity (usually in less than 4 array lookups)
     and can still be done safely without locking.
  2. No need to re-search the whole table on reverse lookup miss - after
     acquiring the spinlock only the newly added entries need to be
     searched, which means that reverse lookups that end up inserting a
     new entry are now about twice as fast.
  3. No need to freeze sidtab during policy reload - it is now possible
     to handle insertion of new entries even during sidtab conversion.

The tree structure of the new sidtab is able to grow automatically to up
to about 2^31 entries (at which point it should not have more than about
4 tree levels). The old sidtab had a theoretical capacity of almost 2^32
entries, but half of that is still more than enough since by that point
the reverse table lookups would become unusably slow anyway...

The number of entries per tree node is selected automatically so that
each node fits into a single page, which should be the easiest size for
kmalloc() to handle.

Note that the cache for reverse lookup is preserved with equivalent
logic. The only difference is that instead of storing pointers to the
hash table nodes it stores just the indices of the cached entries.

The new cache ensures that the indices are loaded/stored atomically, but
it still has the drawback that concurrent cache updates may mess up the
contents of the cache. Such situation however only reduces its
effectivity, not the correctness of lookups.

Tested by selinux-testsuite and thoroughly tortured by this simple
stress test:
```
function rand_cat() {
	echo $(( $RANDOM % 1024 ))
}

function do_work() {
	while true; do
		echo -n "system_u:system_r:kernel_t:s0:c$(rand_cat),c$(rand_cat)" \
			>/sys/fs/selinux/context 2>/dev/null || true
	done
}

do_work >/dev/null &
do_work >/dev/null &
do_work >/dev/null &

while load_policy; do echo -n .; sleep 0.1; done

kill %1
kill %2
kill %3
```

Link: SELinuxProject/selinux-kernel#38

Reported-by: Orion Poplawski <[email protected]>
Reported-by: Li Kun <[email protected]>
Signed-off-by: Ondrej Mosnacek <[email protected]>
Reviewed-by: Stephen Smalley <[email protected]>
[PM: most of sidtab.c merged by hand due to conflicts]
[PM: checkpatch fixes in mls.c, services.c, sidtab.c]
Signed-off-by: Paul Moore <[email protected]>

(cherry picked from commit ee1a84fdfeedfd7362e9a8a8f15fedc3482ade2d)
Resolved conflict with b244131 which modified licences statements
and another commit which changed some selinux printk statements.
Change-Id: I793aa6e07e1672a2068e1995f2f45ae85d9d730c
Bug: 140252993
Signed-off-by: Jeff Vander Stoep <[email protected]>
Signed-off-by: Rapherion Rollerscaperers <[email protected]>
Signed-off-by: Chenyang Zhong <[email protected]>
Sorayukii pushed a commit to Sorayukii/kernel_sony_tama that referenced this issue Oct 2, 2024
Before this patch, during a policy reload the sidtab would become frozen
and trying to map a new context to SID would be unable to add a new
entry to sidtab and fail with -ENOMEM.

Such failures are usually propagated into userspace, which has no way of
distignuishing them from actual allocation failures and thus doesn't
handle them gracefully. Such situation can be triggered e.g. by the
following reproducer:

    while true; do load_policy; echo -n .; sleep 0.1; done &
    for (( i = 0; i < 1024; i++ )); do
        runcon -l s0:c$i echo -n x || break
        # or:
        # chcon -l s0:c$i <some_file> || break
    done

This patch overhauls the sidtab so it doesn't need to be frozen during
policy reload, thus solving the above problem.

The new SID table leverages the fact that SIDs are allocated
sequentially and are never invalidated and stores them in linear buckets
indexed by a tree structure. This brings several advantages:
  1. Fast SID -> context lookup - this lookup can now be done in
     logarithmic time complexity (usually in less than 4 array lookups)
     and can still be done safely without locking.
  2. No need to re-search the whole table on reverse lookup miss - after
     acquiring the spinlock only the newly added entries need to be
     searched, which means that reverse lookups that end up inserting a
     new entry are now about twice as fast.
  3. No need to freeze sidtab during policy reload - it is now possible
     to handle insertion of new entries even during sidtab conversion.

The tree structure of the new sidtab is able to grow automatically to up
to about 2^31 entries (at which point it should not have more than about
4 tree levels). The old sidtab had a theoretical capacity of almost 2^32
entries, but half of that is still more than enough since by that point
the reverse table lookups would become unusably slow anyway...

The number of entries per tree node is selected automatically so that
each node fits into a single page, which should be the easiest size for
kmalloc() to handle.

Note that the cache for reverse lookup is preserved with equivalent
logic. The only difference is that instead of storing pointers to the
hash table nodes it stores just the indices of the cached entries.

The new cache ensures that the indices are loaded/stored atomically, but
it still has the drawback that concurrent cache updates may mess up the
contents of the cache. Such situation however only reduces its
effectivity, not the correctness of lookups.

Tested by selinux-testsuite and thoroughly tortured by this simple
stress test:
```
function rand_cat() {
	echo $(( $RANDOM % 1024 ))
}

function do_work() {
	while true; do
		echo -n "system_u:system_r:kernel_t:s0:c$(rand_cat),c$(rand_cat)" \
			>/sys/fs/selinux/context 2>/dev/null || true
	done
}

do_work >/dev/null &
do_work >/dev/null &
do_work >/dev/null &

while load_policy; do echo -n .; sleep 0.1; done

kill %1
kill %2
kill %3
```

Link: SELinuxProject/selinux-kernel#38

Reported-by: Orion Poplawski <[email protected]>
Reported-by: Li Kun <[email protected]>
Signed-off-by: Ondrej Mosnacek <[email protected]>
Reviewed-by: Stephen Smalley <[email protected]>
[PM: most of sidtab.c merged by hand due to conflicts]
[PM: checkpatch fixes in mls.c, services.c, sidtab.c]
Signed-off-by: Paul Moore <[email protected]>

(cherry picked from commit ee1a84fdfeedfd7362e9a8a8f15fedc3482ade2d)
Resolved conflict with b244131 which modified licences statements
and another commit which changed some selinux printk statements.
Change-Id: I793aa6e07e1672a2068e1995f2f45ae85d9d730c
Bug: 140252993
Signed-off-by: Jeff Vander Stoep <[email protected]>
Signed-off-by: Rapherion Rollerscaperers <[email protected]>
Signed-off-by: Chenyang Zhong <[email protected]>
Sorayukii pushed a commit to Sorayukii/kernel_sony_tama that referenced this issue Oct 3, 2024
Before this patch, during a policy reload the sidtab would become frozen
and trying to map a new context to SID would be unable to add a new
entry to sidtab and fail with -ENOMEM.

Such failures are usually propagated into userspace, which has no way of
distignuishing them from actual allocation failures and thus doesn't
handle them gracefully. Such situation can be triggered e.g. by the
following reproducer:

    while true; do load_policy; echo -n .; sleep 0.1; done &
    for (( i = 0; i < 1024; i++ )); do
        runcon -l s0:c$i echo -n x || break
        # or:
        # chcon -l s0:c$i <some_file> || break
    done

This patch overhauls the sidtab so it doesn't need to be frozen during
policy reload, thus solving the above problem.

The new SID table leverages the fact that SIDs are allocated
sequentially and are never invalidated and stores them in linear buckets
indexed by a tree structure. This brings several advantages:
  1. Fast SID -> context lookup - this lookup can now be done in
     logarithmic time complexity (usually in less than 4 array lookups)
     and can still be done safely without locking.
  2. No need to re-search the whole table on reverse lookup miss - after
     acquiring the spinlock only the newly added entries need to be
     searched, which means that reverse lookups that end up inserting a
     new entry are now about twice as fast.
  3. No need to freeze sidtab during policy reload - it is now possible
     to handle insertion of new entries even during sidtab conversion.

The tree structure of the new sidtab is able to grow automatically to up
to about 2^31 entries (at which point it should not have more than about
4 tree levels). The old sidtab had a theoretical capacity of almost 2^32
entries, but half of that is still more than enough since by that point
the reverse table lookups would become unusably slow anyway...

The number of entries per tree node is selected automatically so that
each node fits into a single page, which should be the easiest size for
kmalloc() to handle.

Note that the cache for reverse lookup is preserved with equivalent
logic. The only difference is that instead of storing pointers to the
hash table nodes it stores just the indices of the cached entries.

The new cache ensures that the indices are loaded/stored atomically, but
it still has the drawback that concurrent cache updates may mess up the
contents of the cache. Such situation however only reduces its
effectivity, not the correctness of lookups.

Tested by selinux-testsuite and thoroughly tortured by this simple
stress test:
```
function rand_cat() {
	echo $(( $RANDOM % 1024 ))
}

function do_work() {
	while true; do
		echo -n "system_u:system_r:kernel_t:s0:c$(rand_cat),c$(rand_cat)" \
			>/sys/fs/selinux/context 2>/dev/null || true
	done
}

do_work >/dev/null &
do_work >/dev/null &
do_work >/dev/null &

while load_policy; do echo -n .; sleep 0.1; done

kill %1
kill %2
kill %3
```

Link: SELinuxProject/selinux-kernel#38

Reported-by: Orion Poplawski <[email protected]>
Reported-by: Li Kun <[email protected]>
Signed-off-by: Ondrej Mosnacek <[email protected]>
Reviewed-by: Stephen Smalley <[email protected]>
[PM: most of sidtab.c merged by hand due to conflicts]
[PM: checkpatch fixes in mls.c, services.c, sidtab.c]
Signed-off-by: Paul Moore <[email protected]>

(cherry picked from commit ee1a84fdfeedfd7362e9a8a8f15fedc3482ade2d)
Resolved conflict with b244131 which modified licences statements
and another commit which changed some selinux printk statements.
Change-Id: I793aa6e07e1672a2068e1995f2f45ae85d9d730c
Bug: 140252993
Signed-off-by: Jeff Vander Stoep <[email protected]>
Signed-off-by: Rapherion Rollerscaperers <[email protected]>
Signed-off-by: Chenyang Zhong <[email protected]>
Sorayukii pushed a commit to Sorayukii/kernel_sony_tama that referenced this issue Oct 4, 2024
Before this patch, during a policy reload the sidtab would become frozen
and trying to map a new context to SID would be unable to add a new
entry to sidtab and fail with -ENOMEM.

Such failures are usually propagated into userspace, which has no way of
distignuishing them from actual allocation failures and thus doesn't
handle them gracefully. Such situation can be triggered e.g. by the
following reproducer:

    while true; do load_policy; echo -n .; sleep 0.1; done &
    for (( i = 0; i < 1024; i++ )); do
        runcon -l s0:c$i echo -n x || break
        # or:
        # chcon -l s0:c$i <some_file> || break
    done

This patch overhauls the sidtab so it doesn't need to be frozen during
policy reload, thus solving the above problem.

The new SID table leverages the fact that SIDs are allocated
sequentially and are never invalidated and stores them in linear buckets
indexed by a tree structure. This brings several advantages:
  1. Fast SID -> context lookup - this lookup can now be done in
     logarithmic time complexity (usually in less than 4 array lookups)
     and can still be done safely without locking.
  2. No need to re-search the whole table on reverse lookup miss - after
     acquiring the spinlock only the newly added entries need to be
     searched, which means that reverse lookups that end up inserting a
     new entry are now about twice as fast.
  3. No need to freeze sidtab during policy reload - it is now possible
     to handle insertion of new entries even during sidtab conversion.

The tree structure of the new sidtab is able to grow automatically to up
to about 2^31 entries (at which point it should not have more than about
4 tree levels). The old sidtab had a theoretical capacity of almost 2^32
entries, but half of that is still more than enough since by that point
the reverse table lookups would become unusably slow anyway...

The number of entries per tree node is selected automatically so that
each node fits into a single page, which should be the easiest size for
kmalloc() to handle.

Note that the cache for reverse lookup is preserved with equivalent
logic. The only difference is that instead of storing pointers to the
hash table nodes it stores just the indices of the cached entries.

The new cache ensures that the indices are loaded/stored atomically, but
it still has the drawback that concurrent cache updates may mess up the
contents of the cache. Such situation however only reduces its
effectivity, not the correctness of lookups.

Tested by selinux-testsuite and thoroughly tortured by this simple
stress test:
```
function rand_cat() {
	echo $(( $RANDOM % 1024 ))
}

function do_work() {
	while true; do
		echo -n "system_u:system_r:kernel_t:s0:c$(rand_cat),c$(rand_cat)" \
			>/sys/fs/selinux/context 2>/dev/null || true
	done
}

do_work >/dev/null &
do_work >/dev/null &
do_work >/dev/null &

while load_policy; do echo -n .; sleep 0.1; done

kill %1
kill %2
kill %3
```

Link: SELinuxProject/selinux-kernel#38

Reported-by: Orion Poplawski <[email protected]>
Reported-by: Li Kun <[email protected]>
Signed-off-by: Ondrej Mosnacek <[email protected]>
Reviewed-by: Stephen Smalley <[email protected]>
[PM: most of sidtab.c merged by hand due to conflicts]
[PM: checkpatch fixes in mls.c, services.c, sidtab.c]
Signed-off-by: Paul Moore <[email protected]>

(cherry picked from commit ee1a84fdfeedfd7362e9a8a8f15fedc3482ade2d)
Resolved conflict with b244131 which modified licences statements
and another commit which changed some selinux printk statements.
Change-Id: I793aa6e07e1672a2068e1995f2f45ae85d9d730c
Bug: 140252993
Signed-off-by: Jeff Vander Stoep <[email protected]>
Signed-off-by: Rapherion Rollerscaperers <[email protected]>
Signed-off-by: Chenyang Zhong <[email protected]>
Sorayukii pushed a commit to Sorayukii/kernel_sony_tama that referenced this issue Oct 10, 2024
Before this patch, during a policy reload the sidtab would become frozen
and trying to map a new context to SID would be unable to add a new
entry to sidtab and fail with -ENOMEM.

Such failures are usually propagated into userspace, which has no way of
distignuishing them from actual allocation failures and thus doesn't
handle them gracefully. Such situation can be triggered e.g. by the
following reproducer:

    while true; do load_policy; echo -n .; sleep 0.1; done &
    for (( i = 0; i < 1024; i++ )); do
        runcon -l s0:c$i echo -n x || break
        # or:
        # chcon -l s0:c$i <some_file> || break
    done

This patch overhauls the sidtab so it doesn't need to be frozen during
policy reload, thus solving the above problem.

The new SID table leverages the fact that SIDs are allocated
sequentially and are never invalidated and stores them in linear buckets
indexed by a tree structure. This brings several advantages:
  1. Fast SID -> context lookup - this lookup can now be done in
     logarithmic time complexity (usually in less than 4 array lookups)
     and can still be done safely without locking.
  2. No need to re-search the whole table on reverse lookup miss - after
     acquiring the spinlock only the newly added entries need to be
     searched, which means that reverse lookups that end up inserting a
     new entry are now about twice as fast.
  3. No need to freeze sidtab during policy reload - it is now possible
     to handle insertion of new entries even during sidtab conversion.

The tree structure of the new sidtab is able to grow automatically to up
to about 2^31 entries (at which point it should not have more than about
4 tree levels). The old sidtab had a theoretical capacity of almost 2^32
entries, but half of that is still more than enough since by that point
the reverse table lookups would become unusably slow anyway...

The number of entries per tree node is selected automatically so that
each node fits into a single page, which should be the easiest size for
kmalloc() to handle.

Note that the cache for reverse lookup is preserved with equivalent
logic. The only difference is that instead of storing pointers to the
hash table nodes it stores just the indices of the cached entries.

The new cache ensures that the indices are loaded/stored atomically, but
it still has the drawback that concurrent cache updates may mess up the
contents of the cache. Such situation however only reduces its
effectivity, not the correctness of lookups.

Tested by selinux-testsuite and thoroughly tortured by this simple
stress test:
```
function rand_cat() {
	echo $(( $RANDOM % 1024 ))
}

function do_work() {
	while true; do
		echo -n "system_u:system_r:kernel_t:s0:c$(rand_cat),c$(rand_cat)" \
			>/sys/fs/selinux/context 2>/dev/null || true
	done
}

do_work >/dev/null &
do_work >/dev/null &
do_work >/dev/null &

while load_policy; do echo -n .; sleep 0.1; done

kill %1
kill %2
kill %3
```

Link: SELinuxProject/selinux-kernel#38

Reported-by: Orion Poplawski <[email protected]>
Reported-by: Li Kun <[email protected]>
Signed-off-by: Ondrej Mosnacek <[email protected]>
Reviewed-by: Stephen Smalley <[email protected]>
[PM: most of sidtab.c merged by hand due to conflicts]
[PM: checkpatch fixes in mls.c, services.c, sidtab.c]
Signed-off-by: Paul Moore <[email protected]>

(cherry picked from commit ee1a84fdfeedfd7362e9a8a8f15fedc3482ade2d)
Resolved conflict with b244131 which modified licences statements
and another commit which changed some selinux printk statements.
Change-Id: I793aa6e07e1672a2068e1995f2f45ae85d9d730c
Bug: 140252993
Signed-off-by: Jeff Vander Stoep <[email protected]>
Signed-off-by: Rapherion Rollerscaperers <[email protected]>
Signed-off-by: Chenyang Zhong <[email protected]>
Sorayukii pushed a commit to Sorayukii/kernel_sony_tama that referenced this issue Oct 10, 2024
Before this patch, during a policy reload the sidtab would become frozen
and trying to map a new context to SID would be unable to add a new
entry to sidtab and fail with -ENOMEM.

Such failures are usually propagated into userspace, which has no way of
distignuishing them from actual allocation failures and thus doesn't
handle them gracefully. Such situation can be triggered e.g. by the
following reproducer:

    while true; do load_policy; echo -n .; sleep 0.1; done &
    for (( i = 0; i < 1024; i++ )); do
        runcon -l s0:c$i echo -n x || break
        # or:
        # chcon -l s0:c$i <some_file> || break
    done

This patch overhauls the sidtab so it doesn't need to be frozen during
policy reload, thus solving the above problem.

The new SID table leverages the fact that SIDs are allocated
sequentially and are never invalidated and stores them in linear buckets
indexed by a tree structure. This brings several advantages:
  1. Fast SID -> context lookup - this lookup can now be done in
     logarithmic time complexity (usually in less than 4 array lookups)
     and can still be done safely without locking.
  2. No need to re-search the whole table on reverse lookup miss - after
     acquiring the spinlock only the newly added entries need to be
     searched, which means that reverse lookups that end up inserting a
     new entry are now about twice as fast.
  3. No need to freeze sidtab during policy reload - it is now possible
     to handle insertion of new entries even during sidtab conversion.

The tree structure of the new sidtab is able to grow automatically to up
to about 2^31 entries (at which point it should not have more than about
4 tree levels). The old sidtab had a theoretical capacity of almost 2^32
entries, but half of that is still more than enough since by that point
the reverse table lookups would become unusably slow anyway...

The number of entries per tree node is selected automatically so that
each node fits into a single page, which should be the easiest size for
kmalloc() to handle.

Note that the cache for reverse lookup is preserved with equivalent
logic. The only difference is that instead of storing pointers to the
hash table nodes it stores just the indices of the cached entries.

The new cache ensures that the indices are loaded/stored atomically, but
it still has the drawback that concurrent cache updates may mess up the
contents of the cache. Such situation however only reduces its
effectivity, not the correctness of lookups.

Tested by selinux-testsuite and thoroughly tortured by this simple
stress test:
```
function rand_cat() {
	echo $(( $RANDOM % 1024 ))
}

function do_work() {
	while true; do
		echo -n "system_u:system_r:kernel_t:s0:c$(rand_cat),c$(rand_cat)" \
			>/sys/fs/selinux/context 2>/dev/null || true
	done
}

do_work >/dev/null &
do_work >/dev/null &
do_work >/dev/null &

while load_policy; do echo -n .; sleep 0.1; done

kill %1
kill %2
kill %3
```

Link: SELinuxProject/selinux-kernel#38

Reported-by: Orion Poplawski <[email protected]>
Reported-by: Li Kun <[email protected]>
Signed-off-by: Ondrej Mosnacek <[email protected]>
Reviewed-by: Stephen Smalley <[email protected]>
[PM: most of sidtab.c merged by hand due to conflicts]
[PM: checkpatch fixes in mls.c, services.c, sidtab.c]
Signed-off-by: Paul Moore <[email protected]>

(cherry picked from commit ee1a84fdfeedfd7362e9a8a8f15fedc3482ade2d)
Resolved conflict with b244131 which modified licences statements
and another commit which changed some selinux printk statements.
Change-Id: I793aa6e07e1672a2068e1995f2f45ae85d9d730c
Bug: 140252993
Signed-off-by: Jeff Vander Stoep <[email protected]>
Signed-off-by: Rapherion Rollerscaperers <[email protected]>
Signed-off-by: Chenyang Zhong <[email protected]>
Sorayukii pushed a commit to Sorayukii/kernel_sony_tama that referenced this issue Oct 10, 2024
Before this patch, during a policy reload the sidtab would become frozen
and trying to map a new context to SID would be unable to add a new
entry to sidtab and fail with -ENOMEM.

Such failures are usually propagated into userspace, which has no way of
distignuishing them from actual allocation failures and thus doesn't
handle them gracefully. Such situation can be triggered e.g. by the
following reproducer:

    while true; do load_policy; echo -n .; sleep 0.1; done &
    for (( i = 0; i < 1024; i++ )); do
        runcon -l s0:c$i echo -n x || break
        # or:
        # chcon -l s0:c$i <some_file> || break
    done

This patch overhauls the sidtab so it doesn't need to be frozen during
policy reload, thus solving the above problem.

The new SID table leverages the fact that SIDs are allocated
sequentially and are never invalidated and stores them in linear buckets
indexed by a tree structure. This brings several advantages:
  1. Fast SID -> context lookup - this lookup can now be done in
     logarithmic time complexity (usually in less than 4 array lookups)
     and can still be done safely without locking.
  2. No need to re-search the whole table on reverse lookup miss - after
     acquiring the spinlock only the newly added entries need to be
     searched, which means that reverse lookups that end up inserting a
     new entry are now about twice as fast.
  3. No need to freeze sidtab during policy reload - it is now possible
     to handle insertion of new entries even during sidtab conversion.

The tree structure of the new sidtab is able to grow automatically to up
to about 2^31 entries (at which point it should not have more than about
4 tree levels). The old sidtab had a theoretical capacity of almost 2^32
entries, but half of that is still more than enough since by that point
the reverse table lookups would become unusably slow anyway...

The number of entries per tree node is selected automatically so that
each node fits into a single page, which should be the easiest size for
kmalloc() to handle.

Note that the cache for reverse lookup is preserved with equivalent
logic. The only difference is that instead of storing pointers to the
hash table nodes it stores just the indices of the cached entries.

The new cache ensures that the indices are loaded/stored atomically, but
it still has the drawback that concurrent cache updates may mess up the
contents of the cache. Such situation however only reduces its
effectivity, not the correctness of lookups.

Tested by selinux-testsuite and thoroughly tortured by this simple
stress test:
```
function rand_cat() {
	echo $(( $RANDOM % 1024 ))
}

function do_work() {
	while true; do
		echo -n "system_u:system_r:kernel_t:s0:c$(rand_cat),c$(rand_cat)" \
			>/sys/fs/selinux/context 2>/dev/null || true
	done
}

do_work >/dev/null &
do_work >/dev/null &
do_work >/dev/null &

while load_policy; do echo -n .; sleep 0.1; done

kill %1
kill %2
kill %3
```

Link: SELinuxProject/selinux-kernel#38

Reported-by: Orion Poplawski <[email protected]>
Reported-by: Li Kun <[email protected]>
Signed-off-by: Ondrej Mosnacek <[email protected]>
Reviewed-by: Stephen Smalley <[email protected]>
[PM: most of sidtab.c merged by hand due to conflicts]
[PM: checkpatch fixes in mls.c, services.c, sidtab.c]
Signed-off-by: Paul Moore <[email protected]>

(cherry picked from commit ee1a84fdfeedfd7362e9a8a8f15fedc3482ade2d)
Resolved conflict with b244131 which modified license statements
and another commit which changed some selinux printk statements.
Change-Id: I793aa6e07e1672a2068e1995f2f45ae85d9d730c
Bug: 140252993
Signed-off-by: Jeff Vander Stoep <[email protected]>
Signed-off-by: Rapherion Rollerscaperers <[email protected]>
Signed-off-by: Chenyang Zhong <[email protected]>