Performance overhead of DebugProbesImpl for non-concurrent coroutines #3527

Closed
qwwdfsad opened this issue Nov 15, 2022 · 3 comments

@qwwdfsad (Contributor) opened the issue Nov 15, 2022:

Snippet to reproduce:

```kotlin
import kotlinx.coroutines.*
import org.junit.Test

class DebugProbesTest : DebugTestBase() {
    // BFS-style recursive sequence built with the sequence{} builder; each yield()
    // suspends a non-concurrent coroutine, which is exactly what the debug probes intercept.
    fun <Node> generateRecursiveSequence(initialSequence: Sequence<Node>, children: (Node) -> Sequence<Node>): Sequence<Node> {
        return sequence {
            val initialIterator = initialSequence.iterator()
            if (!initialIterator.hasNext()) {
                return@sequence
            }
            val visited = HashSet<Node>()
            val sequences = ArrayDeque<Sequence<Node>>()
            sequences.addLast(initialIterator.asSequence())
            while (sequences.isNotEmpty()) {
                val currentSequence = sequences.removeFirst()
                for (node in currentSequence) {
                    if (visited.add(node)) {
                        yield(node)
                        sequences.addLast(children(node))
                    }
                }
            }
        }
    }

    @Volatile
    var a = 2
    // Stress loop: each iteration launches 8 coroutines that each drain a suspend-heavy
    // sequence{}, so every yield() goes through the installed debug probes.
    @Test
    fun stressGenerateRecursive() {
        while (true) {
            runBlocking {
                repeat(8) {
                    launch(Dispatchers.Default) {
                        val seq = generateRecursiveSequence((1..100).asSequence()) {
                            (1..it).asSequence()
                        }

                        for (i in seq) {
                            a = i
                        }
                    }
                }
            }
        }
    }
}
```

Profile (profiler screenshot attached in the original issue):

Up to 30% of the sampled time is spent in updateState.
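
For context, here is a minimal sketch of the pattern the profile points at. The names are hypothetical and this is not the real DebugProbesImpl code; it only illustrates why a single lock-guarded updateState becomes a hotspot when every suspension and resumption of every probed coroutine goes through it.

```kotlin
// Illustrative sketch only (hypothetical names, not the real DebugProbesImpl source):
// every suspend/resume of every probed coroutine funnels through one lock-guarded
// state update, so suspend-heavy sequence{} code serializes on that lock.
import java.util.concurrent.locks.ReentrantLock
import kotlin.concurrent.withLock
import kotlin.coroutines.Continuation

object LockedProbesSketch {
    private val lock = ReentrantLock()
    private val running = HashSet<Continuation<*>>()

    // Hot path: called on every suspension and resumption.
    fun updateState(frame: Continuation<*>, isRunning: Boolean) = lock.withLock {
        if (isRunning) running.add(frame) else running.remove(frame)
    }

    // The lock's only real benefit: a strongly consistent snapshot for dumps.
    fun dump(): List<Continuation<*>> = lock.withLock { running.toList() }
}
```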

@qwwdfsad (Contributor, Author) commented Nov 15, 2022:

It seems that the ReentrantLock is a leftover from the pre-concurrent implementation.
It does incur non-trivial overhead while providing only a single benefit -- a strongly consistent snapshot in read operations (dump, hierarchyToString, etc.). That guarantee seems far too strong for such an overhead, so the proposed solution is simply to remove the lock.

The tricky part is ensuring properly ordered access to the shared memory locations everywhere.
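
A minimal sketch of that direction, again with hypothetical names rather than the actual change that landed: a concurrent collection holds the captured coroutines, the hot path becomes lock-free, and dumps accept a weakly consistent snapshot instead of a strongly consistent one.

```kotlin
// Sketch of the lock-free direction (hypothetical names, not the actual fix):
// a concurrent set replaces the lock-guarded HashSet, and dumps become weakly
// consistent -- coroutines created/completed concurrently may or may not appear.
import java.util.concurrent.ConcurrentHashMap
import kotlin.coroutines.Continuation

object LockFreeProbesSketch {
    private val running: MutableSet<Continuation<*>> = ConcurrentHashMap.newKeySet()

    // Hot path is now lock-free; the concurrent set provides the required ordering.
    fun updateState(frame: Continuation<*>, isRunning: Boolean) {
        if (isRunning) running.add(frame) else running.remove(frame)
    }

    // Weakly consistent snapshot of the currently tracked coroutines.
    fun dump(): List<Continuation<*>> = running.toList()
}
```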

@maxmedvedev commented:
Hey, when is this fix going to be released?

@qwwdfsad (Contributor, Author) commented:
It's already been released as part of 1.7.0-Beta
