
Speed up writeVInt #62345

Merged · 11 commits · Sep 15, 2020
Conversation

nik9000 (Member) commented Sep 14, 2020

This speeds up `StreamOutput#writeVInt` quite a bit, which is nice because it is *very* commonly called when serializing aggregations. Well, when serializing anything: all "collections" serialize their size as a vint. Anyway, I was examining the serialization speed of `StringTerms`, and this saves about 30% of the write time there. I expect it'll be useful in other places too.
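
For readers following along, here is a minimal standalone sketch of the LEB128-style encoding that `writeVInt` uses: 7 payload bits per byte, with the high bit set on every byte except the last. This is an illustration only, not the actual `StreamOutput` code.

    // Encode i as a vint into out starting at offset; returns the new offset.
    static int encodeVInt(int i, byte[] out, int offset) {
        while ((i & ~0x7F) != 0) {
            out[offset++] = (byte) ((i & 0x7F) | 0x80); // continuation bit set
            i >>>= 7;
        }
        out[offset++] = (byte) i; // final byte, high bit clear
        return offset;
    }

Small positive values, the common case for collection sizes, take a single byte, which is what makes this method so hot and worth optimizing.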

elasticmachine (Collaborator):

Pinging @elastic/es-core-infra (:Core/Infra/Core)

elasticmachine added the Team:Core/Infra label Sep 14, 2020
@@ -78,7 +78,6 @@ cd fcml*
make
cd example/hsdis
make
cp .libs/libhsdis.so.0.0.0
nik9000 (Member, Author):

This was just wrong.

}

@Benchmark
public DelayableWriteable<InternalAggregations> serialize() {
nik9000 (Member, Author):

I'm unsure if we actually want this benchmark, especially compared to the one that @jimczi showed me. But it is fairly targeted, which can be useful.

nik9000 (Member, Author) commented Sep 14, 2020

> this saves about 30% of the write time for that

The attached benchmark goes from 90ms to 60ms to serialize the agg. That's with a million buckets, which is quite a lot, but nothing we don't bump into from time to time.

* together this saves quite a bit of time compared to a naive
* implementation.
*/
switch (Integer.numberOfLeadingZeros(i)) {
nik9000 (Member, Author):

This gets compiled to `lzcnt` and the JVM's `tableswitch`. At this point the overhead of the buffer and `BigArrays` dominates the method.
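
To make the tableswitch mapping concrete: a vint needs ceil(significantBits / 7) bytes, and the significant-bit count falls directly out of the leading-zero count. A hypothetical helper (not in the PR) illustrating the relationship:

    // Bytes needed to encode i as a vint, derived from the leading-zero count.
    static int vIntLength(int i) {
        int significantBits = 32 - Integer.numberOfLeadingZeros(i | 1); // at least 1
        return (significantBits + 6) / 7; // 7 payload bits per byte
    }
    // vIntLength(0x7f) == 1, vIntLength(0x80) == 2, vIntLength(-1) == 5

Each output length corresponds to a run of up to seven leading-zero values, which is why the switch cases come in groups of seven.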

original-brownbear (Member) left a comment:

Nice find! Let me know what you think about my inline point.

case 27:
case 26:
case 25:
writeByte((byte) i);
original-brownbear (Member):

I love this up until here :) The fact that we can special-case a leading-zero count > 24 is pretty significant, and I can see the ~30% performance gain as well.

Hard-coding all the possible offsets below, and doing all the buffer fetching and `writeBytes` calls inline with those hard-coded offsets, doesn't seem like a good idea to me though. It blows up the method size significantly for a tiny saving in CPU when evaluating the loop.

I benchmarked both this version and:

    public void writeVInt(int i) throws IOException {
        if (Integer.numberOfLeadingZeros(i) > 24) {
            writeByte((byte) i); // fits in 7 bits: single byte, no scratch buffer needed
        } else {
            final byte[] buffer = scratch.get();
            int index = 0;
            do {
                buffer[index++] = ((byte) ((i & 0x7f) | 0x80)); // 7 payload bits, continuation bit set
                i >>>= 7;
            } while ((i & ~0x7F) != 0);
            buffer[index++] = ((byte) i); // final byte, high bit clear
            writeBytes(buffer, 0, index);
        }
    }

and I can't see a statistically significant difference, so it's not worth the complication IMO.

I would in fact expect the above version with the loop to be faster than what is in this PR in the real world, because the smaller method size has a better chance of getting inlined in some places (73 vs 507 bytes on JDK14/Linux for me).
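
For reference, inlining decisions like this can be inspected with HotSpot's diagnostic flags, e.g. `java -XX:+UnlockDiagnosticVMOptions -XX:+PrintInlining`. The threshold that likely matters for a hot method is `-XX:FreqInlineSize` (325 bytecodes by default), which puts the 507-byte version over the limit and the 73-byte version comfortably under it.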

I suppose you could work around the code bloat by doing this:

        final int leadingZeros = Integer.numberOfLeadingZeros(i);
        if (leadingZeros > 24) {
            writeByte((byte) i);
        } else {
            final byte[] buffer = scratch.get();
            final int length;
            switch (leadingZeros) {
                case 24:
                case 23:
                case 22:
                case 21:
                case 20:
                case 19:
                case 18:
                    buffer[0] = (byte) (i & 0x7f | 0x80);
                    buffer[1] = (byte) (i >>> 7);
                    assert buffer[1] <= 0x7f;
                    length = 2;
                    break;
                case 17:
                case 16:
                case 15:
                case 14:
                case 13:
                case 12:
                case 11:
                    buffer[0] = (byte) (i & 0x7f | 0x80);
                    buffer[1] = (byte) ((i >>> 7) & 0x7f | 0x80);
                    buffer[2] = (byte) (i >>> 14);
                    assert buffer[2] <= 0x7f;
                    length = 3;
                    break;
                case 10:
                case 9:
                case 8:
                case 7:
                case 6:
                case 5:
                case 4:
                    buffer[0] = (byte) (i & 0x7f | 0x80);
                    buffer[1] = (byte) ((i >>> 7) & 0x7f | 0x80);
                    buffer[2] = (byte) ((i >>> 14) & 0x7f | 0x80);
                    buffer[3] = (byte) (i >>> 21);
                    assert buffer[3] <= 0x7f;
                    length = 4;
                    break;
                case 3:
                case 2:
                case 1:
                case 0:
                    buffer[0] = (byte) (i & 0x7f | 0x80);
                    buffer[1] = (byte) ((i >>> 7) & 0x7f | 0x80);
                    buffer[2] = (byte) ((i >>> 14) & 0x7f | 0x80);
                    buffer[3] = (byte) ((i >>> 21) & 0x7f | 0x80);
                    buffer[4] = (byte) (i >>> 28);
                    assert buffer[4] <= 0x7f;
                    length = 5;
                    break;
                default:
                    throw new UnsupportedOperationException(
                            "Can't encode [" + i + "]. Missing case for [" + Integer.numberOfLeadingZeros(i) + "]?"
                    );
            }
            writeBytes(buffer, 0, length);
        }

but I can't measure any performance difference versus the loop at all, so personally I'd go for the shorter loop just for simplicity's sake.

nik9000 (Member, Author):

I think you're right about your implementation being faster in practice. I put together a quick and dirty benchmark for writeVInt directly, and my hand-unrolled thing is faster there, by a pretty wide margin. But the benchmark for serializing the agg result is slower. I can see in the decompiled output that my method results in writeVInt not being inlined because it is too big, like you say, while yours does get inlined.

Optimize for size
Benchmark                                        (buckets)  Mode  Cnt   Score   Error  Units
StringTermsSerializationBenchmark.serialize           1000  avgt   10  59.064 ± 0.360  ms/op
StringTermsSerializationBenchmark.serializeVint       1000  avgt   10  17.211 ± 0.088  ms/op

Unroll loops
Benchmark                                        (buckets)  Mode  Cnt   Score   Error  Units
StringTermsSerializationBenchmark.serialize           1000  avgt   10  61.560 ± 0.124  ms/op
StringTermsSerializationBenchmark.serializeVint       1000  avgt   10  11.775 ± 0.048  ms/op

Unroll loops with if instead of switch
Benchmark                                        (buckets)  Mode  Cnt   Score   Error  Units
StringTermsSerializationBenchmark.serialize           1000  avgt   10  60.794 ± 1.069  ms/op
StringTermsSerializationBenchmark.serializeVint       1000  avgt   10  17.703 ± 0.075  ms/op

Compromise
Benchmark                                        (buckets)  Mode  Cnt   Score   Error  Units
StringTermsSerializationBenchmark.serialize           1000  avgt   10  62.106 ± 0.173  ms/op
StringTermsSerializationBenchmark.serializeVint       1000  avgt   10  16.425 ± 0.033  ms/op

The compromise solution doesn't seem to shrink the method enough.
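
For anyone who wants to reproduce the direct measurement, a quick-and-dirty JMH harness along these lines should do; the class name and inputs here are hypothetical stand-ins, not the benchmark attached to this PR.

    import java.util.Random;
    import org.openjdk.jmh.annotations.Benchmark;
    import org.openjdk.jmh.annotations.BenchmarkMode;
    import org.openjdk.jmh.annotations.Mode;
    import org.openjdk.jmh.annotations.Scope;
    import org.openjdk.jmh.annotations.Setup;
    import org.openjdk.jmh.annotations.State;

    @State(Scope.Thread)
    @BenchmarkMode(Mode.AverageTime)
    public class WriteVIntBenchmark {
        private int[] values;
        private byte[] buffer;

        @Setup
        public void setup() {
            values = new Random(42).ints(10_000).toArray(); // fixed seed for stable runs
            buffer = new byte[values.length * 5];           // worst case: 5 bytes per vint
        }

        @Benchmark
        public int writeVInts() {
            int offset = 0;
            for (int i : values) {
                while ((i & ~0x7F) != 0) {
                    buffer[offset++] = (byte) ((i & 0x7F) | 0x80);
                    i >>>= 7;
                }
                buffer[offset++] = (byte) i;
            }
            return offset; // returned so JMH keeps the work alive
        }
    }

Swapping the loop body for the unrolled or compromise variants is enough to compare them head to head, though as the numbers above show, a targeted benchmark like this can disagree with the end-to-end one once inlining enters the picture.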

original-brownbear (Member):

Thanks for testing this!

}
return buffer.bytes();
}
}
nik9000 (Member, Author):

I think we probably don't want to keep this benchmark, but I pushed it so you could see what I was using for the numbers I shared.

original-brownbear (Member) left a comment:

LGTM (excluding the vint benchmark; that doesn't really add much in isolation due to method-size effects).

It was good to look at, but we don't need to commit it.
nik9000 merged commit dfc4539 into elastic:master Sep 15, 2020
nik9000 added a commit to nik9000/elasticsearch that referenced this pull request Sep 15, 2020
nik9000 added a commit that referenced this pull request Sep 15, 2020
Labels
:Core/Infra/Core, >feature, Team:Core/Infra, v7.10.0, v8.0.0-alpha1