
Fix pylint install for travis. #3

Merged
merged 1 commit into from
Jan 19, 2016
Conversation

chumer
Member

@chumer chumer commented Jan 19, 2016

No description provided.

chumer added a commit that referenced this pull request Jan 19, 2016
Fix pylint install for travis.
@chumer chumer merged commit 43f9e27 into master Jan 19, 2016
@chumer chumer deleted the travis_update branch January 19, 2016 16:09
@@ -1,6 +1,12 @@
language: java
python:
- "2.7"
jdk:
- oraclejdk8
Contributor

I don't know what this line means, but I remind you that Truffle has to build with JDK7.

jtulach pushed a commit to jtulach/truffle that referenced this pull request Feb 16, 2016
dougxc pushed a commit that referenced this pull request Apr 15, 2016
….COM/truffle:fix-REPL-kill to master

Update the REPL debugger after change in handling of KillException.

* commit '34a435271d216211c708cf233a25df31abcf0565':
  REPL - tighten identification of KillException without dependency.
  REPL debugger:  fix handling of Kill reimplemented as subclass of ThreadDeath.
XiaohongGong pushed a commit to XiaohongGong/graal that referenced this pull request Mar 25, 2019
Currently, decoding a compressed pointer with a zero base generates the
following code:
add x0, xzr, x0, lsl #3

This patch optimizes it to:
lsl x0, x0, #3
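The arithmetic behind the optimization can be sketched in plain Java: decoding computes base + (narrow << shift), and with a zero heap base the add is redundant, leaving a single shift. This is a toy model, not Graal code; the class and constants are hypothetical.

```java
public class DecodeSketch {
    static final long HEAP_BASE = 0L; // zero-based compressed oops
    static final int SHIFT = 3;       // 8-byte object alignment

    // General decode: base + (narrow << shift), narrow treated as unsigned
    static long decodeWithBase(int narrow) {
        return HEAP_BASE + ((narrow & 0xFFFFFFFFL) << SHIFT);
    }

    // With a zero base the add collapses to a plain shift (the lsl above)
    static long decodeZeroBase(int narrow) {
        return (narrow & 0xFFFFFFFFL) << SHIFT;
    }

    public static void main(String[] args) {
        assert decodeWithBase(0x1234) == decodeZeroBase(0x1234);
        System.out.println(decodeZeroBase(0x1234)); // prints 37280
    }
}
```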

Here is a JMH benchmark that decodes a compressed pointer:

import org.openjdk.jmh.annotations.Benchmark;

public class UncompressPointer {
    private static final int field0 = 1000000;
    private int field1;

    static class A {
        B b;

        A(B b) {
            this.b = b;
        }
    }

    static class B {
        int field;

        B(int i) {
            this.field = i;
        }
    }

    static A[] arr;

    static {
        arr = new A[field0];
        for (int i = 0; i < field0; i++) {
            arr[i] = new A(new B(i));
        }
    }

    public static int func() {
        int result = 0;
        for (int i = 0; i < field0; i++) {
            result += arr[i].b.field;
        }
        return result;
    }

    @Benchmark
    public void jmhUncompressPointer() {
        field1 = func();
    }
}
And the performance results (lower is better):
                         Score      Error    Units
Without this patch     21598.534 ± 2188.242  us/op
With this patch        19322.791 ± 1219.022  us/op

Change-Id: I5af6799eb82470d5598f5991e9210f99874f458a
chrisseaton referenced this pull request in Shopify/graal Dec 4, 2019
Fix NPE in coverage when a source has no path
e1iu pushed a commit to e1iu/graal that referenced this pull request Feb 20, 2020
This patch implements two types of match rules, similar to C2:

1. Merge a shift pair into add/sub.
2. Merge an integral narrow into add/sub.

E.g., the following code is generated for `x + (((y << 56) >> 56) << 3)`:

        lsl x0, x3, #56
        asr x0, x0, #56
        add x0, x2, x0, lsl #3

After this patch, the assembly above is optimized to:

        add x0, x2, w3, sxtb #3

The test cases in this patch show more details about these match
rules.
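The identity behind the narrowing rule can be checked in plain Java: for a long, `(y << 56) >> 56` keeps the low byte sign-extended, which is exactly what the single `add ... sxtb` form computes. A minimal sketch (class and method names hypothetical):

```java
public class SxtbSketch {
    // The shifted sign-extension pattern the match rule recognizes
    static long viaShifts(long x, long y) {
        return x + (((y << 56) >> 56) << 3);
    }

    // Equivalent form: sign-extend the low byte, then shift and add,
    // which AArch64 expresses as one add with sxtb + lsl
    static long viaSxtb(long x, long y) {
        return x + (((long) (byte) y) << 3);
    }

    public static void main(String[] args) {
        long[] ys = {0L, 1L, -1L, 127L, 128L, 255L, -200L, Long.MAX_VALUE};
        for (long y : ys) {
            assert viaShifts(42L, y) == viaSxtb(42L, y);
        }
    }
}
```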

TEST_IMG: ubuntu/graal-test
TEST_CMD: safe ./projects/jdk/graal-build-test.sh       \
TEST_CMD:       --suite substratevm                     \
TEST_CMD:       --tags "fullbuild,test,helloworld"      \
TEST_CMD:       --refspec $GERRIT_REFSPEC
TEST_CMD:
TEST_CMD: safe ./projects/jdk/graal-build-test.sh       \
TEST_CMD:       --suite compiler                        \
TEST_CMD:       --refspec $GERRIT_REFSPEC

Change-Id: I71b392eefd990d2ad838afc06fccebb3438e467c
Jira: ENTLLT-2455
Jira: ENTLLT-1927
e1iu pushed a commit to e1iu/graal that referenced this pull request Feb 21, 2020
…/sub

(same commit message body as the Feb 20, 2020 commit above; Change-Id: I82502186c8e9745dc07c349c3f66eae573006165)
e1iu pushed a commit to e1iu/graal that referenced this pull request Feb 21, 2020

(same commit message body as the Feb 20, 2020 commit above; Change-Id: I82502186c8e9745dc07c349c3f66eae573006165)
e1iu pushed a commit to e1iu/graal that referenced this pull request Feb 21, 2020

(same commit message body as the Feb 20, 2020 commit above; Change-Id: I82502186c8e9745dc07c349c3f66eae573006165)
e1iu pushed a commit to e1iu/graal that referenced this pull request Mar 9, 2020

(same commit message body as the Feb 20, 2020 commit above; Change-Id: I82502186c8e9745dc07c349c3f66eae573006165)
XiaohongGong pushed a commit to XiaohongGong/graal that referenced this pull request Apr 20, 2020
…inter.

A trapping nullcheck might generate two uncompress instructions for the
same compressed oop in Graal. One is inserted by the backend when it
emits the nullcheck: if the pointer is a compressed object, it is
uncompressed before the nullcheck is emitted. The other is generated by
the normal uncompressing operation. These two instructions duplicate
each other.
The generated code on AArch64 looks like:

  ldr   w0, [x0,#112]
  lsl   x2, x0, #3      ; uncompressing (first)
  ldr   xzr, [x2]       ; implicit exception: deoptimizes
  ......                ; fixed operations
  lsl   x0, x0, #3      ; uncompressing (second)
  str   w1, [x0,#12]

A simple way to avoid this is to apply the nullcheck to the uncompressed
result, if it exists, instead of to the compressed pointer when
generating the trapping nullcheck.
With this modification, the code above can be optimized to:

  ldr   w0, [x0,#112]
  lsl   x0, x0, #3      ; uncompressing
  ldr   xzr, [x0]       ; implicit exception: deoptimizes
  ......                ; fixed operations
  str   w1, [x0,#12]

Change-Id: Iabfe47bbf984ed11c42555f84bdd0ccf2a5bdddb
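The fix can be modeled abstractly in Java: instead of decoding the compressed pointer once for the null check and again for the access, the single decoded value is reused for both. This is a toy sketch, not Graal's API; all names are hypothetical.

```java
public class NullCheckSketch {
    static int decodeCount = 0;

    // Uncompress: narrow << 3, assuming a zero heap base
    static long decode(int narrow) {
        decodeCount++;
        return (narrow & 0xFFFFFFFFL) << 3;
    }

    // Before: one decode for the null check, another for the access
    static long accessBefore(int narrow) {
        long forCheck = decode(narrow);
        if (forCheck == 0) throw new NullPointerException();
        return decode(narrow);
    }

    // After: the nullcheck is applied to the already-uncompressed result
    static long accessAfter(int narrow) {
        long decoded = decode(narrow);
        if (decoded == 0) throw new NullPointerException();
        return decoded;
    }

    public static void main(String[] args) {
        decodeCount = 0;
        accessBefore(5);
        assert decodeCount == 2; // duplicated uncompress
        decodeCount = 0;
        accessAfter(5);
        assert decodeCount == 1; // single uncompress
    }
}
```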
XiaohongGong pushed a commit to XiaohongGong/graal that referenced this pull request Apr 23, 2020
…inter.

(same commit message as the Apr 20, 2020 commit above; Change-Id: Iabfe47bbf984ed11c42555f84bdd0ccf2a5bdddb)
lazar-mitrovic added a commit to lazar-mitrovic/graal that referenced this pull request May 7, 2020
lazar-mitrovic added a commit to lazar-mitrovic/graal that referenced this pull request May 7, 2020
lazar-mitrovic added a commit to lazar-mitrovic/graal that referenced this pull request May 8, 2020
XiaohongGong pushed a commit to XiaohongGong/graal that referenced this pull request May 9, 2020
A trapping nullcheck might generate two uncompress instructions for the
same compressed oop on AArch64. One is inserted by the backend when it
emits the nullcheck: if the object is a compressed pointer, it is
uncompressed before the nullcheck is emitted. The other is generated by
the uncompression node used for the memory access. These two
instructions duplicate each other.

The generated code on AArch64 looks like:

  ldr   w0, [x0,#112]
  lsl   x2, x0, #3      ; uncompressing (first)
  ldr   xzr, [x2]       ; implicit exception: deoptimizes
  ......                ; fixed operations
  lsl   x0, x0, #3      ; uncompressing (second)
  str   w1, [x0,#12]

A simple way to avoid this is to create a new uncompression node for the
nullcheck and let value numbering remove the duplicate where possible.
Since the AMD64 address lowering can handle the uncompressing
computation for addresses, the created uncompression node is wrapped in
an address node and the nullcheck is finally applied to the address.

With this modification, the code above can be optimized to:

  ldr   w0, [x0,#112]
  lsl   x0, x0, #3      ; uncompressing
  ldr   xzr, [x0]       ; implicit exception: deoptimizes
  ......                ; fixed operations
  str   w1, [x0,#12]

Change-Id: Iabfe47bbf984ed11c42555f84bdd0ccf2a5bdddb
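Value numbering removes the duplicate because the two uncompression nodes are structurally identical. A toy global-value-numbering sketch, not Graal's implementation, with all names hypothetical:

```java
import java.util.HashMap;
import java.util.Map;

// Toy global value numbering: two structurally identical "uncompress"
// nodes collapse to one canonical node, mirroring how the duplicated
// lsl disappears from the generated code.
public class GvnSketch {
    record Node(String op, int input) {}

    static final Map<Node, Node> table = new HashMap<>();

    // Return the canonical representative for a node's (op, input) key
    static Node canonical(Node n) {
        return table.computeIfAbsent(n, k -> k);
    }

    public static void main(String[] args) {
        Node forNullcheck = canonical(new Node("uncompress", 7)); // from the nullcheck
        Node forAccess = canonical(new Node("uncompress", 7));    // from the memory access
        assert forNullcheck == forAccess; // duplicate eliminated
    }
}
```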
XiaohongGong pushed a commit to XiaohongGong/graal that referenced this pull request May 13, 2020

(same commit message as the May 9, 2020 commit above; Change-Id: Iabfe47bbf984ed11c42555f84bdd0ccf2a5bdddb)
XiaohongGong pushed a commit to XiaohongGong/graal that referenced this pull request Nov 6, 2020

(same commit message as the May 9, 2020 commit above; Change-Id: Iabfe47bbf984ed11c42555f84bdd0ccf2a5bdddb)
zakkak added a commit to zakkak/mandrel that referenced this pull request Oct 10, 2024
Backport of GR-52454: Include signal exit handlers in the image build if JFR