AES-CBC #380
Conversation
3b64d33 to 755c978
I did a first pass, but still need to go through how the buffering works and how we handle the padding/non-padding cases. I normally don't like to police commit messages and PR descriptions, but this PR could use some more context and explanation around what's going on. This helps with the review process and also helps future developers figure out what happened in the code. This is especially important for large diffs like this. Things that would be helpful to explain:
The README.md file needs to be updated to include the new algorithm. Curious on your thoughts why we pulled in the NIST test vectors for this algorithm. ACCP doesn't consistently pull in NIST vectors and tests for all algorithms. This PR accounts for half the rsp.gz files that we've imported. It's unclear to me what decision criteria is used for when we do vector testing in ACCP in addition to AWS-LC's extensive vector testing.
src/com/amazon/corretto/crypto/provider/AmazonCorrettoCryptoProvider.java
@ExtendWith(TestResultLogger.class)
@Execution(ExecutionMode.CONCURRENT)
@ResourceLock(value = TestUtil.RESOURCE_GLOBAL, mode = ResourceAccessMode.READ)
public class AesCbcNistTest {
I'm unsure about whether or not we should be including NIST test vectors in ACCP. This is not something we do for all algorithms in ACCP. More testing is always great, but I think it's a bit redundant with AWS-LC's testing. Our focus should be more on tests that surface issues around interfacing with the native layer and less so around exhaustive validation of the cryptographic implementation.
For other algorithms, I noticed that we have known answer tests (KATs), so I looked for AES/CBC KATs and the NIST website showed up. I'm just trying to treat AES/CBC the same way that other algorithms have been treated.
I understand that we have them for other algorithms, but not for *all* algorithms. I'm trying to understand what our decision criteria is for including or not including KATs. It seems pretty random. Why are we exhaustively testing every single KAT style/variant for CBC but not for GCM? It just seems off to me for us to be KAT testing AES-CBC much more thoroughly than literally every other algorithm in ACCP.
I guess we never formally specified when to add KATs. My rule of thumb is to search online for KATs and if found, then add.
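For context, a minimal sketch (not the tests in this PR) of what a KAT-style check looks like. The class and helper names are invented, the provider name follows ACCP's convention, and the vector is the well-known single-block AES-128-CBC example from NIST SP 800-38A:

```java
import static org.junit.jupiter.api.Assertions.assertArrayEquals;

import com.amazon.corretto.crypto.provider.AmazonCorrettoCryptoProvider;
import java.security.Security;
import javax.crypto.Cipher;
import javax.crypto.spec.IvParameterSpec;
import javax.crypto.spec.SecretKeySpec;
import org.junit.jupiter.api.Test;

public class AesCbcKatSketch {
  static {
    // Register ACCP so the named provider below resolves.
    Security.insertProviderAt(AmazonCorrettoCryptoProvider.INSTANCE, 1);
  }

  @Test
  public void singleBlockVector() throws Exception {
    // Single-block AES-128-CBC example vector from NIST SP 800-38A.
    byte[] key = hex("2b7e151628aed2a6abf7158809cf4f3c");
    byte[] iv = hex("000102030405060708090a0b0c0d0e0f");
    byte[] plaintext = hex("6bc1bee22e409f96e93d7e117393172a");
    byte[] expected = hex("7649abac8119b246cee98e9b12e9197d");

    Cipher cipher = Cipher.getInstance("AES/CBC/NoPadding", "AmazonCorrettoCryptoProvider");
    cipher.init(Cipher.ENCRYPT_MODE, new SecretKeySpec(key, "AES"), new IvParameterSpec(iv));
    assertArrayEquals(expected, cipher.doFinal(plaintext));
  }

  private static byte[] hex(String s) {
    byte[] out = new byte[s.length() / 2];
    for (int i = 0; i < out.length; i++) {
      out[i] = (byte) Integer.parseInt(s.substring(2 * i, 2 * i + 2), 16);
    }
    return out;
  }
}
```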
* Added comments to the code and moved some methods around to improve readability. * Added a test to show that BouncyCastle recognizes PKCS7Padding but SunJCE does not.
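For illustration only (not the test added in this commit, and assuming bcprov is on the classpath): SunJCE registers this padding under the name PKCS5Padding, so asking it for PKCS7Padding fails, while BouncyCastle accepts the PKCS7Padding name.

```java
import java.security.GeneralSecurityException;
import java.security.Security;
import javax.crypto.Cipher;
import org.bouncycastle.jce.provider.BouncyCastleProvider;

public class PaddingNameSketch {
  public static void main(String[] args) throws Exception {
    Security.addProvider(new BouncyCastleProvider());

    // BouncyCastle recognizes the "PKCS7Padding" transformation name.
    Cipher bc = Cipher.getInstance("AES/CBC/PKCS7Padding", "BC");
    System.out.println("BC accepted: " + bc.getAlgorithm());

    // SunJCE only registers "PKCS5Padding" and rejects "PKCS7Padding".
    try {
      Cipher.getInstance("AES/CBC/PKCS7Padding", "SunJCE");
      System.out.println("SunJCE accepted PKCS7Padding (unexpected)");
    } catch (GeneralSecurityException e) {
      System.out.println("SunJCE rejected PKCS7Padding: " + e.getMessage());
    }
  }
}
```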
I think I've answered all your comments.
Yes, I see it all in the code. To clarify though, I was really hoping to see some of these design decisions documented in the PR itself. That helps me a lot when trying to reason out why certain code looks the way it does.
Still reviewing, about halfway through.
@@ -603,5 +603,30 @@ class JByteArrayCritical {
    jbyteArray jarray_;
};
Suggest some documentation here. What is this class? How does it differ from other buffer types?
Sure.
Sorry, I'd meant doc comments for the new ShimBuffer, but it's not super important.
No worries. I added comments to ShimByteBuffer as well.
csrc/buffer.cpp
SimpleBuffer::SimpleBuffer(int size)
    : buffer_(nullptr)
{
    buffer_ = (uint8_t*)malloc(size);
Are we sure we want to use libc malloc here? Why not use the same allocator as in other primitives?
I think we only have a SecureBuffer, which allocates on the stack. The scratch buffer used when the output clobbers the input depends on the input/output buffers at runtime, so we don't know the size beforehand.

I don't think there is any issue in using malloc/free. The only concern could be that we do not zero the memory when it is deallocated. The buffer is used to hold ciphertext/plaintext, so in that sense it's as secure as how Java buffers are treated by the garbage collector.

I'm open to suggestions on whether zeroing the memory is needed before freeing it.
Two things:

- The code is updated so that the memory is zeroed before it is freed (a rough sketch of this pattern follows below).
- I see that we have SecureAlloc that could potentially be used. The main issue with this class is that it throws `std::bad_alloc`. In our JNI code, we only catch `java_ex`, which means any other type of exception could potentially result in undefined behavior in JNI calls.
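A rough sketch of that zero-before-free pattern, mirroring the SimpleBuffer snippet above rather than the exact PR code, and assuming `OPENSSL_cleanse` from AWS-LC's `<openssl/mem.h>`:

```cpp
#include <cstdint>
#include <cstdlib>
#include <openssl/mem.h>

// Illustrative only; class shape taken from the snippet above.
class SimpleBuffer {
public:
    explicit SimpleBuffer(int size) : size_(size), buffer_((uint8_t*)malloc(size)) {}

    ~SimpleBuffer()
    {
        if (buffer_ != nullptr) {
            // Scrub the plaintext/ciphertext scratch space before handing it back to libc.
            OPENSSL_cleanse(buffer_, size_);
            free(buffer_);
        }
    }

    uint8_t* get() { return buffer_; }

private:
    int size_;
    uint8_t* buffer_;
};
```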
Replacing SecureAlloc with `OPENSSL_cleanse` makes sense. I wonder if we should extend that idea a little further, using `OPENSSL_malloc` and `OPENSSL_free` in SimpleBuffer. This would provide two benefits:

- Automatically call `OPENSSL_cleanse` on free.
- Automatically use the same underlying allocator as AWS-LC in case they ever decide to switch to e.g. jemalloc or something (due to portability concerns, IMO this is unlikely).

Not a blocker at all, just perhaps something to consider.
That's a great suggestion. I changed it so that it will use OPENSSL_{malloc,free}.
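A minimal sketch of what that swap could look like (not necessarily the exact code in the PR):

```cpp
#include <cstdint>
#include <openssl/mem.h>

// Illustrative only. AWS-LC's OPENSSL_free cleanses the allocation before
// releasing it, so no explicit OPENSSL_cleanse call is needed in the destructor.
class SimpleBuffer {
public:
    explicit SimpleBuffer(int size)
        : buffer_(static_cast<uint8_t*>(OPENSSL_malloc(size)))
    {
    }

    ~SimpleBuffer() { OPENSSL_free(buffer_); }

    uint8_t* get() { return buffer_; }

private:
    uint8_t* buffer_;
};
```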
LGTM, just two small open comment threads to consider here or in future PRs.
Description of changes:
Adding support for AES-CBC with no padding and with PKCS7 padding.
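As a rough usage sketch (provider registration and the PKCS7Padding transformation name are assumed from ACCP's conventions, not quoted from this PR):

```java
import com.amazon.corretto.crypto.provider.AmazonCorrettoCryptoProvider;
import java.nio.charset.StandardCharsets;
import java.security.SecureRandom;
import java.security.Security;
import javax.crypto.Cipher;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;
import javax.crypto.spec.IvParameterSpec;

public class AesCbcUsageSketch {
  public static void main(String[] args) throws Exception {
    // Make ACCP the preferred provider.
    Security.insertProviderAt(AmazonCorrettoCryptoProvider.INSTANCE, 1);

    KeyGenerator kg = KeyGenerator.getInstance("AES");
    kg.init(256);
    SecretKey key = kg.generateKey();

    // CBC requires a fresh, unpredictable 16-byte IV per encryption.
    byte[] iv = new byte[16];
    new SecureRandom().nextBytes(iv);

    Cipher enc = Cipher.getInstance("AES/CBC/PKCS7Padding", "AmazonCorrettoCryptoProvider");
    enc.init(Cipher.ENCRYPT_MODE, key, new IvParameterSpec(iv));
    byte[] ciphertext = enc.doFinal("hello cbc".getBytes(StandardCharsets.UTF_8));

    Cipher dec = Cipher.getInstance("AES/CBC/PKCS7Padding", "AmazonCorrettoCryptoProvider");
    dec.init(Cipher.DECRYPT_MODE, key, new IvParameterSpec(iv));
    System.out.println(new String(dec.doFinal(ciphertext), StandardCharsets.UTF_8));
  }
}
```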
By submitting this pull request, I confirm that you can use, modify, copy, and redistribute this contribution, under the terms of your choice.