Ensure that failed unis are not cached #39762

Merged on Mar 28, 2024 (1 commit)
UniReturnTypeWithFailureTest.java (new file)
@@ -0,0 +1,53 @@
package io.quarkus.cache.test.runtime;

import static org.junit.jupiter.api.Assertions.assertEquals;
import static org.junit.jupiter.api.Assertions.assertThrows;

import jakarta.enterprise.context.ApplicationScoped;
import jakarta.inject.Inject;

import org.junit.jupiter.api.Test;
import org.junit.jupiter.api.extension.RegisterExtension;

import io.quarkus.cache.CacheResult;
import io.quarkus.test.QuarkusUnitTest;
import io.smallrye.mutiny.Uni;
import io.vertx.core.impl.NoStackTraceException;

public class UniReturnTypeWithFailureTest {

    @RegisterExtension
    static final QuarkusUnitTest TEST = new QuarkusUnitTest().withApplicationRoot((jar) -> jar.addClass(CachedService.class));

    @Inject
    CachedService cachedService;

    @Test
    void testCacheResult() {
        // First call fails: the failure must not be cached, so the method is invoked again on the next call.
        assertThrows(NoStackTraceException.class, () -> cachedService.cacheResult("k1").await().indefinitely());
        assertEquals(1, cachedService.getCacheResultInvocations());
        // Second call succeeds and its result is cached.
        assertEquals("", cachedService.cacheResult("k1").await().indefinitely());
        assertEquals(2, cachedService.getCacheResultInvocations());
        // Third call is served from the cache: the invocation count does not increase.
        assertEquals("", cachedService.cacheResult("k1").await().indefinitely());
        assertEquals(2, cachedService.getCacheResultInvocations());
    }

    @ApplicationScoped
    static class CachedService {

        private volatile int cacheResultInvocations;

        @CacheResult(cacheName = "test-cache")
        public Uni<String> cacheResult(String key) {
            // Fail on the first invocation only, then return an empty string.
            cacheResultInvocations++;
            if (cacheResultInvocations == 1) {
                return Uni.createFrom().failure(new NoStackTraceException("dummy"));
            }
            return Uni.createFrom().item(() -> new String());
        }

        public int getCacheResultInvocations() {
            return cacheResultInvocations;
        }
    }
}
@@ -71,6 +71,11 @@ public Uni<Object> apply(Object key) {
            throw new CacheException(e);
        }
    }
}).onFailure().call(new Function<>() {
    @Override
    public Uni<?> apply(Throwable throwable) {
        return cache.invalidate(key).replaceWith(throwable);
Member:
I was considering backporting this one and I have a question: my understanding is that we invalidate the cache key if we get an error? I would have expected us not to store anything in that case, and thus we wouldn't need to invalidate the cache. I'm especially worried about concurrent accesses, because I wouldn't expect us to invalidate a cache entry that could have been stored by another thread.

Now, it's reactive code, so I don't understand exactly what it does, but this concerns me a bit.

Contributor Author:
Your understanding is correct, but given the API, I am not sure how it can be done differently
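
A minimal sketch of the constraint described here, using a plain ConcurrentHashMap of CompletableFutures as a stand-in for the real Caffeine-backed cache (the class and method names are illustrative, not the actual interceptor API): with a computeIfAbsent-style lookup, the entry is stored before its outcome is known, so a failure can only be handled by invalidating the entry after the fact, which is what onFailure().call(...) does in the diff above.

import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Function;

class CacheThenInvalidateSketch {

    // Illustrative stand-in: values are cached as futures before their outcome is known.
    private final ConcurrentHashMap<String, CompletableFuture<String>> cache = new ConcurrentHashMap<>();

    CompletableFuture<String> get(String key, Function<String, CompletableFuture<String>> loader) {
        // The future is registered in the cache immediately; whether it will fail is unknown here.
        CompletableFuture<String> cached = cache.computeIfAbsent(key, loader);
        // The only remaining hook is to react once the failure materializes,
        // mirroring onFailure().call(...) followed by cache.invalidate(key) in the PR.
        return cached.whenComplete((value, failure) -> {
            if (failure != null) {
                cache.remove(key);
            }
        });
    }
}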

Member:
Yeah, my problem is that, again IIUC, we have a very short period of time during which the error is cached, so another thread can get it.
We also have a short period of time during which we might invalidate an actually valid key, but this is probably more theoretical (as I wouldn't expect another thread to store a cache entry) and less problematic.
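
For illustration, a deterministic sketch of that first window, using the same toy map-of-futures model as above (again, not the real cache implementation): between the moment the failed future lands in the cache and the moment it is removed, another caller looking up the same key observes the failure instead of triggering a fresh computation.

import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ConcurrentHashMap;

class FailedEntryWindowSketch {

    static final ConcurrentHashMap<String, CompletableFuture<String>> CACHE = new ConcurrentHashMap<>();

    public static void main(String[] args) {
        // Caller A: the computation fails while its future is already stored in the cache.
        CompletableFuture<String> failed = new CompletableFuture<>();
        CACHE.put("k1", failed);
        failed.completeExceptionally(new RuntimeException("dummy"));

        // The window: caller B looks up "k1" before A has invalidated it
        // and sees the failed entry instead of recomputing the value.
        CompletableFuture<String> seenByB = CACHE.get("k1");
        System.out.println("B observes a failed entry: " + seenByB.isCompletedExceptionally()); // true

        // Only afterwards does A invalidate the key (the onFailure().call(...) step).
        CACHE.remove("k1");
    }
}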

Contributor Author:
Yes, that's certainly true.

Member:
@gwenneg could we have some feedback from you here? Thanks!

Member:
@gsmet, you are right. There is a chance that a concurrent access replaces a correct value. Reactive or not does not change anything here.

For Redis, we could imagine using a Redis transaction (careful: it is not a database transaction). For other backends, I'm not sure. We could imagine adding an atomic operation that does this.
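
One possible shape of such an atomic operation for an in-memory cache, sketched on the same toy model as above (whether the real cache backends expose an equivalent is an open question, which is the point of the comment): instead of an unconditional invalidate, remove the entry only if it is still the failed future, so a value stored concurrently by another thread is left untouched. This would address the "invalidating a valid key" half of the concern, not the short window during which the failure is visible to other callers.

import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Function;

class CompareAndRemoveSketch {

    private final ConcurrentHashMap<String, CompletableFuture<String>> cache = new ConcurrentHashMap<>();

    CompletableFuture<String> get(String key, Function<String, CompletableFuture<String>> loader) {
        CompletableFuture<String> cached = cache.computeIfAbsent(key, loader);
        return cached.whenComplete((value, failure) -> {
            if (failure != null) {
                // Compare-and-remove: drops the entry only if it is still the failed future,
                // so a correct value stored by another thread in the meantime survives.
                cache.remove(key, cached);
            }
        });
    }
}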

    }
}).emitOn(new Executor() {
    // We need to make sure we go back to the original context when the cache value is computed.
    // Otherwise, we would always emit on the context having computed the value, which could