
Add record field serialization #342

Merged: 9 commits into master from bschwab--record-field-serialization on Mar 23, 2017

Conversation

@BenSchwab (Contributor) commented Mar 22, 2017

While doing work on the LRU cache I realized it would be nice to have the record serializer available for record size estimation, so I took a crack at closing #265.

Our current network parsing setup follows the pattern BufferedSource -> Map<String, Object> -> T data, where the BufferedSource represents JSON and the transformation to Map<String, Object> is performed by ResponseJsonStreamReader.

When reading from the cache, we want to go from Cache -> Map<String, Object>, where this map must be in nearly the same form as the network response; the difference is that this map replaces objects with CacheReferences.

It seemed bad to duplicate all the code in ResponseJsonStreamReader, so I opened ResponseJsonStreamReader for extension and wrote a very light CacheJsonStreamReader which strongly types CacheReferences instead of parsing them as strings. I don't love relying on inheritance (although it is very lightweight), so @sav007, if you have any other ideas for how to share the code here, please let me know.

Currently, when reading from the SQL store we get a string. This might be bad for the memory concerns that @digitalbuddha had, so I added a fromBufferedSource method to the FieldsAdapter (which fromJson uses under the hood). But we might need to extend the SqlStore to expose some sort of streaming byte source for reading records.
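
For reference, a minimal sketch of the inheritance approach described above. Only the constructor and the readScalar hook appear later in this diff, so the override's parameter type and the CacheReference.deserialize helper shown here are assumptions:

// Sketch only, not the PR's exact code. Assumes ResponseJsonStreamReader exposes a
// readScalar(...) hook for overriding and that CacheReference offers a deserialize(String)
// factory; both are assumptions based on the snippets quoted in this conversation.
public class CacheJsonStreamReader extends ResponseJsonStreamReader {

  public CacheJsonStreamReader(BufferedSourceJsonReader jsonReader) {
    super(jsonReader);
  }

  @Override protected Object readScalar(ResponseJsonStreamReader streamReader) throws IOException {
    Object scalar = super.readScalar(streamReader);
    if (scalar instanceof String) {
      String scalarString = (String) scalar;
      if (CacheReference.isSerializedReference(scalarString)) {
        // Strongly type cache references instead of leaving them as plain strings.
        return CacheReference.deserialize(scalarString);
      }
    }
    return scalar;
  }
}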

@@ -36,7 +36,7 @@ dependencies {
   compile dep.supportAnnotations
   compile dep.okHttp
   compile dep.moshi
-  provided project(":apollo-api")
+  compile project(":apollo-api")
@BenSchwab (Contributor, Author):

I had to change this to compile for the test to work (SQLStoreTest now needs access to Optional in :apollo-api). @marwanad, any reason this needed to be provided?

@BenSchwab (Contributor, Author):

Changed it back to provided, but added compile to the test variants.

@marwanad (Contributor) commented Mar 23, 2017

iirc, at that point the apollo-api dependency wasn't really needed at all and we hadn't decided how the runtime would look yet; we only had an Apollo class that wraps Retrofit and a Utils class. At that time, we also changed the plugin to add apollo-api deps to the compile dependency set.

We're gonna have to change it to compile anyways (#311). I guess for now we can keep it this way and I'll revisit this when I reopen #311.

public final class Record {

  private final String key;
  private final Map<String, Object> fields;
  private int sizeEstimateBytes = 0;
Contributor:

Can we remove memoizedSize and instead use sizeEstimateBytes = -1?

Contributor:

Also, I can't find the place where we set a value for this field?

@@ -51,4 +83,17 @@ public String key() {
  public Map<String, Object> fields() {
    return fields;
  }

  public int sizeEstimateBytes() {
Contributor:

Who will call this, and when?

@BenSchwab (Contributor, Author) commented Mar 23, 2017

This will be used by a bounded-memory LRU cache (and possibly the SQL cache, though SQL might have a better way to track size). In the memory LRU cache, every time a record is changed (via a merge) we would have to recalculate a size estimate.

I don't know that I love using toJson to determine the size, as that isn't very performant, and performance will be a requirement for the memory cache.

I'm going to remove the size stuff from this diff, as the serialization for Record is valuable in its own right.
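
For illustration, a bounded-memory LRU cache could consume such an estimate roughly as in the sketch below; android.util.LruCache and the byte budget are assumptions used purely as an example, not something this PR adds:

// Hypothetical consumer of Record#sizeEstimateBytes(); android.util.LruCache and the
// budget below are illustration-only assumptions, not code from this PR.
int maxSizeBytes = 10 * 1024 * 1024; // e.g. a 10 MB budget
LruCache<String, Record> recordCache = new LruCache<String, Record>(maxSizeBytes) {
  @Override protected int sizeOf(String key, Record record) {
    // Evict by the estimated byte size of each record rather than by entry count.
    return record.sizeEstimateBytes();
  }
};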

    return calculateSizeEstimate(FieldsAdapter.create());
  }

  private int calculateSizeEstimate(FieldsAdapter adapter) {
Contributor:

Can we inject this calculation into the place where we create the Record? For instance, in com.apollographql.android.cache.normalized.sql.SqlStore#cursorToRecord we already have String jsonOfFields = cursor.getString(2); so can we take the size of jsonOfFields as the size of the record?
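
A rough sketch of that suggestion; cursor.getString(2) comes from the comment above, while the column layout, the Record constructor, and the size setter are assumptions:

// Sketch only: assumes a Record(key, fields) constructor and a hypothetical
// setSizeEstimateBytes(int) setter; only cursor.getString(2) is taken from the comment.
Record cursorToRecord(Cursor cursor) throws IOException {
  String key = cursor.getString(1);            // assumed column layout
  String jsonOfFields = cursor.getString(2);   // field JSON as stored in SQLite
  Record record = new Record(key, fieldsAdapter.fromJson(jsonOfFields));
  record.setSizeEstimateBytes(jsonOfFields.getBytes().length); // reuse the string we already have
  return record;
}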

@BenSchwab changed the title from "Add record field serialiation, record size estimation" to "Add record field serialization" on Mar 23, 2017
@BenSchwab (Contributor, Author):

Decided to remove the record sizing stuff from this PR. I don't think calculating the size of the JSON string is the "right" way to do this, especially for the primary use case, which is the memory LRU cache.

    Object scalar = super.readScalar(streamReader);
    if (scalar instanceof String) {
      String scalarString = (String) scalar;
      if (CacheReference.isSerializedReference((String) scalar)) {
Contributor:

nit: CacheReference.isSerializedReference(scalarString)

    return new CacheReference(matcher.group(1));
  }

  public static boolean isSerializedReference(String value) {
Contributor:

nit: should it be called canDeserialize?
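
For context, a minimal sketch of the regex-based helpers this snippet implies; matcher.group(1) is taken from the diff above, but the actual serialization pattern is not shown in this excerpt, so the literal below is an assumption (uses java.util.regex.Pattern and Matcher):

// Sketch only: the pattern's literal format is an assumption; only the
// matcher.group(1) extraction mirrors the diff quoted above.
private static final Pattern SERIALIZATION_PATTERN = Pattern.compile("ApolloCacheReference\\{(.*)\\}");

public static boolean isSerializedReference(String value) {
  return SERIALIZATION_PATTERN.matcher(value).matches();
}

public static CacheReference deserialize(String serializedReference) {
  Matcher matcher = SERIALIZATION_PATTERN.matcher(serializedReference);
  if (!matcher.matches()) {
    throw new IllegalArgumentException("Not a serialized cache reference: " + serializedReference);
  }
  return new CacheReference(matcher.group(1));
}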

  public static final String IN_MEMORY_DB = null; //null means db is memory only
  private SqlStore sqlStore;

  @Before
  public void setUp() {
    ApolloSqlHelper apolloSqlHelper = ApolloSqlHelper.create(InstrumentationRegistry.getTargetContext(),
        IN_MEMORY_DB);
-   sqlStore = SqlStore.create(apolloSqlHelper, FieldsAdapter.create(new Moshi.Builder().build()));
+   sqlStore = SqlStore.create(apolloSqlHelper, FieldsAdapter.create());
Contributor:

Any reason for this change? I'd prefer to be able to use one Moshi instance for the whole app, Apollo or otherwise.

@BenSchwab (Contributor, Author):

I might be missing something about how Moshi works, but it seems dangerous to register our BigDecimal converter on a global Moshi instance, in case there is an existing converter. (Currently, it seems we do not allow a user to pass in a Moshi instance to Apollo anyway.)

We could register these converters on Apollo's Moshi instance in ApolloClient. But that Moshi instance seems primarily concerned with serializing variable objects, and initializing cache serializers there would take some thought about how to do it properly without spaghetti dependencies in the ApolloClient builder.
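
To illustrate the concern, registering the converter on a dedicated Moshi instance would look roughly like the sketch below; the inline BigDecimal adapter is a stand-in assumption, not Apollo's actual converter:

// Sketch only: a private Moshi instance keeps this registration from colliding with any
// BigDecimal adapter an app-wide Moshi might already have. The adapter body is a stand-in.
JsonAdapter<BigDecimal> bigDecimalAdapter = new JsonAdapter<BigDecimal>() {
  @Override public BigDecimal fromJson(JsonReader reader) throws IOException {
    return new BigDecimal(reader.nextString());
  }

  @Override public void toJson(JsonWriter writer, BigDecimal value) throws IOException {
    writer.value(value.toString());
  }
};
Moshi cacheMoshi = new Moshi.Builder()
    .add(BigDecimal.class, bigDecimalAdapter)
    .build();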

Contributor:

My thinking was from a performance standpoint. Moshi caches adapters, but it's on a per-instance level. So say I have the same date converter on the app and the Apollo client; I'd prefer not to have to cache both. Currently our app works with one instance of Gson. It doesn't have to be solved now; I can do a PR if it becomes an issue.


  public Map<String, Object> fromJson(String fields) throws IOException {
    final BufferedSource bufferSource = Okio.buffer(Okio.source(new ByteArrayInputStream(fields.getBytes())));
    return fromBufferedSource(bufferSource);
Contributor:

nitpick: it will read better as from(bufferedSource)


  public Map<String, Object> fromBufferedSource(BufferedSource bufferedSource) throws IOException {
    CacheJsonStreamReader jsonStreamReader
        = new CacheJsonStreamReader(new BufferedSourceJsonReader(bufferedSource));
@digitalbuddha (Contributor) commented Mar 23, 2017

Personally I like APIs with static calls for object creation, like ApolloReader.cacheJsonStream(ApolloReader.bufferedSourceJson(bufferedSource)). This way you can do static imports and it becomes cacheJsonStream(bufferedSourceJson(source)).
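
A sketch of that suggestion; the ApolloReader helper and its method names are hypothetical, taken from the comment above rather than from the PR, while the constructors it wraps appear in this diff:

// Hypothetical helper matching the suggestion above; names are not from the PR.
public final class ApolloReader {
  private ApolloReader() {
  }

  public static BufferedSourceJsonReader bufferedSourceJson(BufferedSource source) {
    return new BufferedSourceJsonReader(source);
  }

  public static CacheJsonStreamReader cacheJsonStream(BufferedSourceJsonReader jsonReader) {
    return new CacheJsonStreamReader(jsonReader);
  }
}

// With static imports the call site reads: cacheJsonStream(bufferedSourceJson(source))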

@digitalbuddha (Contributor) left a comment:

Nothing major, just a few nitpicks :-)

@BenSchwab merged commit 057cfc3 into master on Mar 23, 2017
@BenSchwab deleted the bschwab--record-field-serialization branch on March 23, 2017 at 23:45
4 participants