Add record field serialization #342
Conversation
apollo-runtime/build.gradle
Outdated
@@ -36,7 +36,7 @@ dependencies {
   compile dep.supportAnnotations
   compile dep.okHttp
   compile dep.moshi
-  provided project(":apollo-api")
+  compile project(":apollo-api")
I had to change this to compile for the test to work (SQLStoreTest now needs access to Optional in :apollo-api). @marwanad any reason this needed to be provided?
Changed it back to provided, but added compile to the test variants.
IIRC, at that point the apollo-api dependency wasn't really needed at all and we hadn't decided how the runtime would look yet; we only had an Apollo class that wraps Retrofit and a Utils class. At that time, we also changed the plugin to add the apollo-api deps to the compile dependency set.
We're gonna have to change it to compile anyways (#311). I guess for now we can keep it this way and I'll revisit this when I reopen #311.
public final class Record {

  private final String key;
  private final Map<String, Object> fields;
  private int sizeEstimateBytes = 0;
Can we remove memoizedSize and instead use sizeEstimateBytes = -1?
Also, I can't find the place where we set a value for this field?
@@ -51,4 +83,17 @@ public String key() {
  public Map<String, Object> fields() {
    return fields;
  }

  public int sizeEstimateBytes() {
Who will call this, and when?
This will be used by a bounded memory LRU cache (and possibly the SQL cache, though SQL might have a better way to track size). In the memory LRU cache, every time a record is changed (via a merge) we would have to recalculate a size estimate.
I don't know if I love using toJSON to determine the size, as that isn't very performant, and performance will be a requirement for the memory cache.
I'm going to remove the size stuff from this diff, as the serialization for Record is valuable in its own right.
    return calculateSizeEstimate(FieldsAdapter.create());
  }

  private int calculateSizeEstimate(FieldsAdapter adapter) {
Can we inject this calculation into the place where we create the Record? For instance, in com.apollographql.android.cache.normalized.sql.SqlStore#cursorToRecord we already have String jsonOfFields = cursor.getString(2); can we take the size of jsonOfFields as the size for the record?
Decided to remove the record sizing stuff from this PR. I don't think calculating the size of the JSON string is the "right" way to do this, especially for the primary use case, which is the memory LRU cache.
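For reference, the suggestion above would land roughly as follows inside SqlStore#cursorToRecord; the key column index, the Record constructor, and the size setter are assumptions made for illustration.

  // Hypothetical sketch: reuse the JSON string already read from the cursor as the
  // record's size estimate instead of re-serializing the field map later.
  Record cursorToRecord(Cursor cursor) throws IOException {
    String key = cursor.getString(1);                              // assumed key column
    String jsonOfFields = cursor.getString(2);
    Record record = new Record(key, fieldsAdapter.fromJson(jsonOfFields));
    record.setSizeEstimateBytes(jsonOfFields.getBytes().length);  // assumed setter
    return record;
  }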
    Object scalar = super.readScalar(streamReader);
    if (scalar instanceof String) {
      String scalarString = (String) scalar;
      if (CacheReference.isSerializedReference((String) scalar)) {
nit: CacheReference.isSerializedReference(scalarString)
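Applied to the override above, the change is just to reuse the already-cast string. A hedged sketch of the whole override, with the readScalar parameter type and the deserialize helper assumed from the snippets in this diff:

  @Override Object readScalar(BufferedSourceJsonReader streamReader) throws IOException {
    Object scalar = super.readScalar(streamReader);
    if (scalar instanceof String) {
      String scalarString = (String) scalar;
      if (CacheReference.isSerializedReference(scalarString)) {
        // Return a strongly typed reference instead of the raw serialized string.
        return CacheReference.deserialize(scalarString);
      }
    }
    return scalar;
  }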
    return new CacheReference(matcher.group(1));
  }

  public static boolean isSerializedReference(String value) {
nit: should it be called canDeserialize?
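For context, a sketch of how this pair of helpers presumably fits together; the serialization marker format here is an assumption for illustration only.

  // Hypothetical sketch of the CacheReference serialization helpers.
  private static final Pattern SERIALIZATION_PATTERN =
      Pattern.compile("ApolloCacheReference\\{(.*)\\}");   // assumed marker format

  public static boolean isSerializedReference(String value) {
    return SERIALIZATION_PATTERN.matcher(value).matches();
  }

  public static CacheReference deserialize(String serializedReference) {
    Matcher matcher = SERIALIZATION_PATTERN.matcher(serializedReference);
    if (!matcher.find()) {
      throw new IllegalArgumentException("Not a serialized cache reference: " + serializedReference);
    }
    return new CacheReference(matcher.group(1));
  }

Renaming isSerializedReference to canDeserialize would pair it more obviously with deserialize.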
  public static final String IN_MEMORY_DB = null; // null means the db is memory only
  private SqlStore sqlStore;

  @Before
  public void setUp() {
    ApolloSqlHelper apolloSqlHelper = ApolloSqlHelper.create(InstrumentationRegistry.getTargetContext(),
        IN_MEMORY_DB);
-   sqlStore = SqlStore.create(apolloSqlHelper, FieldsAdapter.create(new Moshi.Builder().build()));
+   sqlStore = SqlStore.create(apolloSqlHelper, FieldsAdapter.create());
Any reason for this change? I'd prefer to be able to use one Moshi instance for the whole app, Apollo or otherwise.
I might be missing something about how Moshi works, but it seems dangerous to register our BigDecimal converter on a global Moshi instance, in case there is an existing converter. (Currently, it seems we do not allow a user to pass in a Moshi instance to Apollo anyway.)
We could register these converters on Apollo's Moshi instance in ApolloClient. But that Moshi instance seems primarily concerned with serializing variable objects, and initializing cache serializers there would take some thought about how to do it properly without spaghetti dependencies in the ApolloClient builder.
My thinking was from a performance standpoint. Moshi caches adapters, but at a per-instance level, so if I have the same date converter on both the app and the Apollo client, I'd prefer not to have to cache both. Currently our app works with one instance of Gson. It doesn't have to be solved now; I can do a PR if it becomes an issue.
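To make the trade-off concrete, registering the cache's BigDecimal adapter on a single shared Moshi instance would look roughly like the sketch below; whether Apollo should accept such an instance is exactly the open question in this thread.

  // Sketch only: a BigDecimal adapter registered on one shared Moshi instance.
  // If the app has already registered its own BigDecimal adapter on this instance,
  // registration order decides which one wins, which is the collision risk noted above.
  Moshi moshi = new Moshi.Builder()
      .add(BigDecimal.class, new JsonAdapter<BigDecimal>() {
        @Override public BigDecimal fromJson(JsonReader reader) throws IOException {
          return new BigDecimal(reader.nextString());
        }

        @Override public void toJson(JsonWriter writer, BigDecimal value) throws IOException {
          writer.value(value.toString());
        }
      })
      .build();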
  public Map<String, Object> fromJson(String fields) throws IOException {
    final BufferedSource bufferSource = Okio.buffer(Okio.source(new ByteArrayInputStream(fields.getBytes())));
    return fromBufferedSource(bufferSource);
Nitpick: it will read better as from(bufferedSource).
  public Map<String, Object> fromBufferedSource(BufferedSource bufferedSource) throws IOException {
    CacheJsonStreamReader jsonStreamReader
        = new CacheJsonStreamReader(new BufferedSourceJsonReader(bufferedSource));
Personally I like APIs with static calls for object creation, like ApolloReader.cacheJsonStream(ApolloReader.bufferedSourceJson(bufferedSource)). That way you can use static imports and it becomes cacheJsonStream(bufferedSourceToJson(source)).
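A rough sketch of how those factories might be declared; ApolloReader and the method names come from the comment above and are purely illustrative.

  // Hypothetical holder for static factory methods, so call sites can use static imports.
  public final class ApolloReader {
    private ApolloReader() {
    }

    public static BufferedSourceJsonReader bufferedSourceJson(BufferedSource source) {
      return new BufferedSourceJsonReader(source);
    }

    public static CacheJsonStreamReader cacheJsonStream(BufferedSourceJsonReader jsonReader) {
      return new CacheJsonStreamReader(jsonReader);
    }
  }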
Nothing major, just a few nitpicks :-)
While doing work on the LRU cache I realized it would be nice to have the record serializer for record size estimation, so I took a crack at closing #265.
Our current network parsing set-up follows the pattern BufferedSource -> Map<String, Object> -> T Data, where BufferedSource represents the JSON response and the transformation to Map<String, Object> is performed by ResponseJsonStreamReader.
When reading from the cache, we want to go from Cache -> Map<String, Object>, where this map must be in nearly the same form as the network response. The difference is that this map replaces objects with CacheReferences.
It seemed bad to duplicate all the code in ResponseJsonStreamReader, so I opened ResponseJsonStreamReader for extension and wrote a very light CacheJsonStreamReader which strongly types CacheReferences instead of parsing them as strings. I don't love relying on inheritance (although it is very lightweight), so @sav007 if you have any other ideas for how to share the code here, please let me know.
Currently, when reading from the SQL store we get a string. This might be bad for the memory concerns that @digitalbuddha had, so I added a fromBufferedSource method to the FieldsAdapter (which we use under the hood in fromJson). But we might need to expand the SqlStore to have some sort of streaming byte array for reading records.
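As a usage sketch of that last point, reading a record's fields from raw bytes rather than a String could look roughly like this; the class names follow this PR, but the exact signatures are assumptions.

  // Hypothetical sketch: feed bytes straight into fromBufferedSource instead of
  // materializing an intermediate String first.
  byte[] recordBytes = "{\"id\":\"1\",\"__typename\":\"Hero\"}".getBytes(Charset.forName("UTF-8"));
  BufferedSource source = Okio.buffer(Okio.source(new ByteArrayInputStream(recordBytes)));
  Map<String, Object> fields = FieldsAdapter.create().fromBufferedSource(source);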