solr indexing desynchronized with riak bucket [JIRA: RIAK-2568] #642
Another issue: I checked via :8093/internal_solr/my-bucket-index/select?q=id:111 on each node, but the three nodes return different results:
node1:
node2:
node3:
There is a strange difference in the _yz_id field.
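For reference, a minimal sketch of this cross-node check, assuming the embedded Solr listens on port 8093 on each node, the index is named my-bucket-index, and the hostnames below are placeholders. It runs the same query against each node's internal Solr core and prints the _yz_id values returned, so stale or duplicated entries are easy to spot (each node's internal core only holds the replicas that node owns, so some per-node variation is expected).

```python
import json
from urllib.parse import quote
from urllib.request import urlopen

NODES = ["node1.example.com", "node2.example.com", "node3.example.com"]  # placeholders
INDEX = "my-bucket-index"
QUERY = "id:111"

def yz_ids(host):
    # Query this node's internal Solr core directly, as in the comment above,
    # and return the _yz_id values of the matching documents.
    url = (f"http://{host}:8093/internal_solr/{INDEX}/select"
           f"?q={quote(QUERY)}&fl=_yz_id,_yz_rk&rows=100&wt=json")
    with urlopen(url, timeout=10) as resp:
        docs = json.load(resp)["response"]["docs"]
    return sorted(d["_yz_id"] for d in docs)

results = {host: yz_ids(host) for host in NODES}
for host, ids in results.items():
    print(host, ids)
```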
Hello @mogadanez, can you tell us which version of Riak you are using, and whether you have AAE enabled? We have recently made several fixes to the 2.0 branch (resulting in the 2.0.7 release), in addition to some feature improvements. One of the bug fixes was around AAE and the detection of differences in hash trees, which may be what you are seeing here. One way to see if you are vulnerable to this bug is to rebuild your YZ AAE trees and see whether these mysterious Solr entries get deleted. Ideally, you should do that on 2.0.7, as there are some subtle bugs in other releases that can cause these differences to go undetected. We are currently working to merge these fixes forward to a later version of Riak.
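A rough sketch (not an official procedure) of how one might script the "rebuild the trees and watch" check described above, assuming riak-admin is on the PATH of a Riak 2.x node with search enabled. The tree expiry itself is left as a manual console step, since the exact call can differ between releases.

```python
import subprocess
import time

def print_search_aae_status():
    # `riak-admin search aae-status` reports exchange and tree-build times
    # for the search (Yokozuna) hash trees on this node.
    out = subprocess.run(["riak-admin", "search", "aae-status"],
                         capture_output=True, text=True, check=True)
    print(out.stdout)

if __name__ == "__main__":
    # Manual step first: expire the Yokozuna AAE trees from the Riak console
    # (e.g. via `riak attach`); the exact expression is release-dependent.
    # Then poll the status until every partition shows a recent build and
    # exchange, and re-run the cross-node Solr query above to see whether
    # the phantom entries were removed.
    for _ in range(10):
        print_search_aae_status()
        time.sleep(60)
```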
2.1.4
I have a bucket type "maps":
'{"props":{"datatype":"map"}}'
At some point a record was "cleared" by sending an operation with removeCounter / removeFlag / removeSet / removeMap / removeRegister for all present fields.
As a result, querying /types/maps/buckets/my-bucket/datatypes/key
returns:
but a search via Solr still returns this record as it was before the clear.
What is the preferred way to prevent and debug such errors?
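One way to detect this kind of KV/Solr divergence is to compare what the datatypes endpoint returns for a key with what Riak Search returns for the same key. A minimal sketch follows; the host, index name, and key are placeholders, and it assumes the standard HTTP API on port 8098 with a search index attached to the maps/my-bucket bucket.

```python
import json
from urllib.error import HTTPError
from urllib.parse import quote
from urllib.request import urlopen

HOST = "http://node1.example.com:8098"   # placeholder
BUCKET_TYPE, BUCKET = "maps", "my-bucket"
INDEX = "my-bucket-index"                # placeholder: the index attached to the bucket

def kv_map(key):
    # Fetch the map via the datatypes endpoint; None means the key is gone,
    # an empty dict means the map exists but has no fields left.
    url = f"{HOST}/types/{BUCKET_TYPE}/buckets/{BUCKET}/datatypes/{quote(key)}"
    try:
        with urlopen(url, timeout=10) as resp:
            return json.load(resp).get("value") or {}
    except HTTPError as err:
        if err.code == 404:
            return None
        raise

def search_docs(key):
    # _yz_rk is the indexed Riak key, so this returns every Solr document
    # that claims to represent this key.
    q = quote(f"_yz_rk:{key}")
    url = f"{HOST}/search/query/{INDEX}?wt=json&rows=10&q={q}"
    with urlopen(url, timeout=10) as resp:
        return json.load(resp)["response"]["docs"]

def check(key):
    value, docs = kv_map(key), search_docs(key)
    if not value and docs:
        print(f"{key}: map is empty/absent in KV but Solr still has {len(docs)} doc(s)")
    else:
        print(f"{key}: KV and Solr agree ({len(docs)} matching doc(s))")

check("key")  # placeholder: the key from the example above
```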