Main Data generation and storing
1. Fetching a readme file.
2. Parsing it for repo (owner/repoName) and company (name) data.
3. Making a GQL request for every company and repo found above.
4. Storing the results in redis as a single JSON.
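Step 2 is the only purely local part of the chain. A minimal sketch of what that parsing could look like, assuming repos appear in the readme as github.com links (the regex and function name here are hypothetical, not from the actual codebase):

```python
import re

# Hypothetical parser: pulls "owner/repoName" pairs out of readme
# links such as https://github.com/owner/repo.
REPO_LINK = re.compile(r"github\.com/([\w.-]+)/([\w.-]+)")

def parse_repos(readme: str) -> list[tuple[str, str]]:
    """Return unique (owner, repoName) pairs in order of appearance."""
    seen: dict[tuple[str, str], None] = {}
    for owner, repo in REPO_LINK.findall(readme):
        seen[(owner, repo)] = None
    return list(seen)
```

Each pair returned here then costs one GQL request in step 3, which is where the volume problem below comes from.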
On every data request:
1. Redis is queried for the data.
2. If redis holds the data, it's returned in the response.
3. Otherwise, the whole "Main Data generation and storing" chain above is triggered, then the resulting data is returned.
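The read path above is a plain cache-aside pattern. A sketch of it, with the redis client stood in by any mapping and the full generation chain injected as a callable (both names are illustrative, not the project's actual API):

```python
import json

def get_data(store, regenerate):
    """Cache-aside read: return the cached JSON blob if present,
    otherwise run the full fetch-parse-GQL chain, cache its result
    as a single JSON string, and return it.
    `store` stands in for the redis client (dict-like get/set),
    `regenerate` for the "Main Data generation and storing" chain."""
    cached = store.get("main-data")
    if cached is not None:
        return json.loads(cached)          # step 2: cache hit
    data = regenerate()                    # step 3: full chain
    store["main-data"] = json.dumps(data)  # stored as one JSON blob
    return data
```

Note that on a cache miss every request pays for the entire chain, which is what makes the rate-limit issue below visible to clients.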
Main issue:
No fine-grained search or diff management, meaning dozens of GQL requests every time instead of merely tracking changes and requesting only the affected/new companies/repos. The rate limit for a standard key can lead to inconsistent data being displayed to the client.
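The diff tracking described here could be as small as comparing the slug set parsed from the current readme against the set already cached. A hypothetical sketch (set-based, assuming repos are tracked as "owner/repoName" strings):

```python
def diff_repos(cached: set[str], parsed: set[str]) -> tuple[set[str], set[str]]:
    """Return (new_or_changed, removed) relative to the cached set.
    Only `new_or_changed` entries need fresh GQL requests; `removed`
    entries are dropped from the cache; everything else is reused."""
    return parsed - cached, cached - parsed
```

With this, a readme edit that touches one repo costs one GQL request instead of dozens, keeping well inside the standard key's rate limit.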
Another major issue:
The source readme needs to be optimized to avoid bad GQL calls.
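Until the readme itself is cleaned up, a defensive filter on the parser's output could keep malformed entries from turning into failing GQL calls. A sketch, assuming "owner/repoName" slugs (the validation rule here is an assumption, not GitHub's full naming spec):

```python
import re

# Hypothetical slug check: exactly one "/" separating two non-empty
# segments of word characters, dots, or hyphens.
VALID_SLUG = re.compile(r"[\w.-]+/[\w.-]+")

def valid_slugs(entries: list[str]) -> list[str]:
    """Drop entries that would produce a bad GQL call."""
    return [e for e in entries if VALID_SLUG.fullmatch(e)]
```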