Added /v1/filing/{period_name} endpoint, corresponding repo and pytests #63
Conversation
if missing_filing_leis:
    for lei in missing_filing_leis:
        filing_dao = await upsert_filing(
            session,
            FilingDTO(
                lei=lei,
                tasks=[],
                filing_period=filing_period[0].id,
                institution_snapshot_id="v1",  # TODO: add function to get this from user-fi-api
            ),
        )
        filings.append(filing_dao)
Not sure how I feel about creating filings just by the act of a user retrieving filings for the period; let's bring it up during the dev touch point.
Thought the same, but there's the whole "keep the UI dumb" idea. From all our discussions, the UI is just going to retrieve filings and start doing actions. I'd definitely prefer if the UI had to push a button that says "Start filing" after selecting a specific period, and that's our cue to create and return one. If everyone is good with that, then yay.
OK, removed the creation of the "missing" FilingDAO. Wrote #64 to refactor based on the discussion of the lei + period_name PKs.
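For reference, a minimal sketch of the explicit "Start filing" alternative discussed above, assuming a FastAPI-style router; `get_session` and `get_filing_period` are hypothetical stand-ins for the repo's session dependency and period lookup, while `upsert_filing` and `FilingDTO` come from the diff:

```python
# Sketch only: explicit filing creation instead of create-on-read.
from fastapi import APIRouter, Depends
from sqlalchemy.ext.asyncio import AsyncSession

from entities.models import FilingDTO  # assumed import path
from entities.repos import upsert_filing  # from the diff above
from routers.dependencies import get_session, get_filing_period  # hypothetical helpers

router = APIRouter()


@router.post("/v1/filing/{lei}/{period_name}", status_code=201)
async def start_filing(lei: str, period_name: str, session: AsyncSession = Depends(get_session)):
    # GET never creates rows; the UI's "Start filing" button posts here instead.
    period = await get_filing_period(session, period_name)
    return await upsert_filing(
        session,
        FilingDTO(
            lei=lei,
            tasks=[],
            filing_period=period.id,
            institution_snapshot_id="v1",  # the TODO from the diff still applies
        ),
    )
```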
# Should be DTOs, but hey, it's python
if copy_data["id"] is not None and "_sa_instance_state" in copy_data:
    del copy_data["_sa_instance_state"]
new_dao = type(**copy_data)
Let's rename the T var from `type` to something else; `type()` is a built-in Python function.
Changed to `table_obj`.
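With the rename applied, the helper might read roughly like this (a sketch: `copy_dao` and the `dao.__dict__` source of `copy_data` are assumptions, not the repo's actual code):

```python
from copy import deepcopy
from typing import Type, TypeVar

T = TypeVar("T")


def copy_dao(table_obj: Type[T], dao: T) -> T:  # hypothetical helper name
    copy_data = deepcopy(dao.__dict__)  # assumed source of copy_data
    # Strip SQLAlchemy's instance-state bookkeeping before re-instantiating.
    if copy_data.get("id") is not None and "_sa_instance_state" in copy_data:
        del copy_data["_sa_instance_state"]
    return table_obj(**copy_data)  # no longer shadows the built-in type()
```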
return new_dao
async def query_helper(session: AsyncSession, type: T, column_name: str = None, value: Any = None) -> List[T]:
Same with the `type` here; let's replace all the `type` vars with something else.
Changed to `table_obj`.
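Applied to the signature above, the rename might look like this (a sketch; the body is inferred, assuming SQLAlchemy 2.x-style `select`, and is not copied from the repo):

```python
from typing import Any, List, Type, TypeVar

from sqlalchemy import select
from sqlalchemy.ext.asyncio import AsyncSession

T = TypeVar("T")


async def query_helper(
    session: AsyncSession,
    table_obj: Type[T],
    column_name: str = None,
    value: Any = None,
) -> List[T]:
    # `table_obj` replaces the old `type` parameter so the built-in isn't shadowed.
    stmt = select(table_obj)
    if column_name is not None:
        stmt = stmt.where(getattr(table_obj, column_name) == value)
    return list((await session.scalars(stmt)).all())
```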
result = deepcopy(result)
if result:
    await populate_missing_tasks(session, result)
Maybe a little pedantic, but the `deepcopy` call can probably happen within the if statement. Another thing: if we can, we'd like to avoid side effects. This could be as simple as the `populate_missing_tasks` method returning the object, so `result = await populate_missing....()`. Last bit, just checking the method: the `filing_tasks` retrieval can probably be cached, since there shouldn't be frequent changes (if any) to that table, so you'd abstract that portion of code out into a separate function. I know SQLAlchemy would cache the query anyway, but `async_lru` would let us cache it without hitting the SQLAlchemy code.
Updated based on our conversation (moving the `deepcopy` and returning the new list), and also pulled in `async_lru`. Ran some tests over 200 iterations: without the decorator each fetch was taking on the order of 1.5 milliseconds; with the decorator, the first fetch took that long, but subsequent fetches took about 25 microseconds. So thanks for introducing me to LRU caching; I wasn't aware of it, but it's definitely something to keep in mind.
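For anyone following along, the pattern described here is roughly the following (a sketch: `FilingTaskDAO` and the import path are assumed, and `populate_missing_tasks` is the method from the snippet above):

```python
from typing import List

from async_lru import alru_cache
from sqlalchemy import select
from sqlalchemy.ext.asyncio import AsyncSession

from entities.models import FilingTaskDAO  # assumed import path


@alru_cache(maxsize=128)
async def fetch_filing_tasks(session: AsyncSession) -> List[FilingTaskDAO]:
    # filing_tasks rarely (if ever) changes, so cache the result in-process.
    # Note: alru_cache keys on its arguments, so cache hits require the same
    # session object (or dropping it from the signature).
    return list((await session.scalars(select(FilingTaskDAO))).all())


async def populate_missing_tasks(session: AsyncSession, filings):
    tasks = await fetch_filing_tasks(session)
    # ... fill in any missing tasks on each filing ...
    return filings  # return the new list instead of mutating in place
```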
LGTM
Closes #64. This is a decent-size refactor, so I wanted to get this out there. Once #63 is approved I'll need to merge it into here and redo a couple of things that were added in that story (query_helper updates, etc.). I've put comments on certain changes below that I think will warrant conversation. This was all based on going with the path /v1/filing/{lei}/{period_name}/ to get filing objects and post a new filing object; later, submissions would live under /v1/filing/{lei}/{period_name}/submissions. Using LEI and period name (referred to as filing_period in the child tables) as a composite primary and foreign key drove the changes here (a rough sketch of that direction follows below).
Closes #53
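As a rough illustration of the composite-key direction described above (a SQLAlchemy 2.x declarative sketch; the table and column names are assumptions, not the repo's actual models):

```python
from sqlalchemy import ForeignKeyConstraint, String
from sqlalchemy.orm import DeclarativeBase, Mapped, mapped_column


class Base(DeclarativeBase):
    pass


class FilingDAO(Base):
    __tablename__ = "filing"
    # One filing per institution (LEI) per period: a composite primary key.
    lei: Mapped[str] = mapped_column(String, primary_key=True)
    filing_period: Mapped[str] = mapped_column(String, primary_key=True)


class SubmissionDAO(Base):
    __tablename__ = "submission"
    id: Mapped[int] = mapped_column(primary_key=True)
    lei: Mapped[str] = mapped_column(String)
    filing_period: Mapped[str] = mapped_column(String)
    # Child tables point back at the parent filing via the same pair of columns.
    __table_args__ = (
        ForeignKeyConstraint(
            ["lei", "filing_period"],
            ["filing.lei", "filing.filing_period"],
        ),
    )
```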