
Regenerate models based off of R4 (4.0.1) #149

Merged (1 commit, Jul 15, 2024)
Conversation

@mikix (Contributor) commented Jul 15, 2024

Also:

  • Fixed the pytest config so that tests can pass again.
  • Bumped the fhirclient version to 4.2.0.
  • Fixed up some docs.

Comment on lines -4 to +5
- # Generated from FHIR 4.0.0-a53ec6ee1b (http://hl7.org/fhir/StructureDefinition/Account) on 2019-05-07.
- # 2019, SMART Health IT.
+ # Generated from FHIR 4.0.1-9346c8cc45 (http://hl7.org/fhir/StructureDefinition/Account) on 2024-07-15.
+ # 2024, SMART Health IT.
@mikix (Contributor, Author) commented Jul 15, 2024

OK, so all the files in fhirclient/models/ are "owned" by fhir-parser. I changed the templates (see above) so that the actual models themselves only show real 4.0.0 -> 4.0.1 changes, not artifacts of changes in the generator.

I've gone through all the models myself and confirmed that you only see:

  • Comment changes like this
  • Comment changes on some properties (usually enums) that list more values than previously.

For the test files, however, I didn't find as easy a way to avoid churn. But they were less important, so I didn't try very hard. Sorry. Hopefully they don't need close review.

We're still not running the tests during any CI - I hope to fix that later, but wanted to get on a known-good model-generation baseline first. The old generated tests were wonky.

@mikix (Contributor, Author) commented Jul 15, 2024

This was so old that I figured I'd just delete it and start over fresh with a new CI test approach later.

I did confirm that pytest passes if you manually run it and provide the downloaded example files.

@mikix mikix marked this pull request as ready for review July 15, 2024 12:35
@dogversioning (Contributor) left a comment
I'm taking your word at face value on the generated models - if you want me to side by side them against the FHIR spec, lemme know.

@mikix (Contributor, Author) commented Jul 15, 2024

> I'm taking your word at face value on the generated models - if you want me to side by side them against the FHIR spec, lemme know.

Yes, I'm confident. I just went through them again with git diff -- fhirclient/models ':(exclude)*_tests.py'.

HOWEVER, I did forget to mention one actual change, which shows up in a few classes. In a few places in the spec, a class has a "choice" field that can be one of several types (like valueDateTime and valueBoolean). And sometimes that choice is "any element type" -- see the defaultValue[x] and fixed[x] fields in ElementDefinition.

In 4.0.1, the element type "Meta" got added to all those choice fields. So if you look at elementdefinition.py in this PR, you'll see an example of that. It's all just additive. And it does seem to be correct per the spec.
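To illustrate the pattern being described, here is a minimal, hypothetical sketch of how a FHIR "choice" field (value[x]) works: exactly one of the typed variants may be populated, and 4.0.1 additively extends the set of allowed types with Meta. This is a simplified stand-in, not fhirclient's actual generated code, and the type lists below are abbreviated for the example:

```python
# Simplified, hypothetical sketch of a FHIR "choice" field (value[x]).
# The real spec lists many more allowed types; this subset is for illustration.
ALLOWED_VALUE_TYPES_R4_0_0 = {"valueBoolean", "valueDateTime", "valueString"}

# 4.0.1 additively extended open choice fields with the Meta element type:
ALLOWED_VALUE_TYPES_R4_0_1 = ALLOWED_VALUE_TYPES_R4_0_0 | {"valueMeta"}


def pick_choice_value(element: dict, allowed: set) -> tuple:
    """Return the (name, value) pair of the single populated choice variant.

    FHIR requires exactly one value[x] variant to be set on a choice field.
    """
    populated = [(k, v) for k, v in element.items()
                 if k in allowed and v is not None]
    if len(populated) != 1:
        raise ValueError("a choice field must have exactly one value[x] set")
    return populated[0]


# A valueMeta variant is only recognized under the 4.0.1 type list:
elem = {"valueMeta": {"versionId": "1"}}
print(pick_choice_value(elem, ALLOWED_VALUE_TYPES_R4_0_1))
# → ('valueMeta', {'versionId': '1'})
```

Under the 4.0.0 list the same element would raise, since valueMeta was not yet an allowed variant — which is why the generated-model diff in this PR is purely additive.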

@mikix mikix merged commit 251ee81 into main Jul 15, 2024
@mikix mikix deleted the mikix/4.0.1 branch July 24, 2024 16:42