Create Persian data process queries #400
Comments
Hey @catreedle, can I work with you on this?
Hey @VNW22, sure can! :)
Keep in mind that we'll likely need a language filter for Farsi via "fa", as is done in the Hindustani queries :) Not 100% sure, but worth checking.
I can do verbs and prepositions.
Okay @VNW22, I'll start on nouns and adjectives :)
Hi @andrewtavis, how do we decide/check whether a language needs a filter?
Check whether the forms have more than one language on them :) For, say, a Hindustani lexeme, you can see that all the forms have hi and ur equivalents. If there's nothing like that for Persian, then maybe we don't have to worry 😊
I'll work on adverbs :) @VNW22
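The check described above can be sketched directly in the Wikidata Query Service. This is a rough illustration rather than a query from the repo; the QIDs used here (Q9168 for Persian, Q1084 for noun) are the standard Wikidata items, but verify them before relying on the results:

```sparql
# Hypothetical check: list the language tags that appear on the
# representations of Persian noun forms. If only "fa" comes back,
# a language filter shouldn't be needed.
SELECT DISTINCT (LANG(?rep) AS ?langTag) WHERE {
  ?lexeme dct:language wd:Q9168 ;          # Persian
    wikibase:lexicalCategory wd:Q1084 ;    # noun
    ontolex:lexicalForm ?form .
  ?form ontolex:representation ?rep .
}
```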
* Add Persian query adjectives #400
* Fix comment language QID
* Remove filter fa for Persian query
* Persian adverbs query
* Minor query formatting

Co-authored-by: Andrew Tavis McAllister <[email protected]>
Closed via the PRs above :) Thanks all for the great work!
Terms
Languages
Persian
Description
This issue would look into expanding the src/scribe_data/language_data_extraction/Persian files with as much data as possible from the current data on Wikidata. We can reuse the code for getting data for other languages, and from there check the Persian data on Wikidata for which conjugations are available. We would likely need to filter for fa for Farsi. We can then expand the query with optional selections of certain forms, as is done in other SPARQL queries. The query can be tried on the Wikidata Query Service UI during development :)
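As a starting point, a minimal sketch of such a query is below, assuming the usual Wikidata lexeme predicates and QIDs (Q9168 for Persian, Q1084 for noun, Q146786 for plural). It is illustrative only, not code from the repo; note that the merged PRs ultimately removed the fa filter for Persian:

```sparql
# Hypothetical skeleton of a Persian nouns query with an "fa"
# language filter and an optional form selection, modeled on the
# structure used in other Scribe-Data SPARQL queries.
SELECT ?noun ?plural WHERE {
  ?lexeme dct:language wd:Q9168 ;          # Persian
    wikibase:lexicalCategory wd:Q1084 ;    # noun
    wikibase:lemma ?noun .
  FILTER(LANG(?noun) = "fa")

  # Optional selection of a specific form, e.g. the plural.
  OPTIONAL {
    ?lexeme ontolex:lexicalForm ?pluralForm .
    ?pluralForm ontolex:representation ?plural ;
      wikibase:grammaticalFeature wd:Q146786 .   # plural
  }
}
```

Optional blocks like the one above keep lexemes in the results even when a given form is missing, which is why they are preferred over required patterns in these queries.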
Data types to include:
Contribution
Happy to work on this and help anyone interested 😊