+# to activate or cancel the filter
+# option.
+# The default value is: YES.
+# This tag requires that the tag GENERATE_HTML is set to YES.
+
+SEARCHENGINE = YES
+
+# When the SERVER_BASED_SEARCH tag is enabled the search engine will be
+# implemented using a web server instead of a web client using JavaScript. There
+# are two flavors of web server based searching depending on the EXTERNAL_SEARCH
+# setting. When disabled, doxygen will generate a PHP script for searching and
+# an index file used by the script. When EXTERNAL_SEARCH is enabled the indexing
+# and searching needs to be provided by external tools. See the section
+# "External Indexing and Searching" for details.
+# The default value is: NO.
+# This tag requires that the tag SEARCHENGINE is set to YES.
+
+SERVER_BASED_SEARCH = NO
+
+# When EXTERNAL_SEARCH tag is enabled doxygen will no longer generate the PHP
+# script for searching. Instead the search results are written to an XML file
+# which needs to be processed by an external indexer. Doxygen will invoke an
+# external search engine pointed to by the SEARCHENGINE_URL option to obtain the
+# search results.
+#
+# Doxygen ships with an example indexer (doxyindexer) and search engine
+# (doxysearch.cgi) which are based on the open source search engine library
+# Xapian (see:
+# https://xapian.org/).
+#
+# See the section "External Indexing and Searching" for details.
+# The default value is: NO.
+# This tag requires that the tag SEARCHENGINE is set to YES.
+
+EXTERNAL_SEARCH = NO
+
+# The SEARCHENGINE_URL should point to a search engine hosted by a web server
+# which will return the search results when EXTERNAL_SEARCH is enabled.
+#
+# Doxygen ships with an example indexer (doxyindexer) and search engine
+# (doxysearch.cgi) which are based on the open source search engine library
+# Xapian (see:
+# https://xapian.org/). See the section "External Indexing and Searching" for
+# details.
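+# For example, when using the doxysearch.cgi tool shipped with doxygen, an
+# illustrative value (host and path are placeholders) could be:
+# SEARCHENGINE_URL = https://example.org/cgi-bin/doxysearch.cgi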
+# This tag requires that the tag SEARCHENGINE is set to YES.
+
+SEARCHENGINE_URL =
+
+# When SERVER_BASED_SEARCH and EXTERNAL_SEARCH are both enabled the unindexed
+# search data is written to a file for indexing by an external tool. With the
+# SEARCHDATA_FILE tag the name of this file can be specified.
+# The default file is: searchdata.xml.
+# This tag requires that the tag SEARCHENGINE is set to YES.
+
+SEARCHDATA_FILE = searchdata.xml
+
+# When SERVER_BASED_SEARCH and EXTERNAL_SEARCH are both enabled the
+# EXTERNAL_SEARCH_ID tag can be used as an identifier for the project. This is
+# useful in combination with EXTRA_SEARCH_MAPPINGS to search through multiple
+# projects and redirect the results back to the right project.
+# This tag requires that the tag SEARCHENGINE is set to YES.
+
+EXTERNAL_SEARCH_ID =
+
+# The EXTRA_SEARCH_MAPPINGS tag can be used to enable searching through doxygen
+# projects other than the one defined by this configuration file, but that are
+# all added to the same external search index. Each project needs to have a
+# unique id set via EXTERNAL_SEARCH_ID. The search mapping then maps the id of
+# each project to a relative location where the documentation can be found. The
+# format is:
+# EXTRA_SEARCH_MAPPINGS = tagname1=loc1 tagname2=loc2 ...
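+# For example (project ids and locations are illustrative):
+# EXTRA_SEARCH_MAPPINGS = projectA=../projectA/html projectB=../projectB/html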
+# This tag requires that the tag SEARCHENGINE is set to YES.
+
+EXTRA_SEARCH_MAPPINGS =
+
+#---------------------------------------------------------------------------
+# Configuration options related to the LaTeX output
+#---------------------------------------------------------------------------
+
+# If the GENERATE_LATEX tag is set to YES, doxygen will generate LaTeX output.
+# The default value is: YES.
+
+GENERATE_LATEX = NO
+
+# The LATEX_OUTPUT tag is used to specify where the LaTeX docs will be put. If a
+# relative path is entered the value of OUTPUT_DIRECTORY will be put in front of
+# it.
+# The default directory is: latex.
+# This tag requires that the tag GENERATE_LATEX is set to YES.
+
+LATEX_OUTPUT = latex
+
+# The LATEX_CMD_NAME tag can be used to specify the LaTeX command name to be
+# invoked.
+#
+# Note that when not enabling USE_PDFLATEX the default is latex, and when
+# enabling USE_PDFLATEX the default is pdflatex; in the latter case, if latex
+# is chosen it is overridden by pdflatex. For specific output languages the
+# default may have been set differently; this depends on the implementation of
+# the output language.
+# This tag requires that the tag GENERATE_LATEX is set to YES.
+
+LATEX_CMD_NAME =
+
+# The MAKEINDEX_CMD_NAME tag can be used to specify the command name to generate
+# index for LaTeX.
+# Note: This tag is used in the Makefile / make.bat.
+# See also: LATEX_MAKEINDEX_CMD for the part in the generated output file
+# (.tex).
+# The default file is: makeindex.
+# This tag requires that the tag GENERATE_LATEX is set to YES.
+
+MAKEINDEX_CMD_NAME = makeindex
+
+# The LATEX_MAKEINDEX_CMD tag can be used to specify the command name to
+# generate index for LaTeX. In case there is no backslash (\) as first character
+# it will be automatically added in the LaTeX code.
+# Note: This tag is used in the generated output file (.tex).
+# See also: MAKEINDEX_CMD_NAME for the part in the Makefile / make.bat.
+# The default value is: makeindex.
+# This tag requires that the tag GENERATE_LATEX is set to YES.
+
+LATEX_MAKEINDEX_CMD = makeindex
+
+# If the COMPACT_LATEX tag is set to YES, doxygen generates more compact LaTeX
+# documents. This may be useful for small projects and may help to save some
+# trees in general.
+# The default value is: NO.
+# This tag requires that the tag GENERATE_LATEX is set to YES.
+
+COMPACT_LATEX = NO
+
+# The PAPER_TYPE tag can be used to set the paper type that is used by the
+# printer.
+# Possible values are: a4 (210 x 297 mm), letter (8.5 x 11 inches), legal (8.5 x
+# 14 inches) and executive (7.25 x 10.5 inches).
+# The default value is: a4.
+# This tag requires that the tag GENERATE_LATEX is set to YES.
+
+PAPER_TYPE = a4
+
+# The EXTRA_PACKAGES tag can be used to specify one or more LaTeX package names
+# that should be included in the LaTeX output. The package can be specified just
+# by its name or with the correct syntax as to be used with the LaTeX
+# \usepackage command. To get the times font for instance you can specify:
+# EXTRA_PACKAGES=times or EXTRA_PACKAGES={times}
+# To use the option intlimits with the amsmath package you can specify:
+# EXTRA_PACKAGES=[intlimits]{amsmath}
+# If left blank no extra packages will be included.
+# This tag requires that the tag GENERATE_LATEX is set to YES.
+
+EXTRA_PACKAGES =
+
+# The LATEX_HEADER tag can be used to specify a user-defined LaTeX header for
+# the generated LaTeX document. The header should contain everything until the
+# first chapter. If it is left blank doxygen will generate a standard header. It
+# is highly recommended to start with a default header using
+# doxygen -w latex new_header.tex new_footer.tex new_stylesheet.sty
+# and then modify the file new_header.tex. See also section "Doxygen usage" for
+# information on how to generate the default header that doxygen normally uses.
+#
+# Note: Only use a user-defined header if you know what you are doing!
+# Note: The header is subject to change so you typically have to regenerate the
+# default header when upgrading to a newer version of doxygen. Certain markers
+# and block names have a special meaning inside the header (and footer); for a
+# description of the possible markers and block names see the documentation.
+# This tag requires that the tag GENERATE_LATEX is set to YES.
+
+LATEX_HEADER =
+
+# The LATEX_FOOTER tag can be used to specify a user-defined LaTeX footer for
+# the generated LaTeX document. The footer should contain everything after the
+# last chapter. If it is left blank doxygen will generate a standard footer. See
+# LATEX_HEADER for more information on how to generate a default footer and what
+# special commands can be used inside the footer. See also section "Doxygen
+# usage" for information on how to generate the default footer that doxygen
+# normally uses. Note: Only use a user-defined footer if you know what you are
+# doing!
+# This tag requires that the tag GENERATE_LATEX is set to YES.
+
+LATEX_FOOTER =
+
+# The LATEX_EXTRA_STYLESHEET tag can be used to specify additional user-defined
+# LaTeX style sheets that are included after the standard style sheets created
+# by doxygen. Using this option one can overrule certain style aspects. Doxygen
+# will copy the style sheet files to the output directory.
+# Note: The order of the extra style sheet files is of importance (e.g. the last
+# style sheet in the list overrules the setting of the previous ones in the
+# list).
+# This tag requires that the tag GENERATE_LATEX is set to YES.
+
+LATEX_EXTRA_STYLESHEET =
+
+# The LATEX_EXTRA_FILES tag can be used to specify one or more extra images or
+# other source files which should be copied to the LATEX_OUTPUT output
+# directory. Note that the files will be copied as-is; there are no commands or
+# markers available.
+# This tag requires that the tag GENERATE_LATEX is set to YES.
+
+LATEX_EXTRA_FILES =
+
+# If the PDF_HYPERLINKS tag is set to YES, the LaTeX that is generated is
+# prepared for conversion to PDF (using ps2pdf or pdflatex). The PDF file will
+# contain links (just like the HTML output) instead of page references. This
+# makes the output suitable for online browsing using a PDF viewer.
+# The default value is: YES.
+# This tag requires that the tag GENERATE_LATEX is set to YES.
+
+PDF_HYPERLINKS = YES
+
+# If the USE_PDFLATEX tag is set to YES, doxygen will use the engine as
+# specified with LATEX_CMD_NAME to generate the PDF file directly from the LaTeX
+# files. Set this option to YES to get higher quality PDF documentation.
+#
+# See also section LATEX_CMD_NAME for selecting the engine.
+# The default value is: YES.
+# This tag requires that the tag GENERATE_LATEX is set to YES.
+
+USE_PDFLATEX = YES
+
+# If the LATEX_BATCHMODE tag is set to YES, doxygen will add the \batchmode
+# command to the generated LaTeX files. This will instruct LaTeX to keep running
+# if errors occur, instead of asking the user for help.
+# The default value is: NO.
+# This tag requires that the tag GENERATE_LATEX is set to YES.
+
+LATEX_BATCHMODE = NO
+
+# If the LATEX_HIDE_INDICES tag is set to YES then doxygen will not include the
+# index chapters (such as File Index, Compound Index, etc.) in the output.
+# The default value is: NO.
+# This tag requires that the tag GENERATE_LATEX is set to YES.
+
+LATEX_HIDE_INDICES = NO
+
+# The LATEX_BIB_STYLE tag can be used to specify the style to use for the
+# bibliography, e.g. plainnat, or ieeetr. See
+# https://en.wikipedia.org/wiki/BibTeX and \cite for more info.
+# The default value is: plain.
+# This tag requires that the tag GENERATE_LATEX is set to YES.
+
+LATEX_BIB_STYLE = plain
+
+# If the LATEX_TIMESTAMP tag is set to YES then the footer of each generated
+# page will contain the date and time when the page was generated. Setting this
+# to NO can help when comparing the output of multiple runs.
+# The default value is: NO.
+# This tag requires that the tag GENERATE_LATEX is set to YES.
+
+LATEX_TIMESTAMP = NO
+
+# The LATEX_EMOJI_DIRECTORY tag is used to specify the (relative or absolute)
+# path from which the emoji images will be read. If a relative path is entered,
+# it will be relative to the LATEX_OUTPUT directory. If left blank the
+# LATEX_OUTPUT directory will be used.
+# This tag requires that the tag GENERATE_LATEX is set to YES.
+
+LATEX_EMOJI_DIRECTORY =
+
+#---------------------------------------------------------------------------
+# Configuration options related to the RTF output
+#---------------------------------------------------------------------------
+
+# If the GENERATE_RTF tag is set to YES, doxygen will generate RTF output. The
+# RTF output is optimized for Word 97 and may not look too pretty with other RTF
+# readers/editors.
+# The default value is: NO.
+
+GENERATE_RTF = NO
+
+# The RTF_OUTPUT tag is used to specify where the RTF docs will be put. If a
+# relative path is entered the value of OUTPUT_DIRECTORY will be put in front of
+# it.
+# The default directory is: rtf.
+# This tag requires that the tag GENERATE_RTF is set to YES.
+
+RTF_OUTPUT = rtf
+
+# If the COMPACT_RTF tag is set to YES, doxygen generates more compact RTF
+# documents. This may be useful for small projects and may help to save some
+# trees in general.
+# The default value is: NO.
+# This tag requires that the tag GENERATE_RTF is set to YES.
+
+COMPACT_RTF = NO
+
+# If the RTF_HYPERLINKS tag is set to YES, the RTF that is generated will
+# contain hyperlink fields. The RTF file will contain links (just like the HTML
+# output) instead of page references. This makes the output suitable for online
+# browsing using Word or some other Word compatible readers that support those
+# fields.
+#
+# Note: WordPad (write) and others do not support links.
+# The default value is: NO.
+# This tag requires that the tag GENERATE_RTF is set to YES.
+
+RTF_HYPERLINKS = NO
+
+# Load stylesheet definitions from file. Syntax is similar to doxygen's
+# configuration file, i.e. a series of assignments. You only have to provide
+# replacements; missing definitions are set to their default value.
+#
+# See also section "Doxygen usage" for information on how to generate the
+# default style sheet that doxygen normally uses.
+# This tag requires that the tag GENERATE_RTF is set to YES.
+
+RTF_STYLESHEET_FILE =
+
+# Set optional variables used in the generation of an RTF document. Syntax is
+# similar to doxygen's configuration file. A template extensions file can be
+# generated using doxygen -e rtf extensionFile.
+# This tag requires that the tag GENERATE_RTF is set to YES.
+
+RTF_EXTENSIONS_FILE =
+
+#---------------------------------------------------------------------------
+# Configuration options related to the man page output
+#---------------------------------------------------------------------------
+
+# If the GENERATE_MAN tag is set to YES, doxygen will generate man pages for
+# classes and files.
+# The default value is: NO.
+
+GENERATE_MAN = NO
+
+# The MAN_OUTPUT tag is used to specify where the man pages will be put. If a
+# relative path is entered the value of OUTPUT_DIRECTORY will be put in front of
+# it. A directory man3 will be created inside the directory specified by
+# MAN_OUTPUT.
+# The default directory is: man.
+# This tag requires that the tag GENERATE_MAN is set to YES.
+
+MAN_OUTPUT = man
+
+# The MAN_EXTENSION tag determines the extension that is added to the generated
+# man pages. In case the manual section does not start with a number, the number
+# 3 is prepended. The dot (.) at the beginning of the MAN_EXTENSION tag is
+# optional.
+# The default value is: .3.
+# This tag requires that the tag GENERATE_MAN is set to YES.
+
+MAN_EXTENSION = .3
+
+# The MAN_SUBDIR tag determines the name of the directory created within
+# MAN_OUTPUT in which the man pages are placed. It defaults to man followed by
+# MAN_EXTENSION with the initial . removed.
+# This tag requires that the tag GENERATE_MAN is set to YES.
+
+MAN_SUBDIR =
+
+# If the MAN_LINKS tag is set to YES and doxygen generates man output, then it
+# will generate one additional man file for each entity documented in the real
+# man page(s). These additional files only source the real man page, but without
+# them the man command would be unable to find the correct page.
+# The default value is: NO.
+# This tag requires that the tag GENERATE_MAN is set to YES.
+
+MAN_LINKS = NO
+
+#---------------------------------------------------------------------------
+# Configuration options related to the XML output
+#---------------------------------------------------------------------------
+
+# If the GENERATE_XML tag is set to YES, doxygen will generate an XML file that
+# captures the structure of the code including all documentation.
+# The default value is: NO.
+
+GENERATE_XML = NO
+
+# The XML_OUTPUT tag is used to specify where the XML pages will be put. If a
+# relative path is entered the value of OUTPUT_DIRECTORY will be put in front of
+# it.
+# The default directory is: xml.
+# This tag requires that the tag GENERATE_XML is set to YES.
+
+XML_OUTPUT = xml
+
+# If the XML_PROGRAMLISTING tag is set to YES, doxygen will dump the program
+# listings (including syntax highlighting and cross-referencing information) to
+# the XML output. Note that enabling this will significantly increase the size
+# of the XML output.
+# The default value is: YES.
+# This tag requires that the tag GENERATE_XML is set to YES.
+
+XML_PROGRAMLISTING = YES
+
+# If the XML_NS_MEMB_FILE_SCOPE tag is set to YES, doxygen will include
+# namespace members in file scope as well, matching the HTML output.
+# The default value is: NO.
+# This tag requires that the tag GENERATE_XML is set to YES.
+
+XML_NS_MEMB_FILE_SCOPE = NO
+
+#---------------------------------------------------------------------------
+# Configuration options related to the DOCBOOK output
+#---------------------------------------------------------------------------
+
+# If the GENERATE_DOCBOOK tag is set to YES, doxygen will generate Docbook files
+# that can be used to generate PDF.
+# The default value is: NO.
+
+GENERATE_DOCBOOK = NO
+
+# The DOCBOOK_OUTPUT tag is used to specify where the Docbook pages will be put.
+# If a relative path is entered the value of OUTPUT_DIRECTORY will be put in
+# front of it.
+# The default directory is: docbook.
+# This tag requires that the tag GENERATE_DOCBOOK is set to YES.
+
+DOCBOOK_OUTPUT = docbook
+
+#---------------------------------------------------------------------------
+# Configuration options for the AutoGen Definitions output
+#---------------------------------------------------------------------------
+
+# If the GENERATE_AUTOGEN_DEF tag is set to YES, doxygen will generate an
+# AutoGen Definitions (see http://autogen.sourceforge.net/) file that captures
+# the structure of the code including all documentation. Note that this feature
+# is still experimental and incomplete at the moment.
+# The default value is: NO.
+
+GENERATE_AUTOGEN_DEF = NO
+
+#---------------------------------------------------------------------------
+# Configuration options related to Sqlite3 output
+#---------------------------------------------------------------------------
+
+#---------------------------------------------------------------------------
+# Configuration options related to the Perl module output
+#---------------------------------------------------------------------------
+
+# If the GENERATE_PERLMOD tag is set to YES, doxygen will generate a Perl module
+# file that captures the structure of the code including all documentation.
+#
+# Note that this feature is still experimental and incomplete at the moment.
+# The default value is: NO.
+
+GENERATE_PERLMOD = NO
+
+# If the PERLMOD_LATEX tag is set to YES, doxygen will generate the necessary
+# Makefile rules, Perl scripts and LaTeX code to be able to generate PDF and DVI
+# output from the Perl module output.
+# The default value is: NO.
+# This tag requires that the tag GENERATE_PERLMOD is set to YES.
+
+PERLMOD_LATEX = NO
+
+# If the PERLMOD_PRETTY tag is set to YES, the Perl module output will be nicely
+# formatted so it can be parsed by a human reader. This is useful if you want to
+# understand what is going on. On the other hand, if this tag is set to NO, the
+# size of the Perl module output will be much smaller and Perl will parse it
+# just the same.
+# The default value is: YES.
+# This tag requires that the tag GENERATE_PERLMOD is set to YES.
+
+PERLMOD_PRETTY = YES
+
+# The names of the make variables in the generated doxyrules.make file are
+# prefixed with the string contained in PERLMOD_MAKEVAR_PREFIX. This is useful
+# so different doxyrules.make files included by the same Makefile don't
+# overwrite each other's variables.
+# This tag requires that the tag GENERATE_PERLMOD is set to YES.
+
+PERLMOD_MAKEVAR_PREFIX =
+
+#---------------------------------------------------------------------------
+# Configuration options related to the preprocessor
+#---------------------------------------------------------------------------
+
+# If the ENABLE_PREPROCESSING tag is set to YES, doxygen will evaluate all
+# C-preprocessor directives found in the sources and include files.
+# The default value is: YES.
+
+ENABLE_PREPROCESSING = YES
+
+# If the MACRO_EXPANSION tag is set to YES, doxygen will expand all macro names
+# in the source code. If set to NO, only conditional compilation will be
+# performed. Macro expansion can be done in a controlled way by setting
+# EXPAND_ONLY_PREDEF to YES.
+# The default value is: NO.
+# This tag requires that the tag ENABLE_PREPROCESSING is set to YES.
+
+MACRO_EXPANSION = NO
+
+# If the EXPAND_ONLY_PREDEF and MACRO_EXPANSION tags are both set to YES then
+# the macro expansion is limited to the macros specified with the PREDEFINED and
+# EXPAND_AS_DEFINED tags.
+# The default value is: NO.
+# This tag requires that the tag ENABLE_PREPROCESSING is set to YES.
+
+EXPAND_ONLY_PREDEF = NO
+
+# If the SEARCH_INCLUDES tag is set to YES, the include files in the
+# INCLUDE_PATH will be searched if a #include is found.
+# The default value is: YES.
+# This tag requires that the tag ENABLE_PREPROCESSING is set to YES.
+
+SEARCH_INCLUDES = YES
+
+# The INCLUDE_PATH tag can be used to specify one or more directories that
+# contain include files that are not input files but should be processed by the
+# preprocessor. Note that the INCLUDE_PATH is not recursive, so the setting of
+# RECURSIVE has no effect here.
+# This tag requires that the tag SEARCH_INCLUDES is set to YES.
+
+INCLUDE_PATH =
+
+# You can use the INCLUDE_FILE_PATTERNS tag to specify one or more wildcard
+# patterns (like *.h and *.hpp) to filter out the header-files in the
+# directories. If left blank, the patterns specified with FILE_PATTERNS will be
+# used.
+# This tag requires that the tag ENABLE_PREPROCESSING is set to YES.
+
+INCLUDE_FILE_PATTERNS =
+
+# The PREDEFINED tag can be used to specify one or more macro names that are
+# defined before the preprocessor is started (similar to the -D option of e.g.
+# gcc). The argument of the tag is a list of macros of the form: name or
+# name=definition (no spaces). If the definition and the "=" are omitted, "=1"
+# is assumed. To prevent a macro definition from being undefined via #undef or
+# recursively expanded use the := operator instead of the = operator.
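+# For example (macro names are illustrative):
+# PREDEFINED = DOXYGEN_RUNNING=1 MY_API_EXPORT:=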
+# This tag requires that the tag ENABLE_PREPROCESSING is set to YES.
+
+PREDEFINED =
+
+# If the MACRO_EXPANSION and EXPAND_ONLY_PREDEF tags are set to YES then this
+# tag can be used to specify a list of macro names that should be expanded. The
+# macro definition that is found in the sources will be used. Use the PREDEFINED
+# tag if you want to use a different macro definition that overrules the
+# definition found in the source code.
+# This tag requires that the tag ENABLE_PREPROCESSING is set to YES.
+
+EXPAND_AS_DEFINED =
+
+# If the SKIP_FUNCTION_MACROS tag is set to YES then doxygen's preprocessor will
+# remove all references to function-like macros that are alone on a line, have
+# an all uppercase name, and do not end with a semicolon. Such function macros
+# are typically used for boiler-plate code, and will confuse the parser if not
+# removed.
+# The default value is: YES.
+# This tag requires that the tag ENABLE_PREPROCESSING is set to YES.
+
+SKIP_FUNCTION_MACROS = YES
+
+#---------------------------------------------------------------------------
+# Configuration options related to external references
+#---------------------------------------------------------------------------
+
+# The TAGFILES tag can be used to specify one or more tag files. For each tag
+# file the location of the external documentation should be added. The format of
+# a tag file without this location is as follows:
+# TAGFILES = file1 file2 ...
+# Adding location for the tag files is done as follows:
+# TAGFILES = file1=loc1 "file2 = loc2" ...
+# where loc1 and loc2 can be relative or absolute paths or URLs. See the
+# section "Linking to external documentation" for more information about the use
+# of tag files.
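+# For example (tag file names and locations are illustrative):
+# TAGFILES = core.tag=../core/html "extra.tag = https://example.org/extra"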
+# Note: Each tag file must have a unique name (where the name does NOT include
+# the path). If a tag file is not located in the directory in which doxygen is
+# run, you must also specify the path to the tagfile here.
+
+TAGFILES =
+
+# When a file name is specified after GENERATE_TAGFILE, doxygen will create a
+# tag file that is based on the input files it reads. See section "Linking to
+# external documentation" for more information about the usage of tag files.
+
+GENERATE_TAGFILE =
+
+# If the ALLEXTERNALS tag is set to YES, all external classes will be listed in
+# the class index. If set to NO, only the inherited external classes will be
+# listed.
+# The default value is: NO.
+
+ALLEXTERNALS = NO
+
+# If the EXTERNAL_GROUPS tag is set to YES, all external groups will be listed
+# in the modules index. If set to NO, only the current project's groups will be
+# listed.
+# The default value is: YES.
+
+EXTERNAL_GROUPS = YES
+
+# If the EXTERNAL_PAGES tag is set to YES, all external pages will be listed in
+# the related pages index. If set to NO, only the current project's pages will
+# be listed.
+# The default value is: YES.
+
+EXTERNAL_PAGES = YES
+
+#---------------------------------------------------------------------------
+# Configuration options related to the dot tool
+#---------------------------------------------------------------------------
+
+# You can include diagrams made with dia in doxygen documentation. Doxygen will
+# then run dia to produce the diagram and insert it in the documentation. The
+# DIA_PATH tag allows you to specify the directory where the dia binary resides.
+# If left empty dia is assumed to be found in the default search path.
+
+DIA_PATH =
+
+# If set to YES the inheritance and collaboration graphs will hide inheritance
+# and usage relations if the target is undocumented or is not a class.
+# The default value is: YES.
+
+HIDE_UNDOC_RELATIONS = YES
+
+# If you set the HAVE_DOT tag to YES then doxygen will assume the dot tool is
+# available from the path. This tool is part of Graphviz (see:
+# http://www.graphviz.org/), a graph visualization toolkit from AT&T and Lucent
+# Bell Labs. The other options in this section have no effect if this option is
+# set to NO.
+# The default value is: NO.
+
+HAVE_DOT = YES
+
+# The DOT_NUM_THREADS specifies the number of dot invocations doxygen is allowed
+# to run in parallel. When set to 0 doxygen will base this on the number of
+# processors available in the system. You can set it explicitly to a value
+# larger than 0 to get control over the balance between CPU load and processing
+# speed.
+# Minimum value: 0, maximum value: 32, default value: 0.
+# This tag requires that the tag HAVE_DOT is set to YES.
+
+DOT_NUM_THREADS = 0
+
+# DOT_COMMON_ATTR specifies common attributes for nodes, edges and labels of
+# subgraphs. When you want a differently looking font in the dot files that
+# doxygen generates you can specify fontname, fontcolor and fontsize
+# attributes. For details please see the Graphviz "Node, Edge and Graph
+# Attributes" specification. You need to make sure dot is able to find the
+# font, which can be done by putting it in a standard location or by setting
+# the DOTFONTPATH environment variable or by setting DOT_FONTPATH to the
+# directory containing the font. The default graphviz fontsize is 14.
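+# For example, to use a different font (the font name is illustrative and must
+# be available to dot):
+# DOT_COMMON_ATTR = "fontname=FreeSans,fontsize=12"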
+# The default value is: fontname=Helvetica,fontsize=10.
+# This tag requires that the tag HAVE_DOT is set to YES.
+
+DOT_COMMON_ATTR = "fontname=Helvetica,fontsize=10"
+
+# DOT_EDGE_ATTR is concatenated with DOT_COMMON_ATTR. For a more elegant style
+# you can add 'arrowhead=open, arrowtail=open, arrowsize=0.5'. See the
+# Graphviz "Arrow Shapes" specification for complete documentation on arrow
+# shapes.
+# The default value is: labelfontname=Helvetica,labelfontsize=10.
+# This tag requires that the tag HAVE_DOT is set to YES.
+
+DOT_EDGE_ATTR = "labelfontname=Helvetica,labelfontsize=10"
+
+# DOT_NODE_ATTR is concatenated with DOT_COMMON_ATTR. For a view without boxes
+# around nodes set 'shape=plain' or 'shape=plaintext'. See the Graphviz "Node
+# Shapes" specification for the available shapes.
+# The default value is: shape=box,height=0.2,width=0.4.
+# This tag requires that the tag HAVE_DOT is set to YES.
+
+DOT_NODE_ATTR = "shape=box,height=0.2,width=0.4"
+
+# You can set the path where dot can find the font specified with fontname in
+# DOT_COMMON_ATTR and other dot attributes.
+# This tag requires that the tag HAVE_DOT is set to YES.
+
+DOT_FONTPATH =
+
+# If the CLASS_GRAPH tag is set to YES (or GRAPH) then doxygen will generate a
+# graph for each documented class showing the direct and indirect inheritance
+# relations. In case HAVE_DOT is set as well dot will be used to draw the graph,
+# otherwise the built-in generator will be used. If the CLASS_GRAPH tag is set
+# to TEXT the direct and indirect inheritance relations will be shown as texts /
+# links.
+# Possible values are: NO, YES, TEXT and GRAPH.
+# The default value is: YES.
+
+CLASS_GRAPH = YES
+
+# If the COLLABORATION_GRAPH tag is set to YES then doxygen will generate a
+# graph for each documented class showing the direct and indirect implementation
+# dependencies (inheritance, containment, and class references variables) of the
+# class with other documented classes.
+# The default value is: YES.
+# This tag requires that the tag HAVE_DOT is set to YES.
+
+COLLABORATION_GRAPH = YES
+
+# If the GROUP_GRAPHS tag is set to YES then doxygen will generate a graph for
+# groups, showing the direct groups dependencies. See also the chapter Grouping
+# in the manual.
+# The default value is: YES.
+# This tag requires that the tag HAVE_DOT is set to YES.
+
+GROUP_GRAPHS = YES
+
+# If the UML_LOOK tag is set to YES, doxygen will generate inheritance and
+# collaboration diagrams in a style similar to the OMG's Unified Modeling
+# Language.
+# The default value is: NO.
+# This tag requires that the tag HAVE_DOT is set to YES.
+
+UML_LOOK = YES
+
+# If the UML_LOOK tag is enabled, the fields and methods are shown inside the
+# class node. If there are many fields or methods and many nodes the graph may
+# become too big to be useful. The UML_LIMIT_NUM_FIELDS threshold limits the
+# number of items for each type to make the size more manageable. Set this to 0
+# for no limit. Note that the threshold may be exceeded by 50% before the limit
+# is enforced. So when you set the threshold to 10, up to 15 fields may appear,
+# but if the number exceeds 15, the total amount of fields shown is limited to
+# 10.
+# Minimum value: 0, maximum value: 100, default value: 10.
+# This tag requires that the tag UML_LOOK is set to YES.
+
+UML_LIMIT_NUM_FIELDS = 10
+
+# If the DOT_UML_DETAILS tag is set to NO, doxygen will show attributes and
+# methods without types and arguments in the UML graphs. If the DOT_UML_DETAILS
+# tag is set to YES, doxygen will add type and arguments for attributes and
+# methods in the UML graphs. If the DOT_UML_DETAILS tag is set to NONE, doxygen
+# will not generate fields with class member information in the UML graphs. The
+# class diagrams will look similar to the default class diagrams but using UML
+# notation for the relationships.
+# Possible values are: NO, YES and NONE.
+# The default value is: NO.
+# This tag requires that the tag UML_LOOK is set to YES.
+
+DOT_UML_DETAILS = NO
+
+# The DOT_WRAP_THRESHOLD tag can be used to set the maximum number of characters
+# to display on a single line. If the actual line length exceeds this threshold
+# significantly it will be wrapped across multiple lines. Some heuristics are
+# applied to avoid ugly line breaks.
+# Minimum value: 0, maximum value: 1000, default value: 17.
+# This tag requires that the tag HAVE_DOT is set to YES.
+
+DOT_WRAP_THRESHOLD = 17
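A rough Python sketch of threshold-based label wrapping; `textwrap` stands in for doxygen's own line-break heuristics, which are not specified here:

```python
import textwrap

def wrap_label(label: str, threshold: int = 17) -> list[str]:
    """Leave short labels alone; wrap longer ones near the threshold.
    break_long_words=False keeps identifiers intact on their own line."""
    if len(label) <= threshold:
        return [label]
    return textwrap.wrap(label, width=threshold, break_long_words=False)
```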
+
+# If the TEMPLATE_RELATIONS tag is set to YES then the inheritance and
+# collaboration graphs will show the relations between templates and their
+# instances.
+# The default value is: NO.
+# This tag requires that the tag HAVE_DOT is set to YES.
+
+TEMPLATE_RELATIONS = NO
+
+# If the INCLUDE_GRAPH, ENABLE_PREPROCESSING and SEARCH_INCLUDES tags are set to
+# YES then doxygen will generate a graph for each documented file showing the
+# direct and indirect include dependencies of the file with other documented
+# files.
+# The default value is: YES.
+# This tag requires that the tag HAVE_DOT is set to YES.
+
+INCLUDE_GRAPH = YES
+
+# If the INCLUDED_BY_GRAPH, ENABLE_PREPROCESSING and SEARCH_INCLUDES tags are
+# set to YES then doxygen will generate a graph for each documented file showing
+# the direct and indirect include dependencies of the file with other documented
+# files.
+# The default value is: YES.
+# This tag requires that the tag HAVE_DOT is set to YES.
+
+INCLUDED_BY_GRAPH = YES
+
+# If the CALL_GRAPH tag is set to YES then doxygen will generate a call
+# dependency graph for every global function or class method.
+#
+# Note that enabling this option will significantly increase the time of a run.
+# So in most cases it will be better to enable call graphs for selected
+# functions only using the \callgraph command. Disabling a call graph can be
+# accomplished by means of the command \hidecallgraph.
+# The default value is: NO.
+# This tag requires that the tag HAVE_DOT is set to YES.
+
+CALL_GRAPH = YES
+
+# If the CALLER_GRAPH tag is set to YES then doxygen will generate a caller
+# dependency graph for every global function or class method.
+#
+# Note that enabling this option will significantly increase the time of a run.
+# So in most cases it will be better to enable caller graphs for selected
+# functions only using the \callergraph command. Disabling a caller graph can be
+# accomplished by means of the command \hidecallergraph.
+# The default value is: NO.
+# This tag requires that the tag HAVE_DOT is set to YES.
+
+CALLER_GRAPH = YES
+
+# If the GRAPHICAL_HIERARCHY tag is set to YES then doxygen will show a
+# graphical hierarchy of all classes instead of a textual one.
+# The default value is: YES.
+# This tag requires that the tag HAVE_DOT is set to YES.
+
+GRAPHICAL_HIERARCHY = YES
+
+# If the DIRECTORY_GRAPH tag is set to YES then doxygen will show the
+# dependencies a directory has on other directories in a graphical way. The
+# dependency relations are determined by the #include relations between the
+# files in the directories.
+# The default value is: YES.
+# This tag requires that the tag HAVE_DOT is set to YES.
+
+DIRECTORY_GRAPH = YES
+
+# The DIR_GRAPH_MAX_DEPTH tag can be used to limit the maximum number of levels
+# of child directories generated in directory dependency graphs by dot.
+# Minimum value: 1, maximum value: 25, default value: 1.
+# This tag requires that the tag DIRECTORY_GRAPH is set to YES.
+
+DIR_GRAPH_MAX_DEPTH = 1
+
+# The DOT_IMAGE_FORMAT tag can be used to set the image format of the images
+# generated by dot. For an explanation of the image formats see the section
+# output formats in the documentation of the dot tool (Graphviz (see:
+# http://www.graphviz.org/)).
+# Note: If you choose svg you need to set HTML_FILE_EXTENSION to xhtml in order
+# to make the SVG files visible in IE 9+ (other browsers do not have this
+# requirement).
+# Possible values are: png, jpg, gif, svg, png:gd, png:gd:gd, png:cairo,
+# png:cairo:gd, png:cairo:cairo, png:cairo:gdiplus, png:gdiplus and
+# png:gdiplus:gdiplus.
+# The default value is: png.
+# This tag requires that the tag HAVE_DOT is set to YES.
+
+DOT_IMAGE_FORMAT = png
+
+# If DOT_IMAGE_FORMAT is set to svg, then this option can be set to YES to
+# enable generation of interactive SVG images that allow zooming and panning.
+#
+# Note that this requires a modern browser other than Internet Explorer. Tested
+# and working are Firefox, Chrome, Safari, and Opera.
+# Note: For IE 9+ you need to set HTML_FILE_EXTENSION to xhtml in order to make
+# the SVG files visible. Older versions of IE do not have SVG support.
+# The default value is: NO.
+# This tag requires that the tag HAVE_DOT is set to YES.
+
+INTERACTIVE_SVG = NO
+
+# The DOT_PATH tag can be used to specify the path where the dot tool can be
+# found. If left blank, it is assumed the dot tool can be found in the path.
+# This tag requires that the tag HAVE_DOT is set to YES.
+
+DOT_PATH =
+
+# The DOTFILE_DIRS tag can be used to specify one or more directories that
+# contain dot files that are included in the documentation (see the \dotfile
+# command).
+# This tag requires that the tag HAVE_DOT is set to YES.
+
+DOTFILE_DIRS =
+
+# The MSCFILE_DIRS tag can be used to specify one or more directories that
+# contain msc files that are included in the documentation (see the \mscfile
+# command).
+
+MSCFILE_DIRS =
+
+# The DIAFILE_DIRS tag can be used to specify one or more directories that
+# contain dia files that are included in the documentation (see the \diafile
+# command).
+
+DIAFILE_DIRS =
+
+# When using plantuml, the PLANTUML_JAR_PATH tag should be used to specify the
+# path where java can find the plantuml.jar file, or the filename of the jar
+# file to be used. If left blank, it is assumed PlantUML is not used or called during
+# to be used. If left blank, it is assumed PlantUML is not used or called during
+# a preprocessing step. Doxygen will generate a warning when it encounters a
+# \startuml command in this case and will not generate output for the diagram.
+
+PLANTUML_JAR_PATH =
+
+# When using plantuml, the PLANTUML_CFG_FILE tag can be used to specify a
+# configuration file for plantuml.
+
+PLANTUML_CFG_FILE =
+
+# When using plantuml, the specified paths are searched for files specified by
+# the !include statement in a plantuml block.
+
+PLANTUML_INCLUDE_PATH =
+
+# The DOT_GRAPH_MAX_NODES tag can be used to set the maximum number of nodes
+# that will be shown in the graph. If the number of nodes in a graph becomes
+# larger than this value, doxygen will truncate the graph, which is visualized
+# by representing a node as a red box. Note that if the number of direct
+# children of the root node in a graph is already larger than
+# DOT_GRAPH_MAX_NODES, then the graph will not be shown at all. Also note that
+# the size of a graph can be further restricted by MAX_DOT_GRAPH_DEPTH.
+# Minimum value: 0, maximum value: 10000, default value: 50.
+# This tag requires that the tag HAVE_DOT is set to YES.
+
+DOT_GRAPH_MAX_NODES = 50
+
+# The MAX_DOT_GRAPH_DEPTH tag can be used to set the maximum depth of the graphs
+# generated by dot. A depth value of 3 means that only nodes reachable from the
+# root by following a path via at most 3 edges will be shown. Nodes that lay
+# further from the root node will be omitted. Note that setting this option to 1
+# or 2 may greatly reduce the computation time needed for large code bases. Also
+# note that the size of a graph can be further restricted by
+# DOT_GRAPH_MAX_NODES. Using a depth of 0 means no depth restriction.
+# Minimum value: 0, maximum value: 1000, default value: 0.
+# This tag requires that the tag HAVE_DOT is set to YES.
+
+MAX_DOT_GRAPH_DEPTH = 0
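"Nodes reachable from the root by following at most N edges" is a depth-limited breadth-first search; a minimal Python sketch of the restriction (not doxygen's actual implementation):

```python
from collections import deque

def reachable_within(graph: dict, root, max_depth: int) -> set:
    """Return nodes reachable from root via at most max_depth edges,
    as MAX_DOT_GRAPH_DEPTH restricts them; 0 means no restriction."""
    limit = float("inf") if max_depth == 0 else max_depth
    seen = {root}
    queue = deque([(root, 0)])
    while queue:
        node, depth = queue.popleft()
        if depth >= limit:
            continue  # edges beyond the depth limit are not followed
        for nxt in graph.get(node, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, depth + 1))
    return seen
```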
+
+# Set the DOT_MULTI_TARGETS tag to YES to allow dot to generate multiple output
+# files in one run (i.e. multiple -o and -T options on the command line). This
+# makes dot run faster, but since only newer versions of dot (>1.8.10) support
+# this, this feature is disabled by default.
+# The default value is: NO.
+# This tag requires that the tag HAVE_DOT is set to YES.
+
+DOT_MULTI_TARGETS = NO
+
+# If the GENERATE_LEGEND tag is set to YES doxygen will generate a legend page
+# explaining the meaning of the various boxes and arrows in the dot generated
+# graphs.
+# Note: This tag requires that UML_LOOK isn't set, i.e. the doxygen internal
+# graphical representation for inheritance and collaboration diagrams is used.
+# The default value is: YES.
+# This tag requires that the tag HAVE_DOT is set to YES.
+
+GENERATE_LEGEND = YES
+
+# If the DOT_CLEANUP tag is set to YES, doxygen will remove the intermediate
+# files that are used to generate the various graphs.
+#
+# Note: This setting is not only used for dot files but also for msc temporary
+# files.
+# The default value is: YES.
+
+DOT_CLEANUP = YES
diff --git a/LICENSE/index.html b/LICENSE/index.html
new file mode 100644
index 000000000..dc3bc242c
--- /dev/null
+++ b/LICENSE/index.html
@@ -0,0 +1,2084 @@
+ License - osm-fieldwork
+
+GNU AFFERO GENERAL PUBLIC LICENSE
+Version 3, 19 November 2007
+Copyright (C) 2007 Free Software Foundation, Inc.
+https://fsf.org/
+Everyone is permitted to copy and distribute verbatim copies of this
+license document, but changing it is not allowed.
+Preamble
+The GNU Affero General Public License is a free, copyleft license for
+software and other kinds of works, specifically designed to ensure
+cooperation with the community in the case of network server software.
+The licenses for most software and other practical works are designed
+to take away your freedom to share and change the works. By contrast,
+our General Public Licenses are intended to guarantee your freedom to
+share and change all versions of a program--to make sure it remains
+free software for all its users.
+When we speak of free software, we are referring to freedom, not
+price. Our General Public Licenses are designed to make sure that you
+have the freedom to distribute copies of free software (and charge for
+them if you wish), that you receive source code or can get it if you
+want it, that you can change the software or use pieces of it in new
+free programs, and that you know you can do these things.
+Developers that use our General Public Licenses protect your rights
+with two steps: (1) assert copyright on the software, and (2) offer
+you this License which gives you legal permission to copy, distribute
+and/or modify the software.
+A secondary benefit of defending all users' freedom is that
+improvements made in alternate versions of the program, if they
+receive widespread use, become available for other developers to
+incorporate. Many developers of free software are heartened and
+encouraged by the resulting cooperation. However, in the case of
+software used on network servers, this result may fail to come about.
+The GNU General Public License permits making a modified version and
+letting the public access it on a server without ever releasing its
+source code to the public.
+The GNU Affero General Public License is designed specifically to
+ensure that, in such cases, the modified source code becomes available
+to the community. It requires the operator of a network server to
+provide the source code of the modified version running there to the
+users of that server. Therefore, public use of a modified version, on
+a publicly accessible server, gives the public access to the source
+code of the modified version.
+An older license, called the Affero General Public License and
+published by Affero, was designed to accomplish similar goals. This is
+a different license, not a version of the Affero GPL, but Affero has
+released a new version of the Affero GPL which permits relicensing
+under this license.
+The precise terms and conditions for copying, distribution and
+modification follow.
+TERMS AND CONDITIONS
+0. Definitions
+"This License" refers to version 3 of the GNU Affero General Public
+License.
+"Copyright" also means copyright-like laws that apply to other kinds
+of works, such as semiconductor masks.
+"The Program" refers to any copyrightable work licensed under this
+License. Each licensee is addressed as "you". "Licensees" and
+"recipients" may be individuals or organizations.
+To "modify" a work means to copy from or adapt all or part of the work
+in a fashion requiring copyright permission, other than the making of
+an exact copy. The resulting work is called a "modified version" of
+the earlier work or a work "based on" the earlier work.
+A "covered work" means either the unmodified Program or a work based
+on the Program.
+To "propagate" a work means to do anything with it that, without
+permission, would make you directly or secondarily liable for
+infringement under applicable copyright law, except executing it on a
+computer or modifying a private copy. Propagation includes copying,
+distribution (with or without modification), making available to the
+public, and in some countries other activities as well.
+To "convey" a work means any kind of propagation that enables other
+parties to make or receive copies. Mere interaction with a user
+through a computer network, with no transfer of a copy, is not
+conveying.
+An interactive user interface displays "Appropriate Legal Notices" to
+the extent that it includes a convenient and prominently visible
+feature that (1) displays an appropriate copyright notice, and (2)
+tells the user that there is no warranty for the work (except to the
+extent that warranties are provided), that licensees may convey the
+work under this License, and how to view a copy of this License. If
+the interface presents a list of user commands or options, such as a
+menu, a prominent item in the list meets this criterion.
+1. Source Code
+The "source code" for a work means the preferred form of the work for
+making modifications to it. "Object code" means any non-source form of
+a work.
+A "Standard Interface" means an interface that either is an official
+standard defined by a recognized standards body, or, in the case of
+interfaces specified for a particular programming language, one that
+is widely used among developers working in that language.
+The "System Libraries" of an executable work include anything, other
+than the work as a whole, that (a) is included in the normal form of
+packaging a Major Component, but which is not part of that Major
+Component, and (b) serves only to enable use of the work with that
+Major Component, or to implement a Standard Interface for which an
+implementation is available to the public in source code form. A
+"Major Component", in this context, means a major essential component
+(kernel, window system, and so on) of the specific operating system
+(if any) on which the executable work runs, or a compiler used to
+produce the work, or an object code interpreter used to run it.
+The "Corresponding Source" for a work in object code form means all
+the source code needed to generate, install, and (for an executable
+work) run the object code and to modify the work, including scripts to
+control those activities. However, it does not include the work's
+System Libraries, or general-purpose tools or generally available free
+programs which are used unmodified in performing those activities but
+which are not part of the work. For example, Corresponding Source
+includes interface definition files associated with source files for
+the work, and the source code for shared libraries and dynamically
+linked subprograms that the work is specifically designed to require,
+such as by intimate data communication or control flow between those
+subprograms and other parts of the work.
+The Corresponding Source need not include anything that users can
+regenerate automatically from other parts of the Corresponding Source.
+The Corresponding Source for a work in source code form is that same
+work.
+2. Basic Permissions
+All rights granted under this License are granted for the term of
+copyright on the Program, and are irrevocable provided the stated
+conditions are met. This License explicitly affirms your unlimited
+permission to run the unmodified Program. The output from running a
+covered work is covered by this License only if the output, given its
+content, constitutes a covered work. This License acknowledges your
+rights of fair use or other equivalent, as provided by copyright law.
+You may make, run and propagate covered works that you do not convey,
+without conditions so long as your license otherwise remains in force.
+You may convey covered works to others for the sole purpose of having
+them make modifications exclusively for you, or provide you with
+facilities for running those works, provided that you comply with the
+terms of this License in conveying all material for which you do not
+control copyright. Those thus making or running the covered works for
+you must do so exclusively on your behalf, under your direction and
+control, on terms that prohibit them from making any copies of your
+copyrighted material outside their relationship with you.
+Conveying under any other circumstances is permitted solely under the
+conditions stated below. Sublicensing is not allowed; section 10 makes
+it unnecessary.
+3. Protecting Users' Legal Rights From Anti-Circumvention Law
+No covered work shall be deemed part of an effective technological
+measure under any applicable law fulfilling obligations under article
+11 of the WIPO copyright treaty adopted on 20 December 1996, or
+similar laws prohibiting or restricting circumvention of such
+measures.
+When you convey a covered work, you waive any legal power to forbid
+circumvention of technological measures to the extent such
+circumvention is effected by exercising rights under this License with
+respect to the covered work, and you disclaim any intention to limit
+operation or modification of the work as a means of enforcing, against
+the work's users, your or third parties' legal rights to forbid
+circumvention of technological measures.
+4. Conveying Verbatim Copies
+You may convey verbatim copies of the Program's source code as you
+receive it, in any medium, provided that you conspicuously and
+appropriately publish on each copy an appropriate copyright notice;
+keep intact all notices stating that this License and any
+non-permissive terms added in accord with section 7 apply to the code;
+keep intact all notices of the absence of any warranty; and give all
+recipients a copy of this License along with the Program.
+You may charge any price or no price for each copy that you convey,
+and you may offer support or warranty protection for a fee.
+5. Conveying Modified Source Versions
+You may convey a work based on the Program, or the modifications to
+produce it from the Program, in the form of source code under the
+terms of section 4, provided that you also meet all of these
+conditions:
+
+a) The work must carry prominent notices stating that you modified
+ it, and giving a relevant date.
+b) The work must carry prominent notices stating that it is
+ released under this License and any conditions added under
+ section 7. This requirement modifies the requirement in section 4
+ to "keep intact all notices".
+c) You must license the entire work, as a whole, under this
+ License to anyone who comes into possession of a copy. This
+ License will therefore apply, along with any applicable section 7
+ additional terms, to the whole of the work, and all its parts,
+ regardless of how they are packaged. This License gives no
+ permission to license the work in any other way, but it does not
+ invalidate such permission if you have separately received it.
+d) If the work has interactive user interfaces, each must display
+ Appropriate Legal Notices; however, if the Program has interactive
+ interfaces that do not display Appropriate Legal Notices, your
+ work need not make them do so.
+
+A compilation of a covered work with other separate and independent
+works, which are not by their nature extensions of the covered work,
+and which are not combined with it such as to form a larger program,
+in or on a volume of a storage or distribution medium, is called an
+"aggregate" if the compilation and its resulting copyright are not
+used to limit the access or legal rights of the compilation's users
+beyond what the individual works permit. Inclusion of a covered work
+in an aggregate does not cause this License to apply to the other
+parts of the aggregate.
+
+6. Conveying Non-Source Forms
+You may convey a covered work in object code form under the terms of
+sections 4 and 5, provided that you also convey the machine-readable
+Corresponding Source under the terms of this License, in one of these
+ways:
+
+a) Convey the object code in, or embodied in, a physical product
+ (including a physical distribution medium), accompanied by the
+ Corresponding Source fixed on a durable physical medium
+ customarily used for software interchange.
+b) Convey the object code in, or embodied in, a physical product
+ (including a physical distribution medium), accompanied by a
+ written offer, valid for at least three years and valid for as
+ long as you offer spare parts or customer support for that product
+ model, to give anyone who possesses the object code either (1) a
+ copy of the Corresponding Source for all the software in the
+ product that is covered by this License, on a durable physical
+ medium customarily used for software interchange, for a price no
+ more than your reasonable cost of physically performing this
+ conveying of source, or (2) access to copy the Corresponding
+ Source from a network server at no charge.
+c) Convey individual copies of the object code with a copy of the
+ written offer to provide the Corresponding Source. This
+ alternative is allowed only occasionally and noncommercially, and
+ only if you received the object code with such an offer, in accord
+ with subsection 6b.
+d) Convey the object code by offering access from a designated
+ place (gratis or for a charge), and offer equivalent access to the
+ Corresponding Source in the same way through the same place at no
+ further charge. You need not require recipients to copy the
+ Corresponding Source along with the object code. If the place to
+ copy the object code is a network server, the Corresponding Source
+ may be on a different server (operated by you or a third party)
+ that supports equivalent copying facilities, provided you maintain
+ clear directions next to the object code saying where to find the
+ Corresponding Source. Regardless of what server hosts the
+ Corresponding Source, you remain obligated to ensure that it is
+ available for as long as needed to satisfy these requirements.
+e) Convey the object code using peer-to-peer transmission,
+ provided you inform other peers where the object code and
+ Corresponding Source of the work are being offered to the general
+ public at no charge under subsection 6d.
+
+A separable portion of the object code, whose source code is excluded
+from the Corresponding Source as a System Library, need not be
+included in conveying the object code work.
+A "User Product" is either (1) a "consumer product", which means any
+tangible personal property which is normally used for personal,
+family, or household purposes, or (2) anything designed or sold for
+incorporation into a dwelling. In determining whether a product is a
+consumer product, doubtful cases shall be resolved in favor of
+coverage. For a particular product received by a particular user,
+"normally used" refers to a typical or common use of that class of
+product, regardless of the status of the particular user or of the way
+in which the particular user actually uses, or expects or is expected
+to use, the product. A product is a consumer product regardless of
+whether the product has substantial commercial, industrial or
+non-consumer uses, unless such uses represent the only significant
+mode of use of the product.
+"Installation Information" for a User Product means any methods,
+procedures, authorization keys, or other information required to
+install and execute modified versions of a covered work in that User
+Product from a modified version of its Corresponding Source. The
+information must suffice to ensure that the continued functioning of
+the modified object code is in no case prevented or interfered with
+solely because modification has been made.
+If you convey an object code work under this section in, or with, or
+specifically for use in, a User Product, and the conveying occurs as
+part of a transaction in which the right of possession and use of the
+User Product is transferred to the recipient in perpetuity or for a
+fixed term (regardless of how the transaction is characterized), the
+Corresponding Source conveyed under this section must be accompanied
+by the Installation Information. But this requirement does not apply
+if neither you nor any third party retains the ability to install
+modified object code on the User Product (for example, the work has
+been installed in ROM).
+The requirement to provide Installation Information does not include a
+requirement to continue to provide support service, warranty, or
+updates for a work that has been modified or installed by the
+recipient, or for the User Product in which it has been modified or
+installed. Access to a network may be denied when the modification
+itself materially and adversely affects the operation of the network
+or violates the rules and protocols for communication across the
+network.
+Corresponding Source conveyed, and Installation Information provided,
+in accord with this section must be in a format that is publicly
+documented (and with an implementation available to the public in
+source code form), and must require no special password or key for
+unpacking, reading or copying.
+7. Additional Terms
+"Additional permissions" are terms that supplement the terms of this
+License by making exceptions from one or more of its conditions.
+Additional permissions that are applicable to the entire Program shall
+be treated as though they were included in this License, to the extent
+that they are valid under applicable law. If additional permissions
+apply only to part of the Program, that part may be used separately
+under those permissions, but the entire Program remains governed by
+this License without regard to the additional permissions.
+When you convey a copy of a covered work, you may at your option
+remove any additional permissions from that copy, or from any part of
+it. (Additional permissions may be written to require their own
+removal in certain cases when you modify the work.) You may place
+additional permissions on material, added by you to a covered work,
+for which you have or can give appropriate copyright permission.
+Notwithstanding any other provision of this License, for material you
+add to a covered work, you may (if authorized by the copyright holders
+of that material) supplement the terms of this License with terms:
+
+a) Disclaiming warranty or limiting liability differently from the
+ terms of sections 15 and 16 of this License; or
+b) Requiring preservation of specified reasonable legal notices or
+ author attributions in that material or in the Appropriate Legal
+ Notices displayed by works containing it; or
+c) Prohibiting misrepresentation of the origin of that material,
+ or requiring that modified versions of such material be marked in
+ reasonable ways as different from the original version; or
+d) Limiting the use for publicity purposes of names of licensors
+ or authors of the material; or
+e) Declining to grant rights under trademark law for use of some
+ trade names, trademarks, or service marks; or
+f) Requiring indemnification of licensors and authors of that
+ material by anyone who conveys the material (or modified versions
+ of it) with contractual assumptions of liability to the recipient,
+ for any liability that these contractual assumptions directly
+ impose on those licensors and authors.
+
+All other non-permissive additional terms are considered "further
+restrictions" within the meaning of section 10. If the Program as you
+received it, or any part of it, contains a notice stating that it is
+governed by this License along with a term that is a further
+restriction, you may remove that term. If a license document contains
+a further restriction but permits relicensing or conveying under this
+License, you may add to a covered work material governed by the terms
+of that license document, provided that the further restriction does
+not survive such relicensing or conveying.
+If you add terms to a covered work in accord with this section, you
+must place, in the relevant source files, a statement of the
+additional terms that apply to those files, or a notice indicating
+where to find the applicable terms.
+Additional terms, permissive or non-permissive, may be stated in the
+form of a separately written license, or stated as exceptions; the
+above requirements apply either way.
+8. Termination
+You may not propagate or modify a covered work except as expressly
+provided under this License. Any attempt otherwise to propagate or
+modify it is void, and will automatically terminate your rights under
+this License (including any patent licenses granted under the third
+paragraph of section 11).
+However, if you cease all violation of this License, then your license
+from a particular copyright holder is reinstated (a) provisionally,
+unless and until the copyright holder explicitly and finally
+terminates your license, and (b) permanently, if the copyright holder
+fails to notify you of the violation by some reasonable means prior to
+60 days after the cessation.
+Moreover, your license from a particular copyright holder is
+reinstated permanently if the copyright holder notifies you of the
+violation by some reasonable means, this is the first time you have
+received notice of violation of this License (for any work) from that
+copyright holder, and you cure the violation prior to 30 days after
+your receipt of the notice.
+Termination of your rights under this section does not terminate the
+licenses of parties who have received copies or rights from you under
+this License. If your rights have been terminated and not permanently
+reinstated, you do not qualify to receive new licenses for the same
+material under section 10.
+9. Acceptance Not Required for Having Copies
+You are not required to accept this License in order to receive or run
+a copy of the Program. Ancillary propagation of a covered work
+occurring solely as a consequence of using peer-to-peer transmission
+to receive a copy likewise does not require acceptance. However,
+nothing other than this License grants you permission to propagate or
+modify any covered work. These actions infringe copyright if you do
+not accept this License. Therefore, by modifying or propagating a
+covered work, you indicate your acceptance of this License to do so.
+10. Automatic Licensing of Downstream Recipients
+Each time you convey a covered work, the recipient automatically
+receives a license from the original licensors, to run, modify and
+propagate that work, subject to this License. You are not responsible
+for enforcing compliance by third parties with this License.
+An "entity transaction" is a transaction transferring control of an
+organization, or substantially all assets of one, or subdividing an
+organization, or merging organizations. If propagation of a covered
+work results from an entity transaction, each party to that
+transaction who receives a copy of the work also receives whatever
+licenses to the work the party's predecessor in interest had or could
+give under the previous paragraph, plus a right to possession of the
+Corresponding Source of the work from the predecessor in interest, if
+the predecessor has it or can get it with reasonable efforts.
+You may not impose any further restrictions on the exercise of the
+rights granted or affirmed under this License. For example, you may
+not impose a license fee, royalty, or other charge for exercise of
+rights granted under this License, and you may not initiate litigation
+(including a cross-claim or counterclaim in a lawsuit) alleging that
+any patent claim is infringed by making, using, selling, offering for
+sale, or importing the Program or any portion of it.
+11. Patents
+A "contributor" is a copyright holder who authorizes use under this
+License of the Program or a work on which the Program is based. The
+work thus licensed is called the contributor's "contributor version".
+A contributor's "essential patent claims" are all patent claims owned
+or controlled by the contributor, whether already acquired or
+hereafter acquired, that would be infringed by some manner, permitted
+by this License, of making, using, or selling its contributor version,
+but do not include claims that would be infringed only as a
+consequence of further modification of the contributor version. For
+purposes of this definition, "control" includes the right to grant
+patent sublicenses in a manner consistent with the requirements of
+this License.
+Each contributor grants you a non-exclusive, worldwide, royalty-free
+patent license under the contributor's essential patent claims, to
+make, use, sell, offer for sale, import and otherwise run, modify and
+propagate the contents of its contributor version.
+In the following three paragraphs, a "patent license" is any express
+agreement or commitment, however denominated, not to enforce a patent
+(such as an express permission to practice a patent or covenant not to
+sue for patent infringement). To "grant" such a patent license to a
+party means to make such an agreement or commitment not to enforce a
+patent against the party.
+If you convey a covered work, knowingly relying on a patent license,
+and the Corresponding Source of the work is not available for anyone
+to copy, free of charge and under the terms of this License, through a
+publicly available network server or other readily accessible means,
+then you must either (1) cause the Corresponding Source to be so
+available, or (2) arrange to deprive yourself of the benefit of the
+patent license for this particular work, or (3) arrange, in a manner
+consistent with the requirements of this License, to extend the patent
+license to downstream recipients. "Knowingly relying" means you have
+actual knowledge that, but for the patent license, your conveying the
+covered work in a country, or your recipient's use of the covered work
+in a country, would infringe one or more identifiable patents in that
+country that you have reason to believe are valid.
+If, pursuant to or in connection with a single transaction or
+arrangement, you convey, or propagate by procuring conveyance of, a
+covered work, and grant a patent license to some of the parties
+receiving the covered work authorizing them to use, propagate, modify
+or convey a specific copy of the covered work, then the patent license
+you grant is automatically extended to all recipients of the covered
+work and works based on it.
+A patent license is "discriminatory" if it does not include within the
+scope of its coverage, prohibits the exercise of, or is conditioned on
+the non-exercise of one or more of the rights that are specifically
+granted under this License. You may not convey a covered work if you
+are a party to an arrangement with a third party that is in the
+business of distributing software, under which you make payment to the
+third party based on the extent of your activity of conveying the
+work, and under which the third party grants, to any of the parties
+who would receive the covered work from you, a discriminatory patent
+license (a) in connection with copies of the covered work conveyed by
+you (or copies made from those copies), or (b) primarily for and in
+connection with specific products or compilations that contain the
+covered work, unless you entered into that arrangement, or that patent
+license was granted, prior to 28 March 2007.
+Nothing in this License shall be construed as excluding or limiting
+any implied license or other defenses to infringement that may
+otherwise be available to you under applicable patent law.
+12. No Surrender of Others' Freedom
+If conditions are imposed on you (whether by court order, agreement or
+otherwise) that contradict the conditions of this License, they do not
+excuse you from the conditions of this License. If you cannot convey a
+covered work so as to satisfy simultaneously your obligations under
+this License and any other pertinent obligations, then as a
+consequence you may not convey it at all. For example, if you agree to
+terms that obligate you to collect a royalty for further conveying
+from those to whom you convey the Program, the only way you could
+satisfy both those terms and this License would be to refrain entirely
+from conveying the Program.
+13. Remote Network Interaction; Use with the GNU General Public License
+Notwithstanding any other provision of this License, if you modify the
+Program, your modified version must prominently offer all users
+interacting with it remotely through a computer network (if your
+version supports such interaction) an opportunity to receive the
+Corresponding Source of your version by providing access to the
+Corresponding Source from a network server at no charge, through some
+standard or customary means of facilitating copying of software. This
+Corresponding Source shall include the Corresponding Source for any
+work covered by version 3 of the GNU General Public License that is
+incorporated pursuant to the following paragraph.
+Notwithstanding any other provision of this License, you have
+permission to link or combine any covered work with a work licensed
+under version 3 of the GNU General Public License into a single
+combined work, and to convey the resulting work. The terms of this
+License will continue to apply to the part which is the covered work,
+but the work with which it is combined will remain governed by version
+3 of the GNU General Public License.
+14. Revised Versions of this License
+The Free Software Foundation may publish revised and/or new versions
+of the GNU Affero General Public License from time to time. Such new
+versions will be similar in spirit to the present version, but may
+differ in detail to address new problems or concerns.
+Each version is given a distinguishing version number. If the Program
+specifies that a certain numbered version of the GNU Affero General
+Public License "or any later version" applies to it, you have the
+option of following the terms and conditions either of that numbered
+version or of any later version published by the Free Software
+Foundation. If the Program does not specify a version number of the
+GNU Affero General Public License, you may choose any version ever
+published by the Free Software Foundation.
+If the Program specifies that a proxy can decide which future versions
+of the GNU Affero General Public License can be used, that proxy's
+public statement of acceptance of a version permanently authorizes you
+to choose that version for the Program.
+Later license versions may give you additional or different
+permissions. However, no additional obligations are imposed on any
+author or copyright holder as a result of your choosing to follow a
+later version.
+15. Disclaimer of Warranty
+THERE IS NO WARRANTY FOR THE PROGRAM, TO THE EXTENT PERMITTED BY
+APPLICABLE LAW. EXCEPT WHEN OTHERWISE STATED IN WRITING THE COPYRIGHT
+HOLDERS AND/OR OTHER PARTIES PROVIDE THE PROGRAM "AS IS" WITHOUT
+WARRANTY OF ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING, BUT NOT
+LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+A PARTICULAR PURPOSE. THE ENTIRE RISK AS TO THE QUALITY AND
+PERFORMANCE OF THE PROGRAM IS WITH YOU. SHOULD THE PROGRAM PROVE
+DEFECTIVE, YOU ASSUME THE COST OF ALL NECESSARY SERVICING, REPAIR OR
+CORRECTION.
+16. Limitation of Liability
+IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN WRITING
+WILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MODIFIES AND/OR
+CONVEYS THE PROGRAM AS PERMITTED ABOVE, BE LIABLE TO YOU FOR DAMAGES,
+INCLUDING ANY GENERAL, SPECIAL, INCIDENTAL OR CONSEQUENTIAL DAMAGES
+ARISING OUT OF THE USE OR INABILITY TO USE THE PROGRAM (INCLUDING BUT
+NOT LIMITED TO LOSS OF DATA OR DATA BEING RENDERED INACCURATE OR
+LOSSES SUSTAINED BY YOU OR THIRD PARTIES OR A FAILURE OF THE PROGRAM
+TO OPERATE WITH ANY OTHER PROGRAMS), EVEN IF SUCH HOLDER OR OTHER
+PARTY HAS BEEN ADVISED OF THE POSSIBILITY OF SUCH DAMAGES.
+17. Interpretation of Sections 15 and 16
+If the disclaimer of warranty and limitation of liability provided
+above cannot be given local legal effect according to their terms,
+reviewing courts shall apply local law that most closely approximates
+an absolute waiver of all civil liability in connection with the
+Program, unless a warranty or assumption of liability accompanies a
+copy of the Program in return for a fee.
+END OF TERMS AND CONDITIONS
+How to Apply These Terms to Your New Programs
+If you develop a new program, and you want it to be of the greatest
+possible use to the public, the best way to achieve this is to make it
+free software which everyone can redistribute and change under these
+terms.
+To do so, attach the following notices to the program. It is safest to
+attach them to the start of each source file to most effectively state
+the exclusion of warranty; and each file should have at least the
+"copyright" line and a pointer to where the full notice is found.
+ <one line to give the program's name and a brief idea of what it does.>
+ Copyright (C) <year> <name of author>
+
+ This program is free software: you can redistribute it and/or modify
+ it under the terms of the GNU Affero General Public License as
+ published by the Free Software Foundation, either version 3 of the
+ License, or (at your option) any later version.
+
+ This program is distributed in the hope that it will be useful,
+ but WITHOUT ANY WARRANTY; without even the implied warranty of
+ MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ GNU Affero General Public License for more details.
+
+ You should have received a copy of the GNU Affero General Public License
+ along with this program. If not, see <https://www.gnu.org/licenses/>.
+
+Also add information on how to contact you by electronic and paper
+mail.
+If your software can interact with users remotely through a computer
+network, you should also make sure that it provides a way for users to
+get its source. For example, if your program is a web application, its
+interface could display a "Source" link that leads users to an archive
+of the code. There are many ways you could offer source, and different
+solutions will be better for different programs; see section 13 for
+the specific requirements.
+You should also get your employer (if you work as a programmer) or
+school, if any, to sign a "copyright disclaimer" for the program, if
+necessary. For more information on this, and how to apply and follow
+the GNU AGPL, see https://www.gnu.org/licenses/ .
+ Last update:
+ October 18, 2024
\ No newline at end of file
diff --git a/about/About/index.html b/about/About/index.html
new file mode 100644
index 000000000..0289b5488
--- /dev/null
+++ b/about/About/index.html
@@ -0,0 +1,1339 @@
+ About - osm-fieldwork
+ About
+
+
+
+OSM-Fieldwork Project
+Osm_Fieldwork is a project that simplifies converting data collected using ODK into OpenStreetMap format. It consists of several utility programs that automate different parts of the data flow, including creating satellite imagery basemaps and OpenStreetMap data extracts so they can be used with ODK Collect. It is maintained by the Humanitarian OpenStreetMap Team (HOT) and designed to work with ODK Collect, an Android app for data collection, and ODK Central, a web-based platform for managing and visualizing data.
+osm_fieldwork
+This program converts the data collected with ODK Collect into the proper OpenStreetMap tagging schema. The conversion is controlled by a YAML file, which makes it easy to modify for other projects. The output is an OSM XML formatted file for JOSM. However, no converted data should ever be uploaded to OSM without first validating the conversion in JOSM. To do a high-quality conversion from ODK to OSM, it's best to use the XLSForm library as a template, as everything is designed to work together.
+Osm_Fieldwork includes the following utilities:
+
+make_data_extract.py
: extracts OpenStreetMap data within a given boundary and category (e.g., buildings, amenities) using Overpass Turbo or a Postgres database.
+CSVDump.py
: converts a CSV file downloaded from ODK Central to OSM XML format.
+odk2csv.py
: converts an ODK XML instance file to the same CSV format used by ODK Central.
+ODKDump.py
: extracts data from an ODK Collect instance XML file and converts it to OSM XML format.
+ODKForm.py
: parses an ODK XML form file and extracts its fields and data types.
+ODKInstance.py
: parses an ODK Collect instance XML file and extracts its fields and data values.
+
+Osm_Fieldwork also includes support modules, such as convert.py for processing YAML config files and osmfile.py for writing OSM XML output files.
+Installation
+To install osm-fieldwork, you can use pip. Here are two options:
+
+
+Directly from the main branch:
+ pip install git+https://github.com/hotosm/osm-fieldwork.git
+-OR-
+
+
+Latest on PyPi:
+ pip install Osm-Fieldwork
+
+
+
+Note: installation requires GDAL >3.4 installed on your system.
+
+Usage
+Each utility has its own command-line interface, with various options and arguments. You can find detailed instructions on how to use each utility by running it with the -h or --help option.
+For example, to extract OSM data within a boundary polygon from Overpass Turbo, run:
+./make_data_extract.py --overpass --boundary mycounty.geojson
+This will create a GeoJSON file with the extracted data.
+To convert a CSV file from ODK Central to OSM XML format, run:
+./CSVDump.py -i data.csv
+This will generate two output files: one an OSM XML file of the public data, the other a GeoJSON file with all the data.
+Contributing
+Osm_Fieldwork is an open-source project, and contributions are always welcome! If you want to contribute, please read the Contribution Guidelines and Code of Conduct first.
+License
+Osm_Fieldwork is released under the AGPLv3 .
\ No newline at end of file
diff --git a/about/CSVDump/index.html b/about/CSVDump/index.html
new file mode 100644
index 000000000..ca889353d
--- /dev/null
+++ b/about/CSVDump/index.html
@@ -0,0 +1,1248 @@
+ CSVDump - osm-fieldwork
+ CSVDump
+
+CSVDump.py
+CSVDump.py is a Python script that converts a CSV file downloaded from
+ODK Central to OpenStreetMap (OSM) XML format. The tool can be useful
+for users who want to work with OpenStreetMap data and want to convert
+ODK Central data into a compatible format.
+options:
+ -h, --help - show this help message and exit
+ -v, --verbose - verbose output
+ -i CSVFILE, --infile CSVFILE - Specifies the path and filename of the input CSV file downloaded from ODK Central. This option is required for the program to run.
+
+Examples
+To convert a CSV file named "survey_data.csv" located in the current
+working directory, the following command can be used:
+[path]/CSVDump.py -i survey_data.csv
+
+To enable verbose output during the conversion process, the following
+command can be used:
+[path]/CSVDump.py -i survey_data.csv -v
+
+
+CSVDump.py expects an input file in CSV format downloaded from ODK
+Central. The CSV file should have a header row with column names that
+correspond to the survey questions. Each row in the CSV file should
+contain a response to the survey questions, with each column
+representing a different question.
+
+The output of CSVDump.py is an OSM XML file that can be used with
+OpenStreetMap data tools and services. The converted OSM XML file will
+have tags for each survey question in the CSV file, as well as any
+metadata associated with the survey. The format of the OSM XML file
+generated by CSVDump.py is compatible with other OpenStreetMap data
+tools and services.
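The basic shape of this conversion can be sketched with Python's standard library alone. This is a hypothetical illustration, not CSVDump.py's actual code; the lat, lon, building, and name column names are invented for the example.

```python
import csv
import io
import xml.etree.ElementTree as ET

def csv_to_osm_xml(csv_text: str) -> str:
    """Turn CSV rows with coordinate columns into minimal OSM XML nodes."""
    root = ET.Element("osm", version="0.6", generator="csv-sketch")
    node_id = 0
    for row in csv.DictReader(io.StringIO(csv_text)):
        node_id -= 1  # negative ids conventionally mark new, unsaved objects
        node = ET.SubElement(root, "node", id=str(node_id),
                             lat=row.pop("lat"), lon=row.pop("lon"))
        # Every remaining answered question becomes a tag on the node.
        for key, value in row.items():
            if value:
                ET.SubElement(node, "tag", k=key, v=value)
    return ET.tostring(root, encoding="unicode")

sample = "lat,lon,building,name\n-1.95,30.06,yes,Market Hall\n"
xml_out = csv_to_osm_xml(sample)
```

The output is a single `<osm>` document whose nodes can be opened and validated in JOSM before any upload.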
+Limitations
+
+CSVDump.py only supports CSV files downloaded from ODK
+ Central. Other CSV files may not be compatible with the tool.
+The tool only supports simple data types such as strings, numbers,
+ and dates. Complex data types such as arrays and nested structures
+ are not supported.
+
\ No newline at end of file
diff --git a/about/Contribution/index.html b/about/Contribution/index.html
new file mode 100644
index 000000000..b6bf6316c
--- /dev/null
+++ b/about/Contribution/index.html
@@ -0,0 +1,1227 @@
+ Contribution - osm-fieldwork
+ Contribution
+
+:hugs: Welcome
+ First off, I'm really glad you're reading this, because we need volunteer developers to help improve Osm_Fieldwork and its integration with FMTM!
+We welcome and encourage contributors of all skill levels and we are committed to making sure your participation is inclusive, enjoyable and rewarding. If you have never contributed to an open source project before, we are a good place to start and will make sure you are supported every step of the way. If you have any questions, please ask!
+There are many ways to contribute to this repo:
+Testing
+Adding test cases, or simply testing out existing functionality.
+Report bugs and suggest improvements
+The issue queue is the best way to get started. There are issue templates for BUGs and FEATURES that you can use, or you can create your own. Once you have submitted an issue, it will be assigned one label from the following label categories. If you are wondering where to start, you can filter by the GoodFirstIssue label.
+Code contributions
+Create pull requests (PRs) for changes that you think are needed. We would really appreciate your help!
+Useful Resources for Contribution
+
+ Thank you
+Thank you very much in advance for your contributions!! Please ensure you refer to our Code of Conduct .
+If you've read the guidelines but are still not sure how to contribute on GitHub, please reach out to us via our Slack channel #geospatial-tech-and-innovation.
+
\ No newline at end of file
diff --git a/about/FAQ/index.html b/about/FAQ/index.html
new file mode 100644
index 000000000..e10c65db1
--- /dev/null
+++ b/about/FAQ/index.html
@@ -0,0 +1,1295 @@
+ FAQ - osm-fieldwork
+❓ Frequently Asked Questions ❓
+Q: What is OSM Fieldwork?
+A: OSM Fieldwork is a project to support field data collection using
+ODK and OpenStreetMap. Its primary functionality is converting data
+collected with ODK Collect into OSM XML. It can also create satellite
+imagery basemaps for ODK Collect and Osmand, and it includes a
+library of XLSForms focused on humanitarian data collection.
+
+
+Q: How do I install OSM Fieldwork ?
+A: To install osm-fieldwork, you can use pip. Here are two options:
+
+
+Directly from the main branch:
+ pip install git+https://github.com/hotosm/osm-fieldwork.git
+-OR-
+
+
+Latest on PyPi:
+ pip install Osm-Fieldwork
+
+
+
+
+Q: Where can I find the source code and the XLSForm library ?
+A: Check the osm-fieldwork git repo here
+
+
+Q: What language is Osm Fieldwork written in ?
+A: OSM Fieldwork is written in Python and uses other modules like
+shapely ,
+pyxform ,
+xmltodict ,
+pandas
+
+
+Q: What is the XLSForm library ?
+A: The library of XLSForms is primarily focused on humanitarian
+data collection, and follows data models designed by the Humanitarian
+OpenStreetMap Team in consultation with other humanitarian
+NGOs. These are designed for efficient data collection and
+conversion to OSM XML format, allowing easy, high-quality
+contributions to the map.
+
+
+Q: Who can contribute to osm-fieldwork?
+A: It is an open-source project, and contributions from developers
+and technical writers are always welcome.
+
+
+Q: What kind of contributions can I make ?
+A: There are several ways you can contribute to osm-fieldwork, including:
+
+Development: If you have experience in development, you can contribute
+ by fixing bugs, adding new features, or improving the existing codebase.
+
+
+Documentation: If you have experience in technical writing, you can
+ contribute by writing documentation, tutorials, or other educational
+ materials.
+
+
+Testing: If you have experience in software testing, you can
+ contribute by testing the application and reporting bugs or suggesting
+ improvements.
+
+
+
+Q: How can I report a bug or suggest a new feature for OSM
+Fieldwork ?
+A: You can report bugs or suggest new features by opening an issue
+on the OSM Fieldwork
+repository on
+GitHub. Be sure to provide as much detail as possible, including
+steps to reproduce the bug and any relevant error messages.
+For more details visit Contributions Page .
+
+
+Q: Do I need to have prior experience with XLSForms or python to
+contribute to OSM Fieldwork ?
+A: While prior experience with the various data formats used by OSM
+Fieldwork is helpful, it is not required to contribute to OSM
+Fieldwork. You can start by reviewing the documentation, exploring
+the codebase, and contributing to issues labeled as good first issue.
+
+
+Q: How can I get help or support for OSM Fieldwork ?
+A: If you need help or support with XLSForms, you can reach out to the
+ODK community on the ODK Forum . For
+questions on OSM Fieldwork you can open an issue on the OSM
+Fieldwork repository.
+
+
+Q: What are the benefits of contributing to OSM Fieldwork?
+A: Contributing to OSM Fieldwork allows you to help improve a widely
+used set of tools for data collection.
+
+
+Q: What is the license for OSM Fieldwork ?
+A: OSM Fieldwork is
+AGPLv3 ,
+because it encourages us to all work together. The XLSForms themselves
+are under the CC 4.0
+
+
+Q: How can I test my changes to OSM Fieldwork ?
+A: OSM Fieldwork has a suite of automated tests that you can run to
+ensure that your changes do not introduce new bugs or break existing
+functionality. You can run the tests locally on your computer using
+the command-line interface or by setting up a continuous integration
+environment on a platform like Travis CI.
+
+
+Q: Facing additional problems?
+A: Visit Troubleshooting .
+
\ No newline at end of file
diff --git a/about/Troubleshooting/index.html b/about/Troubleshooting/index.html
new file mode 100644
index 000000000..78ee7cfba
--- /dev/null
+++ b/about/Troubleshooting/index.html
@@ -0,0 +1,1344 @@
+ Troubleshooting - osm-fieldwork
+Troubleshooting
+Unable to connect to the ODKCentral server over http (i.e. insecure)
+By default, ODKCentral API connections are verified with SSL
+certificates. However, users may sometimes encounter issues
+connecting to ODK Central with self-signed certificates. This is
+common for developers running ODK Central in a local subnet without a
+public domain name.
+Here are some steps to troubleshoot and resolve the issue:
+
+Add the certificate to your system trusted certificate store.
+
+If you are using a self-signed certificate, make sure to add it to
+your system's trusted certificate store. For Ubuntu/Debian users,
+follow the steps below:
+In a terminal:
+sudo apt update && sudo apt install ca-certificates
+sudo cp cert.crt /usr/local/share/ca-certificates/
+sudo update-ca-certificates
+
+If running OSM Fieldwork within the
+Field Mapping Tasking Manager ,
+this is handled for you.
+Q: Can I disable SSL verification (not recommended)
+A: If you have tried the above step and still cannot connect to
+ODK Central, you can disable SSL verification for the
+certificate. However, this is not recommended as it will connect
+to ODK Central insecurely. To do this, add the environment
+variable ODK_CENTRAL_SECURE=False to your system.
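A client honoring such a flag might read the environment variable like this. The helper name and the accepted spellings are assumptions for illustration, not OSM Fieldwork's actual implementation.

```python
import os

def odk_central_verify(environ=os.environ) -> bool:
    """Return False only when ODK_CENTRAL_SECURE is explicitly disabled."""
    value = environ.get("ODK_CENTRAL_SECURE", "True")
    # Accept common "false" spellings; anything else keeps verification on,
    # so the insecure mode can never be enabled by accident.
    return value.strip().lower() not in ("0", "false", "no")

# An HTTP client would then pass verify=odk_central_verify() when
# opening connections to the ODK Central API.
```

Defaulting to secure when the variable is unset or malformed is the safer design: disabling certificate checks should require a deliberate, explicit setting.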
+Additional Troubleshooting Steps
+If you are still unable to connect to the ODKCentral server over HTTP:
+
+
+Verify that the ODK Central API URL is correct
+Make sure that you have entered the correct ODK Central API URL in
+your OSM Fieldwork configuration file. You can check the URL by
+logging into ODK Central and navigating to the "Site
+Configuration" page.
+
+
+
+
+Check that the ODK Central server is running
+Make sure that the ODK Central server is running and
+accessible. You can check the server status by navigating to the
+ODK Central API URL in your web browser.
+
+
+
+
+Check that the ODK Central server is reachable from your network
+Make sure that your network is not blocking the connection to the
+ODK Central server. You can try pinging the server from your
+computer to see if there is a network issue.
+
+
+
+
+Check that your firewall is not blocking the connection
+Make sure that your firewall is not blocking the connection to the
+ODK Central server. You can try temporarily disabling your
+firewall to see if this resolves the issue.
+
+
+
+
+
+Try using a different web browser
+If you are having trouble connecting to ODK Central through a web
+browser, try using a different browser to see if the issue
+persists. It is possible that the issue is related to the browser
+or its settings.
+
+
+
+
+Update OSM Fieldwork and ODK Central to the latest version
+Make sure that you are using the latest version of OSM Fieldwork
+and ODK Central. Check the OSM Fieldwork and ODK Central release
+notes to see if any updates address the issue you are
+experiencing.
+
+
\ No newline at end of file
diff --git a/about/basemapper/index.html b/about/basemapper/index.html
new file mode 100644
index 000000000..6b0b3f15f
--- /dev/null
+++ b/about/basemapper/index.html
@@ -0,0 +1,1306 @@
+ Basemapper.py - osm-fieldwork
+Basemapper.py
+Basemapper is a program that creates basemaps for mobile apps in the
+mbtiles and sqlitedb formats. These formats are commonly used by
+mobile mapping applications like Osmand and
+ODK Collect . There are two
+primary formats:
+
+mbtiles, supported by many apps.
+sqlitedb, supported only by Osmand
+
+Both of these formats use an underlying
+sqlite3 database, with similar
+schemas. The schema is a simple XYZ layout that stores a PNG or JPEG
+image per tile. When the entire planet is chopped into squares, there
+is a direct relation between a tile's indices and the GPS coordinates
+it covers. Small zoom levels cover a large area; higher zoom levels
+cover a smaller one.
+Basemapper does not store anything in memory, all processing
+is done as a stream so large areas can be downloaded. Time to go buy a
+really large hard drive. You can also use this map tile cache for
+any program that supports a TMS data source like
+JOSM . Luckily once downloaded,
+you don't have to update the map tile cache very often, but it's also
+easy to do so when you need to. When I expect to be working offline, I
+commonly download a larger area, and then in the field produce the
+smaller files.
+Basemapper downloads map tiles to a cache and uses them to generate
+the output files; it does not perform data conversion. The resulting
+output can be used for visualizing geographic data and analyzing
+survey responses in a spatial context. The script provides various
+command-line options for customizing the output, such as the zoom
+levels, boundary, tile cache, and output file name.
+Database Schemas
+Mbtiles are used by multiple mobile apps, but our usage is primarily
+for ODK Collect. Imagery basemaps are very useful for two
+reasons. One, the map data may be lacking, so the imagery helps you
+navigate. For ODK Collect the other advantage is that you can select
+the location based on where the building is, instead of where you are
+standing. Mbtiles are pretty straightforward.
+The sqlitedb schema used by Osmand looks the same at first, but has
+one big difference: it tops out at zoom level 16, so instead of
+incrementing, it decrements the zoom level. This obscure detail took
+me a while to figure out; it isn't documented anywhere.
+mbtiles
+CREATE TABLE tiles (zoom_level integer, tile_column integer, tile_row integer, tile_data blob);
+CREATE INDEX tiles_idx on tiles (zoom_level, tile_column, tile_row);
+CREATE TABLE metadata (name text, value text);
+CREATE UNIQUE INDEX metadata_idx ON metadata (name);
+
+sqlitedb
+CREATE TABLE tiles (x int, y int, z int, s int, image blob, PRIMARY KEY (x,y,z,s));
+CREATE INDEX IND on tiles (x,y,z,s);
+CREATE TABLE info (maxzoom Int, minzoom Int);
+CREATE TABLE android_metadata (en_US);
+
+Usage
+The basemapper.py script is run from the command line when running
+standalone, or the class can be imported into Python programs. The
+Field Mapping Tasking Manager uses it as part of a
+FastAPI ( https://fastapi.tiangolo.com/ ) backend for the website.
+The first time you run basemapper.py, it'll start downloading map
+tiles, which may take a long time. Often the upstream source is
+slow. It is not unusual for downloading tiles, especially at higher
+zoom levels, to take an entire day. Once the tiles are downloaded,
+producing the output files is quick, as then it's just packaging. In
+areas where I work frequently, I usually download a large area even if
+it takes a week or more, so it's available when I need it. On my
+laptop I actually have a map tile cache for the entire state of
+Colorado, as well as many large areas of Nepal, Turkey, Kenya, Uganda,
+and Tanzania.
+Options
+The basic syntax is as follows:
+
+-h, --help show this help message and exit
+-v, --verbose verbose output
+-b BOUNDARY, --boundary BOUNDARY - The boundary for the area you want, as BBOX string or geojson file.
+-z ZOOMS, --zooms ZOOMS - The Zoom levels
+-o OUTFILE, --outfile - OUTFILE Output file name
+-d OUTDIR, --outdir OUTDIR -Output directory name for tile cache
+-s {ersi,bing,topo,google,oam}, --source {ersi,bing,topo,google,oam} - Imagery source
+
+The suffix of the output file, either mbtiles or sqlitedb , is used
+to select the output format. The boundary, if given as a file, must
+be in GeoJSON format.
+If given as a BBOX string, it must be comma separated:
+"minX,minY,maxX,maxY".
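A BBOX string in that shape might be validated like this; the function name and error messages are invented for illustration, not Basemapper's actual parsing code.

```python
def parse_boundary(bbox: str) -> tuple[float, float, float, float]:
    """Parse a "minX,minY,maxX,maxY" BBOX string into four floats."""
    parts = [float(p) for p in bbox.split(",")]
    if len(parts) != 4:
        raise ValueError("expected four comma-separated values")
    min_x, min_y, max_x, max_y = parts
    # Sanity-check the ordering so a swapped pair fails early.
    if min_x >= max_x or min_y >= max_y:
        raise ValueError("min values must be smaller than max values")
    return min_x, min_y, max_x, max_y

box = parse_boundary("85.29,27.69,85.33,27.72")
```

Catching a malformed boundary before any tiles are requested avoids wasting hours of download time on the wrong area.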
+Imagery Sources
+
+ESRI - Environmental Systems Research Institute
+Bing - Microsoft Bing imagery
+Topo - USGS topographical maps (US only)
+OAM - OpenAerialMap
+
+The default output directory is /var/www/html . The actual
+subdirectory is the source name with tiles appended, so for
+example /var/www/html/oamtiles . Putting the map tiles into webroot
+lets JOSM or QGIS use them when working offline.
+Examples
+Example 1:
+Generate a basemap for Osmand using
+ESRI imagery, for an area
+specified by a geojson bounding box, and supporting zoom levels 12
+through 19.
+[path]/basemapper.py -z 12-19 -b test.geojson -o test.sqlitedb -s esri
+
+Example 2:
+As above, but mbtiles format, and Bing imagery source. The -v option
+enables verbose output, which will show more details about the
+download and processing progress. Also only download a single zoom
+level.
+[path]/basemapper.py -v -z 16 -b test.geojson -o test.mbtiles -s bing
+
+
+
+
+
+
+ Last update:
+ October 18, 2024
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
\ No newline at end of file
diff --git a/about/configuring/index.html b/about/configuring/index.html
new file mode 100644
index 000000000..33e977abc
--- /dev/null
+++ b/about/configuring/index.html
@@ -0,0 +1,1338 @@
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+ The Config File - osm-fieldwork
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+Configuring the Data Conversion
+Osm_Fieldwork uses a YAML-based configuration file that controls the
+conversion process. While ideally the tags in the XForm match
+OSM tags exactly, some survey questions generate very different primary
+tags. All of the strings in this file are lowercase, since everything
+is forced to lowercase when processing the CSV file.
+YAML is a simple syntax, and most of the config options are simply
+lists. For example:
+# All of the data that goes in a different non-OSM file
+private:
+ - income
+ - age
+ - gender
+
+There are 3 sections in the config file, ignore , convert , and
+private . Anything in the ignore section gets left out of all data
+processing and output files. Anything in the private section is kept
+out of the OSM output file, but included in a separate GeoJson
+formatted file. That file contains all the data from whoever is
+organizing this mapping campaign. There are often data items like
+gender that don't belong in OSM, but that information is useful
+to the organizers. Anything in the convert section is the real
+control of the conversion process.
+Here is an example of a configuration file, with its different
+sections and options explained in detail:
+#ignore section
+ignore:
+ - respondent_name
+ - survey_date
+
+#private section
+private:
+ - age
+ - gender
+
+#convert section
+convert:
+ #example of a simple conversion
+ - waterpoint:
+ - well: man_made=water_well
+ - natural: natural=water
+ #example of a conversion with multiple OSM tags
+ - power:
+ - solar: generator::source=solar,power=generator
+ - wind: generator::source=wind,power=generator
+ - hydro: generator::source=hydro,power=generator
+ - geothermal: generator::source=geothermal,power=generator
+ - grid: generator::source=electricity_network,power=generator
+
+The configuration file has three sections: ignore , private , and convert .
+The ignore section lists the names of data fields that should be
+ignored during the conversion process. These fields will not be
+included in any output files.
+The private section lists the names of data fields that are
+considered private and should not be included in the OSM output
+file. However, they will be included in a separate GeoJson formatted
+file. This file contains all the data from whoever is organizing the
+mapping campaign. An example of private data is gender, which is
+useful to the organizers but not relevant to OSM.
+The convert section is the real control of the conversion
+process. It lists the survey questions and their corresponding OSM
+tags and values. In this section, each survey question is represented
+by a tag name, and each answer to the survey question is represented
+by a value. If the answer matches the value, it returns both the tag
+and the value for OSM. An equal sign is used to delimit them.
+For example, in the configuration file above, the survey question
+about waterpoints has two possible answers: "well" and "natural". If
+the answer is "well", the corresponding OSM tag and value is
+"man_made=water_well". If the answer is "natural", the corresponding
+OSM tag and value is "natural=water".
+Another example in the same configuration file is the survey question
+about power sources. This survey question has five possible answers:
+"solar", "wind", "hydro", "geothermal", and "grid". Each answer
+corresponds to multiple OSM tags and values, which are separated by
+commas.
+For example, if the answer is "solar", the corresponding OSM tags and
+values are "generator::source=solar" and "power=generator". The double
+colon is used to represent a hierarchy in the OSM tags. In this
+example, the generator source is solar, and the power source is a
+generator.
+Both ODK and OSM use a tag/value pair. In OSM, the tags and values
+are documented, and the mapping community prefers people use the
+commonly accepted values. In ODK, the tags and values can be anything
+the developer of the XLSForm chooses. Depending on the answer to the
+survey question, that may be converted to a variety of OSM tags and
+values.
+For this example, the value used in the name column of the XLSForm
+survey sheet is waterpoint . It has several values listed
+underneath. Each of those is for the answer given to the waterpoint
+survey question. If the answer matches the value, it returns both the
+tag and the value for OSM. An equal sign is used to delimit them.
+- waterpoint:
+ - well: man_made=water_well
+ - natural: natural=water
+
+Some features have multiple OSM tags for a single survey question
+answer. To handle this case, all entries are delimited by a comma.
+- power:
+ - solar: generator::source=solar,power=generator
+ - wind: generator::source=wind,power=generator
+ - hydro: generator::source=hydro,power=generator
+ - geothermal: generator::source=geothermal,power=generator
+ - grid: generator::source=electricity_network,power=generator
+
+Overall, the configuration file is a powerful tool for customizing the
+conversion of ODK data into OSM tags and values. By carefully defining
+the ignore , private , and convert sections, you can control the
+output of the conversion process and ensure that it meets your needs.
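+The conversion logic described above can be sketched in a few lines of
+Python. This is a simplified illustration, not the actual
+osm-fieldwork implementation; the config is shown already parsed into
+a plain dictionary:
+
```python
# The convert section, as it would look after parsing the YAML file.
convert = {
    "waterpoint": {"well": "man_made=water_well", "natural": "natural=water"},
    "power": {
        "solar": "generator::source=solar,power=generator",
        "wind": "generator::source=wind,power=generator",
    },
}

def convert_answer(question: str, answer: str) -> dict:
    """Map one survey answer to its OSM tag/value pairs."""
    spec = convert.get(question, {}).get(answer)
    if spec is None:  # no conversion entry: pass the pair through unchanged
        return {question: answer}
    tags = {}
    for pair in spec.split(","):  # multiple OSM tags are comma-delimited
        tag, value = pair.split("=")
        tags[tag] = value
    return tags
```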
+Here's a simple chart of the conversion Data Flow .
+
+
+
+
+
+ Last update:
+ October 18, 2024
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
\ No newline at end of file
diff --git a/about/conflation.odg b/about/conflation.odg
new file mode 100644
index 000000000..aa7eff5c0
Binary files /dev/null and b/about/conflation.odg differ
diff --git a/about/conflation.svg b/about/conflation.svg
new file mode 100644
index 000000000..b53c92ec1
--- /dev/null
+++ b/about/conflation.svg
@@ -0,0 +1,222 @@
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+ Conflation Workflow
+
+
+
+
+
+
+
+ Download submission
+
+
+
+
+
+
+
+ Write to OSM XML file
+
+
+
+
+
+
+
+ Read line from file
+
+
+
+
+
+
+
+ Convert to OSM XML
+
+
+
+
+
+
+
+ Is a tag match in range ?
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+ Query database
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+ Add fixme tag
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+ Load into JOSM
+
+
+
+
+
+
+
+
+
+
+
+
+
+
\ No newline at end of file
diff --git a/about/conflation/index.html b/about/conflation/index.html
new file mode 100644
index 000000000..aefc9db6b
--- /dev/null
+++ b/about/conflation/index.html
@@ -0,0 +1,1482 @@
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+ Data conflation - osm-fieldwork
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+Conflating With OpenStreetMap
+Now that the data collected using ODK
+Collect has been converted to
+OSM XML, it needs to be conflated against the existing
+OpenStreetMap(OSM) data before validation by a human being. Due to
+the wonderful flexibility of the OpenStreetMap(OSM) data schema, this
+can be difficult to fully automate. At best it can assist the human
+mapper by identifying probable duplicates and other conflation
+issues. Rather than delete the possible duplicates, a tag is
+added so the mapper can find them easily and decide.
+Conflation algorithms are not very elegant; they are usually slow and
+brute force. But they still save the mapper time compared to doing
+this completely manually.
+This project's conflation software can use either a local postgres
+database, or a GeoJson file produced by the
+make_data_extracts program.
+This program is also used by
+FMTM , so you can use those as
+well. Obviously using postgres locally is much faster, especially for
+large areas.
+Setting Up Postgres
+For raw OSM data, the existing country data is downloaded from GeoFabrik , and imported using a
+modified schema for osm2pgsql.
+osm2pgsql --create -d nepal --extra-attributes --output=flex --style raw.lua nepal-latest-internal.osm.pbf
+
+The raw.lua script is available
+here . It's
+part of the Underpass
+project . It uses a
+more compressed and efficient data schema designed for data analysis.
+Once the data is imported, do this to improve query performance.
+cluster ways_poly using ways_poly_geom_idx;
+create index on ways_poly using gin(tags);
+
+The existing OSM database schema stores some tags in columns, and
+other tags in a hstore column. Much of this is historical. But this
+also makes it very complicated to query the database, as you need to
+know what is a column, and what is in the hstore column. The raw.lua
+schema is much more compact, as everything is in a single column.
+Using Postgres
+If you use the
+OdkParsers
+program, you don't have to deal with accessing the database directly,
+but here's how if you want to.
+This would find all of the tags for a hotel:
+SELECT osm_id, tags FROM nodes WHERE tags->>'amenity'='hotel'
+
+If you want to get more fancy, you can also use the geometry in the
+query. From python we set up a few values for the query. Note the
+::geography cast, which makes distances use meters instead of
+degrees. Meters are easier to work with than fractions of the
+planet's circumference.
+from shapely.geometry import shape
+self.tolerance = 2
+wkt = shape({"type": "Point", "coordinates": [-107.911957, 40.037573]})
+value = "Meeker Hotel"
+query = f"SELECT osm_id,tags,version,ST_AsText(ST_Centroid(geom)) FROM ways_poly WHERE ST_Distance(geom::geography, ST_GeogFromText('SRID=4326;{wkt.wkt}')) < {self.tolerance} AND levenshtein(tags->>'name', '{value}') <= 1"
+This query finds any building polygon within 2 meters where the name
+matches. The levenshtein function does a fuzzy string match, since
+minor differences in the name can still be a match. Both the ODK
+collected data and OSM often contain minor typos.
+Using a GeoJson File
+Using a data file works much the same way, except you can't really
+query the data file directly. Instead the entire data file is loaded
+into a data structure so it can be queried by looping through all the
+data. While not very efficient, it works well.
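+That brute-force loop can be sketched as follows, using a haversine
+distance so the threshold is in meters. The feature layout is
+illustrative, not the project's actual data structures:
+
```python
import math

def haversine_m(lon1, lat1, lon2, lat2):
    """Great-circle distance between two WGS84 points, in meters."""
    r = 6371000.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = p2 - p1
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def find_nearby(poi, features, tolerance=2.0):
    """Return features within tolerance meters that also share a tag/value pair."""
    lon, lat = poi["geometry"]["coordinates"]
    hits = []
    for feat in features:
        flon, flat = feat["geometry"]["coordinates"]
        if haversine_m(lon, lat, flon, flat) > tolerance:
            continue
        # a possible duplicate must match at least one tag and value
        if set(poi["properties"].items()) & set(feat["properties"].items()):
            hits.append(feat)
    return hits
```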
+Conflating The Data
+All data collected using Collect is a node, but we also want to check
+both nodes and ways. Many amenities in OSM are only a node, since
+adding data with a mobile app, POIs is all they support. Any data
+added by JOSM or the
+iD editors is often a
+polygon. Many buildings have been added to OSM using the HOT Tasking
+Manager , and were traced from satellite
+imagery.
+Buildings traced from imagery have only a single tag, which is
+building=yes . When field mapping, we now know that building is a
+restaurant, a medical clinic, or a residence. OSM guidelines
+prefer that the tags go on the building polygon, and not on a separate POI
+within the building. If there are multiple businesses in the same
+building polygon, then they stay as POIs in the building.
+Conflating With Postgres
+Since the database has 2 tables, one for nodes and the other for
+polygons, we have to query both. A possible duplicate is one that is
+within the desired distance and has a match in one of the tags and
+values. Names are fuzzy matched to handle minor spelling differences.
+The nodes table is queried first. If no possible duplicates are
+found, then the ways table is queried next. The query just looks
+for any nearby POI that has a match between any of the tags. Currently
+the distance is set to 2 meters. Often the GPS coordinates from
+Collect are where you are standing, usually in front of the building.
+This distance threshold is configurable, but if it's too large, you
+get many false positives. As all mobile mapping apps only add a POI
+for an amenity, it's common for it to be in the nodes table.
+If nothing is found in the nodes table, then we check the polygons the
+same way, distance and a tag match. Often people working on a desktop
+or laptop may add more tags to an existing feature, and properly have
+all the tags be in the building way, and not a POI within the
+building. If there are multiple small businesses in the same building,
+then each remains a POI within the building polygon.
+If a possible duplicate is found, the tags from the collected data and
+the tags from OSM are merged together. In the case of the name tag,
+the existing name is converted to an old_name tag, and the
+collected name value is used for the name tag.
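+That merge step can be sketched as below. This is a simplified
+illustration; the real conflation code also fuzzy-matches the names:
+
```python
def merge_tags(osm_tags: dict, collected: dict) -> dict:
    """Merge field-collected tags into existing OSM tags.

    The collected name wins; any differing OSM name is preserved as
    old_name, and a fixme tag flags the feature for a human validator.
    """
    merged = dict(osm_tags)
    if "name" in collected and osm_tags.get("name") not in (None, collected["name"]):
        merged["old_name"] = osm_tags["name"]
    merged.update(collected)
    merged["fixme"] = "Probable duplicate, please check!"
    return merged
```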
+Conflating with a GeoJson File
+Since GeoJson supports multiple geometry types, unlike postgres, there
+is only one set of data to compare against. The same process as used
+for postgres is used for the data file, the only difference being the
+data file is loaded into a data structure, and then has to loop
+through all the existing features. This is slower than using postgres,
+but works the same. One advantage is this can use the data extract from
+FMTM, and not require the mapper to have a postgres database.
+String Matching
+There are more spelling mistakes, weird capitalization, embedded
+quotes, etc... in the values for the name tag than I can count. This
+makes matching on the name somewhat complicated even when using fuzzy
+string matching. Typing in names on one's smartphone also can add
+typos or do auto-correction. And of course those mistakes may also
+already be in OSM, and the feature you collected may be the correct
+one.
+For a potential match, the old value is placed in an old_name tag, in
+addition to the fixme tag used to flag a possible
+duplicate. This enables the validator to decide and fix any minor
+differences in the value. This mostly only applies to the name
+tag, as most other tags have a more formalized value.
+When an amenity has changed names, for example when a restaurant gets
+new owners, this won't likely be caught as a duplicate unless the
+amenity tag values match.
+Validating The Results
+Conflation does not generate perfect results. It's necessary to have a
+validator go through the results and decide. The output file from
+conflation does not remove anything from the collected data. Instead it
+adds custom tags to flag what it finds. This way the validator can search
+for those tags when getting started, delete the duplicates, and
+validate the tag merging.
+The primary tag added is a fixme tag for possible duplicates. If
+there is more than a trivial difference in the string values used for
+the name tag, the existing tag is renamed to old_name . While
+this is not an actual OSM tag, the alt_name tag is currently used
+to avoid conflicts. It's up to the validator to decide what the
+appropriate value is.
+I often notice when collecting data in the field on my smartphone that
+typos are common, such as missing capitalization in names or the
+occasional wrong character.
+Here's a simple chart of the conversion Data Flow .
+
+
+
+
+
+ Last update:
+ October 18, 2024
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
\ No newline at end of file
diff --git a/about/conversion.odg b/about/conversion.odg
new file mode 100644
index 000000000..4db38cc67
Binary files /dev/null and b/about/conversion.odg differ
diff --git a/about/conversion.svg b/about/conversion.svg
new file mode 100644
index 000000000..0b666e9e5
--- /dev/null
+++ b/about/conversion.svg
@@ -0,0 +1,304 @@
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+ Conversion Workflow
+
+
+
+
+
+
+
+ Download submission
+
+
+
+
+
+
+
+ in convert section
+
+
+
+
+
+
+
+ In ignore section
+
+
+
+
+
+
+
+ in private section
+
+
+
+
+
+
+
+ Write to GeoJson file
+
+
+
+
+
+
+
+ Write to OSM XML file
+
+
+
+
+
+
+
+ Read line from file
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+ yes
+
+
+
+
+
+ no
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+ If tag matches
+
+
+
+
+
+ yes
+
+
+
+
+
+ no
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+ Get tag values
+
+
+
+
+
+
+
+
+
+
+
+ yes
+
+
+
+
+
+
+
+
+
+
+
+ no
+
+
+
+
+
+ yes
+
+
+
+
+
+
+
+ Convert tag and value
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
\ No newline at end of file
diff --git a/about/convert/index.html b/about/convert/index.html
new file mode 100644
index 000000000..6bd4c6732
--- /dev/null
+++ b/about/convert/index.html
@@ -0,0 +1,1265 @@
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+ Convert.py - osm-fieldwork
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+Convert.py
+The convert.py module is part of the osm_fieldwork package and
+provides functionality for converting ODK forms to OSM XML using a
+YAML configuration file.
+Even if an XLSForm is carefully designed to have a one to one match
+between ODK and OSM , this is not
+always possible, as not all survey questions are for OSM.
+The Config File Sections
+There are several sections in the config file. The default one is
+called xforms.yaml , and is included in the sources and the python
+package. It is possible to use a different config file.
+convert
+This section supports one to one conversion of tags, as well as one to
+many. This example shows all possible conversion types. The simple
+ones like altitude just change the tag, and the value is used
+unchanged. A more complicated conversion changes the value in
+addition to the tag. Anything with an equals sign is split into the
+appropriate tag and value for OSM. The final one is where a single
+survey question creates multiple tag and value pairs, delimited by
+a comma. Each of the pairs is handled as a separate tag and value in
+OSM.
+convert:
+  - latitude: lat
+  - longitude: lon
+  - altitude: ele
+  - cemetery_services:
+    - cemetery: amenity=grave_yard
+    - cremation: amenity=crematorium
+  - amenity:
+    - coffee: amenity=cafe,cuisine=coffee_shop
+...
+private
+Not all collected data is suitable for OSM. This may include data that
+has no equivalent tag in OSM, or personal data.
+private:
+ - income
+ - age
+ - gender
+ - education
+
+ignore
+ODK supports many tags useful only internally. These go into the
+ignore section of the config file. Any tag in this section gets
+removed from all output files. An example would be this:
+ignore:
+ - attachmentsexpected
+ - attachmentspresent
+ - reviewstate
+ - edits
+ - gps_type
+ - accuracy
+ - deviceid
+...
+
+multiple
+Not all survey questions have a single answer. Anything using
+select_multiple may have more than one value. As the default
+assumes one answer per question, this section specifies the questions
+with multiple answers, since they have to be processed separately. The
+normal conversion process is applied to these too.
+multiple:
+ - healthcare
+ - amenity_type
+ - specialty
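+A select_multiple answer arrives from ODK as a single space-separated
+string, so each value is split out and then run through the normal
+conversion on its own. A minimal sketch of that split, assuming
+space-separated input:
+
```python
def expand_multiple(question: str, raw: str) -> list:
    """Split a space-separated select_multiple answer into (question, answer)
    pairs, so each one can be converted separately."""
    return [(question, answer) for answer in raw.split()]
```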
+
+
+
+
+
+
+ Last update:
+ October 18, 2024
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
\ No newline at end of file
diff --git a/about/externaldata/index.html b/about/externaldata/index.html
new file mode 100644
index 000000000..0140717c2
--- /dev/null
+++ b/about/externaldata/index.html
@@ -0,0 +1,1633 @@
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+ External Data - osm-fieldwork
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+Dealing with External Data in ODK
+External Datasets
+ODK Collect has recently gained the ability
+to load an external data file in GeoJson format of existing data. It's
+now possible to select existing data and then import its values into
+the XForm as default values. This lets the mapper use the XForm to
+change the existing data, or add to it. Any changes will need to be
+conflated later, that'll be another document.
+Why do I want to use ODK Collect to edit map data? Much of what is
+currently in OpenStreetMap is lacking
+metadata, or the data has changed, like when a restaurant changes
+owners and gets a new name. Also, almost all remote mapping using
+satellite imagery lacks tags beyond building=yes . When we are doing
+a ground data collection project, we want to add useful tags like the
+building material, or whether it's a cafe or a hospital.
+Old imports also bring in problems, for example the infamous TIGER
+import . Mappers have been
+cleaning up that mess in North America for over a decade. But an old
+import may have a weird value for an OSM tag, and it's usually better
+to update to a more community approved data model. The beauty and the
+curse of OSM data is its wonderful flexibility. People do invent new
+tags for a specific mapping campaign or import that doesn't get
+reviewed. Sometimes values have embedded quote marks or weird
+capitalization, and other strange formatting worth correcting.
+Creating the GeoJson file
+When working with OSM data, there are multiple sources to obtain the
+required data. One option is to download a daily database dump from
+GeoFabrik , which can be
+used as a flat file or imported into a database. The Humanitarian
+OpenStreetMap Team(HOT) maintains two
+projects that can also be used for data extracts. The primary one has
+a web based user interface, and is called the HOT Export
+Tool . The other option runs in a
+terminal and is part of OSM
+Fieldwork project, and
+is also used as part of the Field Mapping Tasking
+Manager backend. That program is
+available
+here . Alternatively,
+Overpass Turbo can also be used to query
+the data but you have to understand the Overpass query syntax.
+It's important to keep in mind that there is a translation between
+the column names obtained from querying the data and how ODK Collect
+views it. There is also a translation from ODK to OSM, and it's
+important to ensure that all translations work together seamlessly for
+a smooth data flow. To maintain clarity, it's best to keep all tags
+and values as similar as possible, with unique names. When using
+ogr2ogr for data extraction
+from a Postgres database, there is more control than when using
+Overpass, and larger datasets can be processed. You can clean up all
+the tag names later if you add a custom config file for the
+conversion.
+As the GeoJson file gets turned into an XPATH component when
+converted to an XForm, the actual filename without the suffix becomes
+a node in the XPATH, so you can't have a survey question using the
+same name as the filename. It is preferred to name the survey question
+after the actual OSM tag instead of the file. If you get this error, you need to
+rename the GeoJson file.
+Duplicate type: 'choice', Duplicate URI: 'None', Duplicate context: 'survey'.
+
+Naming Conflicts
+If you do want to use an OSM tag name in a calculate field, a
+technique for maintaining consistency is to prefix an x to the start
+of each column name, so healthcare becomes xhealthcare . Then, in
+the XLSForm, healthcare can be used for the instance, and
+xhealthcare can be used for the value in the calculation column in
+the survey sheet. The name column in the survey sheet can then be just
+healthcare , which will translate directly into its OSM tag
+equivalent. For this example note that the GeoJson file must not be
+named healthcare.geojson, because it'll conflict with
+_healthcare . You can also avoid this by having the calculation in
+the same row as the survey question and avoiding the variable. If you
+do that, add a trigger for the geojson file, and it'll populate the
+default value for the question.
+It's possible to support almost any value using a text type in the
+XLSForm, but it's better to have an approved value for tag validation
+and completeness. If using a data model, the list of choices for a tag
+is defined, and anything outside of that will cause an
+error. Therefore, it's important to adhere to validated data models to
+avoid introducing errors or inconsistencies into the dataset. If the
+SQL query returns columns that aren't compatible with the XLSForm,
+XPATH errors will occur in ODK Collect.
+Something else to consider is the frequency of the tags and
+values. Since almost anything can be added to OSM, there are a lot of
+obscure ones. It's strongly suggested to use more common tags and
+values. A resource for this is the
+Taginfo website, which lists
+everything ever used in OSM. There are two columns of interest: one
+is whether the tag is on the OSM
+wiki , and the other
+is how many times that tag is used. I try to use tags that are on the
+wiki whenever possible, or at least have high frequency counts.
+Data filtering
+For the external file to load properly in ODK Collect, any tags and
+values in the data extract must be in the choices sheet. Otherwise ODK
+Collect will fail to launch. The OSM
+Fieldwork project has a
+utility
+program
+which can be imported into other python programs that scans the
+XLSForm choices sheet, and removes anything in the data extract that
+isn't supported as a choice.
+Debugging select_from file with GeoJson
+Debugging complex interactions between the XLSForm, external data
+files, and ODK Collect can be a challenging task. Here are a few
+tricks to help debug what is going on. I strongly suggest developing
+your XLSForm initially without the data extracts. That way you can use
+Enketo , which you can access using the
+Preview button in ODK Central. Get all the survey questions,
+grouping, conditional, etc... so it's easy to test with
+Enketo. Enketo doesn't work with the GeoJson based data extract. Then
+add the data extract, and the calculation column entries to use the
+OSM data to set the survey question default value.
+Disable the map appearance
+When working with external data, the map value in the appearance
+column of the survey sheet is often used. However, this can slow down
+the debugging process. To make it more efficient, you can turn off the
+map values and use the select menu instead. That works especially well
+if you have a small data file for testing, because then it's easy to
+cycle between them.
+To use the placement map, here's an example.
+
+
+
+type
+name
+label
+appearance
+
+
+
+
+select_one_from_file camp_sites.geojson
+existing
+Existing Campsites
+map
+
+
+
+And an example where the values in the data file are an inline select
+menu instead.
+
+
+
+type
+name
+label
+appearance
+
+
+
+
+select_one_from_file camp_sites.geojson
+existing
+Existing Campsites
+minimal
+
+
+
+Display calculated values
+Often the bug you are trying to find is obscure, and you may not see
+any of the data file values being propagated into ODK Collect, even if
+it was working previously. In such cases, you can add a text survey
+question to display any of the values. Here's an example:
+
+
+
+type
+name
+label
+calculation
+trigger
+
+
+
+
+calculate
+xid
+OSM ID
+instance(“camp_sites”)/root/item[id=${existing}]/id
+
+
+
+calculate
+xlabel
+Get the label
+instance(“camp_sites”)/root/item[id=${existing}]/title
+
+
+
+calculate
+xref
+Reference number
+instance(“camp_sites”)/root/item[id=${existing}]/ref
+
+
+
+calculate
+xlocation
+Location
+instance(“camp_sites”)/root/item[id=${existing}]/geometry
+
+
+
+calculate
+xtourism
+camping type
+instance(“camp_sites”)/root/item[id=${existing}]/tourism
+
+
+
+calculate
+xleisure
+leisure type
+instance(“camp_sites”)/root/item[id=${existing}]/leisure
+
+
+
+text
+debug1
+Leisure
+${xleisure}
+${existing}
+
+
+text
+debug2
+OSM ID
+${xid }
+${existing}
+
+
+text
+debug3
+Ref number
+${xref }
+${existing}
+
+
+text
+debug4
+Tourism
+${xtourism }
+${existing}
+
+
+
+
+
+
+
+
+
+text
+name
+Business Name
+${xlabel }
+${existing}
+
+
+
+For a value that is only used once to set the default value in
+Collect, you can also reference it in the same row. This saves
+potential naming conflicts, which is why I use an x prefix for
+global values.
+
+
+
+type
+name
+label
+calculation
+trigger
+
+
+
+
+text
+name
+Business Name
+instance(“camp_sites”)/root/item[id=${existing}]/name
+${existing}
+
+
+
+Error Dialog
+Assuming xls2xform is happy, sometimes you get this error message in
+ODK Collect when switching screens. You'll see this when you have a
+value in your data file for a select_one survey question, but that
+value is not in the list of values in the choices sheet for that tag. In
+this example, there is no doctor value in the healthcare
+selection in the choices sheet. If you use the data filtering utility
+program mentioned above, you'll never see this error.
+
+
+
+
+
+
+ Last update:
+ October 18, 2024
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
\ No newline at end of file
diff --git a/about/make_data_extract/index.html b/about/make_data_extract/index.html
new file mode 100644
index 000000000..c6f8d97ac
--- /dev/null
+++ b/about/make_data_extract/index.html
@@ -0,0 +1,1432 @@
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+ make_data_extract - osm-fieldwork
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+ make_data_extract
+
+
+The make_data_extract.py program is used to extract OpenStreetMap
+(OSM) data for use with the select_one_from_file function in ODK
+Collect. This function allows users to select from a list of options
+generated from an external file. The make_data_extract.py program
+creates a data extract that can be used as an external file with the
+XLSForm. The data extract can be created using a local Postgres
+database, or the remote Underpass database.
+To use the new select_one_from_file for editing existing OSM data you
+need to produce a data extract from OSM. This can be done several
+ways, but it needed to be automated for use in FMTM.
+options:
+ --help (-h) show this help message and exit
+ --verbose (-v) verbose output
+ --geojson (-g) GEOJSON Name of the GeoJson output file
+ --boundary (-b) BOUNDARY Boundary polygon to limit the data size
+ --category (-c) CATEGORY Which category to extract
+ --uri (-u) URI Database URI
+ --xlsfile (-x) XLSFILE An XLSForm in the library
+ --list (-l) List List all XLSForms in the library
+
+Examples
+make_data_extract uses a Postgres database to extract OSM data. By
+default, the program uses localhost as the database host. If you
+use underpass as the database name, this will remotely access the
+Humanitarian OpenStreetMap Team (HOT)
+maintained OSM database that covers the entire planet, and is updated
+every minute. The name of the database can be specified using the
+--uri option. The program extracts the buildings category of OSM
+data by default. The size of the extracted data can be limited using
+the --boundary option. The program outputs the data in GeoJSON
+format.
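The boundary-limited category query described above can be sketched in Python. This is only an illustration of the kind of PostGIS query involved; the table and column names (ways_poly, tags, geom) are assumptions loosely based on the Underpass "raw" schema and may differ from what make_data_extract.py actually issues.

```python
# Sketch of a PostGIS query selecting one OSM category inside a boundary.
# Table/column names are hypothetical; the "?" operator tests key existence
# on an hstore/jsonb tags column.

def build_extract_query(category: str, boundary_wkt: str) -> str:
    """Build SQL selecting features of one OSM category that fall
    inside a boundary polygon given as WKT in EPSG:4326."""
    return (
        "SELECT osm_id, tags, ST_AsGeoJSON(geom) AS geometry "
        "FROM ways_poly "
        f"WHERE tags ? '{category}' "
        f"AND ST_Intersects(geom, ST_GeomFromText('{boundary_wkt}', 4326))"
    )

query = build_extract_query("building", "POLYGON((0 0, 0 1, 1 1, 1 0, 0 0))")
```

In a real program the category and boundary would be passed as bound parameters rather than interpolated into the SQL string.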
+For raw OSM data, the existing country data is downloaded from
+GeoFabrik, and imported using a modified schema for osm2pgsql. First
+create the database and install two Postgres extensions:
+createdb nigeria
+psql -d nigeria -c "CREATE EXTENSION postgis"
+psql -d nigeria -c "CREATE EXTENSION hstore"
+
+And then import the OSM data.
+
+osm2pgsql --create -d nigeria --extra-attributes --output=flex --style raw.lua nigeria-latest-internal.osm.pbf
+
+The raw.lua script is available here. It's part of the Underpass
+project. It uses a more compressed and efficient data schema.
+Example
+./make_data_extract.py -u colorado --boundary mycounty.geojson -g mycounty_buildings.geojson
+
+This example extracts the buildings category of OSM data from a
+Postgres database named colorado. The program limits the size of the
+extracted data to the boundary specified in the mycounty.geojson
+file. The program outputs the data in GeoJSON format to a file named
+mycounty_buildings.geojson.
+Boundary
+The --boundary option can be used to specify a polygon boundary to
+limit the size of the extracted data. The boundary has to be in
+GeoJSON format; both multipolygons and polygons are supported.
+Example:
+./make_data_extract.py -u foo@colorado --category healthcare --boundary mycounty.geojson -g mycounty_healthcare.geojson
+
+This example extracts the healthcare category of OSM data from a
+Postgres database named colorado with the user foo. The program
+limits the size of the extracted data to the boundary specified in the
+mycounty.geojson file. The program outputs the data in GeoJSON
+format to a file named mycounty_healthcare.geojson.
+Category
+The --category option can be used to specify which category of OSM
+data to extract. The program supports any category in the xlsform
+library.
+Example
+./make_data_extract.py -u underpass --boundary mycounty.geojson --category amenities -g mycounty_amenities.geojson
+
+This example uses the remote Underpass database to extract the
+amenities category of OSM data within the boundary specified in the
+mycounty.geojson file. The program outputs the data in GeoJSON format
+to a file named mycounty_amenities.geojson.
+
+The program outputs the extracted OSM data in GeoJSON format. The name
+of the output file can be specified using the --geojson option. If
+the option is not specified, the program uses the input file name with
+_buildings.geojson appended to it.
+./make_data_extract.py -u colorado --boundary mycounty.geojson -g mycounty_buildings.geojson
+
+
+ODK has 3 file formats. The primary one is the source file,
+which is in XLSX format, and follows the XLSForm specification. This
+file is edited using a spreadsheet program, and converted using the
+xls2xform program. That conversion produces an ODK XML file. That file
+is used by ODK Collect to create the input forms for the mobile
+app. When using ODK Collect, the output file is another XML format,
+unique to ODK Collect. These are the data collection instances.
+The ODK server, ODK Central, supports the downloading of XForms to the
+mobile app, and also supports downloading the collected data. The only
+output format is CSV.
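The CSV export above is a plain submissions table, one row per submission, with columns named after the XLSForm fields. As a minimal sketch (the column names here are illustrative, not a fixed ODK schema), it can be read with the standard library:

```python
import csv
import io

# Illustrative ODK Central CSV export: one row per submission.
# Column names come from the XLSForm, so they vary per form.
sample = io.StringIO(
    "SubmissionDate,building_name,building_levels\n"
    "2024-01-26T15:16:53,Town Hall,2\n"
)

submissions = list(csv.DictReader(sample))
print(submissions[0]["building_name"])  # Town Hall
```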
+
+
+
+
+
+ Last update:
+ October 18, 2024
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
\ No newline at end of file
diff --git a/about/odk-entities/index.html b/about/odk-entities/index.html
new file mode 100644
index 000000000..3c4a7ef1b
--- /dev/null
+++ b/about/odk-entities/index.html
@@ -0,0 +1,1348 @@
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+ ODK Entities - osm-fieldwork
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+ODK Entities
+Entities are a concept introduced into ODK around 2022. The basic goal
+of them is to allow updating of data using ODK, specifically by revisiting
+specific things (features, people, whatever) and adding new data attached
+to the same entity (data people refer to this as "longitudinal data"). For
+example, revisiting the same patient in a clinical study, such that the
+patient ID is constant and the new data from each visit is referenced to
+the same person. For field mapping, the use-case is obvious: visit a
+previously digitized building and add data from in-person observation or
+surveying the people in it.
+As of time of writing (March 2024), Entities in ODK are working and more
+or less implemented, but not yet in wide use or well debugged.
+Use of Entities with FMTM
+In osm-fieldwork / FMTM, we want to:
+
+Be aware of features that have already been mapped
+the ODK map views don't yet support styling features differently
+ based on attributes like "already_field_mapped", so we're actually
+ working on a navigation app within FMTM external to ODK, but we still
+ hope that ODK Collect may support styling in the future, which will
+ almost certainly be based on Entity attributes.
+
+
+Pre-fill questions in the form when features already have data attached
+ to them (for example, if a building already has a "name" tag, the field
+ mapper should see this pre-filled in the form, to be overwritten if wrong
+ but otherwise swiped past).
+
+The ODK core development team has strongly suggested that the FMTM team
+use the Entities to achieve the above goals.
+Workflow Using Entities
+
+UPDATE 29/07/2023 ODK Central now supports creating the Entity List
+/ Dataset via API instead of registration form.
+
+The basic workflow would probably resemble:
+
+Create an Entity registration form.
+In standard ODK settings, this simply means adding an entities
+ tab to the XLSForm (as per the example Entities form created by the ODK team).
+ This seems to create what ODK refers to as a Dataset (in developer-facing
+ documentation only; they avoid this word in user-facing documentation).
+
+
+Upload the Entities via the API:
+There is currently no way to bulk upload Entities to a Dataset via the API.
+Instead we must upload each Entity individually, including a geometry field
+ in JavaRosa geometry format.
+At the time of writing, FMTM creates geography attachments in GeoJSON,
+ which does not work for Entities, which require a CSV file with a
+ geography column (which must be in JavaRosa format, which no self-respecting
+ GIS utility can export).
+
+
+Create a form for the data collection, referencing the {dataset_name}.csv
+ for the select_one_from_file field.
+Dynamically insert the task_id field on the choices sheet, with name
+ and label set to the task id (for filtering later).
+Create a field task_id in the survey, prior to the select_one_from_file field.
+Set the choice_filter column to task_id=${task_id}, which links the selected task
+ ID in the survey with the task_id field in the Entities (used for filtering).
+Load the form via intent, with task_id and entity_id fields in query params.
+On form completion we can update the Entity label and fields.
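The GeoJSON-to-JavaRosa conversion mentioned in the workflow above can be sketched as follows. This is an assumption-laden illustration: it treats JavaRosa geometry as space-separated "lat lon altitude accuracy" tuples, with polygon vertices joined by semicolons, and defaults altitude and accuracy to 0.0.

```python
def geojson_to_javarosa(geometry: dict) -> str:
    """Convert a GeoJSON Point or Polygon to a JavaRosa geometry
    string ("lat lon alt accuracy"; polygon vertices separated by
    semicolons). GeoJSON stores [lon, lat], JavaRosa wants lat first."""
    if geometry["type"] == "Point":
        lon, lat = geometry["coordinates"][:2]
        return f"{lat} {lon} 0.0 0.0"
    if geometry["type"] == "Polygon":
        ring = geometry["coordinates"][0]  # exterior ring only
        return ";".join(f"{lat} {lon} 0.0 0.0" for lon, lat in ring)
    raise ValueError(f"unsupported geometry: {geometry['type']}")

print(geojson_to_javarosa({"type": "Point", "coordinates": [85.3, 27.7]}))
# 27.7 85.3 0.0 0.0
```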
+
+
+Resources
+
+
+
+
+
+
+ Last update:
+ October 18, 2024
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
\ No newline at end of file
diff --git a/about/odk2csv/index.html b/about/odk2csv/index.html
new file mode 100644
index 000000000..f507300f1
--- /dev/null
+++ b/about/odk2csv/index.html
@@ -0,0 +1,1170 @@
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+ odk2csv - osm-fieldwork
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+odk2csv
+Convert ODK XML instance file to CSV format
+usage: odk2csv [-h] [-v [VERBOSE]] -i INSTANCE
+
+options:
+ -h, --help show this help message and exit
+ -v [VERBOSE], --verbose [VERBOSE]
+ verbose output
+ -i INSTANCE, --instance INSTANCE
+ The instance file(s) from ODK Collect
+
+
+
+
+
+
+ Last update:
+ October 18, 2024
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
\ No newline at end of file
diff --git a/about/odk2geojson/index.html b/about/odk2geojson/index.html
new file mode 100644
index 000000000..25339ae77
--- /dev/null
+++ b/about/odk2geojson/index.html
@@ -0,0 +1,1172 @@
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+ odk2geojson - osm-fieldwork
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+odk2geojson
+Convert ODK XML instance file to GeoJson
+usage: odk2geojson [-h] [-v [VERBOSE]] -i INSTANCE [-o OUTFILE]
+
+options:
+ -h, --help show this help message and exit
+ -v [VERBOSE], --verbose [VERBOSE]
+ verbose output
+ -i INSTANCE, --instance INSTANCE
+ The instance file(s) from ODK Collect
+ -o OUTFILE, --outfile OUTFILE
+ The output file for JOSM
+
+
+
+
+
+
+ Last update:
+ October 18, 2024
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
\ No newline at end of file
diff --git a/about/odk2osm/index.html b/about/odk2osm/index.html
new file mode 100644
index 000000000..7d0759cfb
--- /dev/null
+++ b/about/odk2osm/index.html
@@ -0,0 +1,1230 @@
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+ odk2osm - osm-fieldwork
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+odk2osm
+This program reads the various data formats used with
+OpenDataKit,
+and converts them to an OSM XML file and a GeoJson file. The two
+formats loaded from ODK
+Central are a CSV file
+downloaded from the submissions page, or a JSON file downloaded using
+ODATA.
+In addition, for working offline, this can also parse the ODK XML
+format used for the instance files in ODK
+Collect. When working in the field, I use
+adb to pull files off my
+smartphone using a USB cable.
+The options for this program are:
+Convert ODK XML instance file to OSM XML format
+options:
+ -h, --help show this help message and exit
+ -v [VERBOSE], --verbose [VERBOSE]
+ verbose output
+ -y YAML, --yaml YAML Alternate YAML file
+ -x XLSFILE, --xlsfile XLSFILE
+ Source XLSFile
+ -i INFILE, --infile INFILE
+ The input file
+By default, the xforms.yaml file is used when converting to OSM XML
+format. Using the --yaml option allows you to have a custom conversion
+of the data collected by your XLSForm. This only applies to the OSM
+XML output; when processing the GeoJson file, no conversion is done.
+The input file is the CSV or JSON file downloaded from ODK
+Central. ODK Collect stores the instance files in a collection of
+sub-directories that are timestamped and have a unique instance number
+as part of the file name. The primary part of the filename is the same
+as the title of the XLSForm.
+For example:
+instances/Buildings_3_2024-05-28_18-34-38/Buildings_3_2024-05-28_18-34-38.xml
+ instances/Buildings_2_2024-01-24_13-36-20/Buildings_2_2024-01-24_13-36-20.xml
+ instances/Buildings_3_2024-05-31_11-08-22/Buildings_3_2024-05-31_11-08-22.xml
+ instances/Buildings_2_2024-01-26_15-16-53/Buildings_2_2024-01-26_15-16-53.xml
+ instances/Buildings_2_2024-01-26_15-07-17/Buildings_2_2024-01-26_15-07-17.xml
+ instances/Buildings_3_2024-05-29_11-46-53/Buildings_3_2024-05-29_11-46-53.xml
+ instances/Buildings_3_2024-06-03_11-14-02/Buildings_3_2024-06-03_11-14-02.xml
+ instances/Buildings_3_2024-06-03_10-33-27/Buildings_3_2024-06-03_10-33-27.xml
+ instances/Buildings_2_2024-01-26_11-42-38/Buildings_2_2024-01-26_11-42-38.xml
+ instances/Buildings_3_2024-05-29_12-13-37/Buildings_3_2024-05-29_12-13-37.xml
+ ...
+In this case the parameter passed to odk2osm can contain a regular
+expression to process multiple files, as each time you open Collect,
+it creates a new directory and file. The output is a single file.
+So in this case, run odk2osm like this:
+odk2osm -v -i Buildings_3* -x Buildings.xls
+The --xlsfile option is used to specify the XLSForm that was used for this
+mapping session. This is used to supply the correct data type of each
+entry collected.
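The shell wildcard above (Buildings_3*) can also be expressed in Python. This is a hypothetical sketch, not odk2osm's actual implementation; it only assumes Collect's instances/<Form>_<stamp>/<Form>_<stamp>.xml directory layout shown above.

```python
from pathlib import Path

# Gather every ODK Collect instance file whose name starts with one
# form prefix, across all of Collect's per-session subdirectories.
def find_instances(instances_dir: str, prefix: str) -> list:
    root = Path(instances_dir)
    return sorted(root.glob(f"{prefix}_*/{prefix}_*.xml"))
```

Called as find_instances("instances", "Buildings_3"), this returns the same set of files the Buildings_3* wildcard matches.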
+
+
+
+
+
+ Last update:
+ October 18, 2024
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
\ No newline at end of file
diff --git a/about/odk_client/index.html b/about/odk_client/index.html
new file mode 100644
index 000000000..272afa70c
--- /dev/null
+++ b/about/odk_client/index.html
@@ -0,0 +1,1743 @@
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+ ODK Client - osm-fieldwork
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+ODK Client
+odk_client.py is a command line utility for interacting with the ODK Central server. It
+exposes many of the REST API calls supported by the server and allows users to perform various tasks, such as uploading and downloading attachments and submissions.
+Usage
+[-h] [-v] [-s {projects,users,delete}] [-p {forms,app-users,assignments,delete}] [-i ID] [-f FORM] [-u UUID]
+[-x {attachments,csv,submissions,upload,download,create,assignments,delete,publish}] [-a {create,delete,update,qrcode,access}] [-d DATA] [-t TIMESTAMP]
+[-b {qrcodes,update}]
+command line client for ODK Central
+Options
+ -h, --help show this help message and exit
+ -v, --verbose verbose output
+ -s {projects,users,delete}, --server {projects,users,delete}
+ project operations
+ -p {forms,app-users,assignments,delete}, --project {forms,app-users,assignments,delete}
+ project operations
+ -i ID, --id ID Project ID number
+ -f FORM, --form FORM XForm name
+ -u UUID, --uuid UUID Submission UUID, needed to download the data
+ -x {attachments,csv,submissions,upload,download,create,assignments,delete,publish}, --xform {attachments,csv,submissions,upload,download,create,assignments,delete,publish}
+ XForm ID for operations with data files
+ -a {create,delete,update,qrcode,access}, --appuser {create,delete,update,qrcode,access}
+ App-User operations
+ -d DATA, --data DATA Data files for upload or download
+ -t TIMESTAMP, --timestamp TIMESTAMP
+ Timestamp for submissions
+ -b {qrcodes,update}, --bulk {qrcodes,update}
+ Bulk operations
+
+Server requests
+Server requests allow users to access global data about projects and users.
+Usage
+The following server-specific commands are supported by ODK Client:
+
+Example usage
+ python odk_client.py --server projects
+
+
+Example usage
+ python odk_client.py --server users
+
+Project Requests
+Project requests allow users to access data for a specific project, such as XForms, attachments, and app users.
+Projects contain all the XForms and attachments for that project. To
+access the data for a project, it is necessary to supply the project
+ID. That can be retrieved using the above server command. In this
+example, 1 is used.
+Usage
+The following are the project-specific commands supported by ODK Client:
+
+
+--id <project_id> --project forms
+This command returns a list of all the XForms contained in the specified project. Replace "<project_id>" with the actual ID of the project you want to retrieve the forms for.
+
+
+Example usage
+ python odk_client.py --id 1 --project forms
+
+
+
+--id <project_id> --project app-users
+This command returns a list of all the app users who have access to the specified project. Replace "<project_id>" with the actual ID of the project you want to retrieve the list of app users for.
+
+
+Example usage
+ python odk_client.py --id 1 --project app-users
+
+Note: Replace "1" with the actual ID of the project you want to access.
+
+XForm requests allow users to access data for a specific XForm within a project, such as attachments, submissions, and CSV data.
+An XForm has several components. The primary one is the XForm
+description itself. In addition to that, there may be additional
+attachments, usually a CSV file of external data to be used by the
+XForm. If an XForm has been used to collect data, then it has
+submissions for that XForm. These can be downloaded as CSV files.
+To access the data for an XForm, it is necessary to supply the project
+ID and the XForm ID. The XForm ID can be retrieved using the above
+project command.
+Usage
+The following are the XForm-specific commands supported by ODK Client:
+
+
+--id <project_id> --form <form_id> --xform attachments
+This command returns a list of all the attachments for the specified XForm. Replace "<project_id>" with the actual ID of the project that contains the XForm, and "<form_id>" with the actual ID of the XForm you want to retrieve the attachments for.
+
+
+Example usage
+ python odk_client.py --id 1 --form 1 --xform attachments
+
+
+
+--id <project_id> --form <form_id> --xform download <attachment_1>,<attachment_2>,...
+This command downloads the specified attachments for the specified XForm. Replace "<project_id>" with the actual ID of the project that contains the XForm, "<form_id>" with the actual ID of the XForm you want to download the attachments for, and "<attachment_1>,<attachment_2>,..." with the actual names of the attachments you want to download.
+
+
+Example usage
+ python odk_client.py --id 1 --form 1 --xform download file1.csv,file2.pdf
+
+
+
+--id <project_id> --form <form_id> --xform submissions
+This command returns a list of all the submissions for the specified XForm. Replace "<project_id>" with the actual ID of the project that contains the XForm, and "<form_id>" with the actual ID of the XForm you want to retrieve the submissions for.
+
+
+Example usage
+ python odk_client.py --id 1 --form 1 --xform submissions
+
+
+
+--id <project_id> --form <form_id> --xform csv
+This command returns the data for the submissions for the specified XForm in CSV format. Replace "<project_id>" with the actual ID of the project that contains the XForm, and "<form_id>" with the actual ID of the XForm you want to retrieve the submission data for.
+
+
+Example usage
+ python odk_client.py --id 1 --form 1 --xform csv
+
+
+
+--id <project_id> --form <form_id> --xform upload <attachment_1>,<attachment_2>,...
+This command uploads the specified attachments for the specified XForm. Replace "<project_id>" with the actual ID of the project that contains the XForm, "<form_id>" with the actual ID of the XForm you want to upload the attachments for, and "<attachment_1>,<attachment_2>,..." with the actual names of the attachments you want to upload.
+
+
+Example usage
+ python odk_client.py --id 1 --form 1 --xform upload file1.csv,file2.pdf
+
+Note: Replace "1" with the actual IDs of the project and XForm you want to access.
+
+These two attachments are input for select_from_file in the survey
+sheet. For osm_fieldwork, they are usually a list of municipalities and
+towns.
+./osm_fieldwork/odk_client.py --id 4 --form waterpoints --xform create osm_fieldwork/xlsforms/waterpoints.xml osm_fieldwork/xlsforms/towns.csv osm_fieldwork/xlsforms/municipality.csv
+
+
+
+Create a new XForm using the ODK XLSForm syntax. You can use any tool that supports this syntax, such as ODK Build or Excel. Save the XLSForm file as "waterpoints.xml
".
+
+
+Next, prepare two CSV files: "towns.csv
" and "municipality.csv
". These CSV files should contain the list of municipalities and towns, respectively, that will be used as input for the "select_from_file
" function in the survey sheet.
+
+
+Once you have these files ready, use the osm-fieldwork tool to convert the XLSForm and CSV files into an ODK form. To do this, open a terminal or command prompt and navigate to the "osm-fieldwork
" directory. Then, run the following command:./osm-fieldwork/odk_client.py --id 4 --form waterpoints --xform create osm-fieldwork/xlsforms/waterpoints.xml osm-fieldwork/xlsforms/towns.csv osm-fieldwork/xlsforms/municipality.csv
+
+
+
+This command will create a new form with the ID "4" and the name "waterpoints", using the XLSForm file and the two CSV files as input. The resulting ODK form can be uploaded to an ODK server for use in data collection.
+Make sure to update the file paths in the command to match the actual location of your XLSForm and CSV files. Additionally, ensure that your CSV files are properly formatted according to the ODK specifications.
+Project Requests
+List all the projects on an ODK Central server
+./osm_fieldwork/odk_client.py --server projects
+
+Delete a project from ODK Central
+./osm_fieldwork/odk_client.py --server delete --id 2
+
+App-user Requests
+Create a new app-user for a project
+./osm_fieldwork/odk_client.py --appuser create --id 4 foobar
+
+Create a QR code for the app-user to access ODK Central
+./osm_fieldwork/odk_client.py -i 4 -f waterpoints -a qrcode -u 'jhAbIwHmYCBObnR45l!I3yi$LbCL$q$saJHkDvgwgtKs2F6sso3eepySJ5tyyyAX'
+
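The QR code generated above encodes ODK Collect settings as a zlib-compressed, base64-encoded JSON blob. A minimal sketch of building that payload is shown below; the server URL shape (/v1/key/<token>/projects/<id>) follows how Central exposes app-user access, but the host, token, and settings keys here are illustrative placeholders.

```python
import base64
import json
import zlib

# Build the string an ODK Collect configuration QR code encodes:
# base64( zlib( JSON settings ) ). Only a minimal "general" section
# is shown; real payloads may carry more settings.
def appuser_qr_payload(server: str, project_id: int, token: str) -> str:
    settings = {
        "general": {"server_url": f"{server}/v1/key/{token}/projects/{project_id}"},
        "admin": {},
    }
    raw = json.dumps(settings).encode()
    return base64.b64encode(zlib.compress(raw)).decode()
```

Feeding the returned string to any QR library then yields the PNG that Collect can scan.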
+Delete an app-user from a project
+./osm_fieldwork/odk_client.py --appuser delete --id 4 378
+
+List all app-users for a project
+./osm_fieldwork/odk_client.py --id 4 --project app-users
+
+Bulk operations
+Some commands require multiple queries to ODK Central. As FMTM creates
+many, many app-users and xforms, it's necessary to be able to clean up
+the database sometimes, rather than go through Central for hundreds, or
+thousands of app-users.
+Delete multiple app-users from a project
+./osm_fieldwork/odk_client.py --appuser delete --id 4 22-95
+
+Generate QRcodes for all registered app-users
+./osm_fieldwork/odk_client.py --id 4 --bulk qrcodes --form waterpoints
+
+which generates a png file for each app-user, limited to that
+project.
+
+
+
+
+
+ Last update:
+ October 18, 2024
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
\ No newline at end of file
diff --git a/about/osm2favorites/index.html b/about/osm2favorites/index.html
new file mode 100644
index 000000000..ca19bd5e2
--- /dev/null
+++ b/about/osm2favorites/index.html
@@ -0,0 +1,1188 @@
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+ osm2favorites.py - osm-fieldwork
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+osm2favorites.py
+This is a simple utility that generates a GPX file from OSM data or a
+GeoJson file for Osmand. This makes the data available under My
+Places in the Osmand menu. This is useful for a field mapping project
+that covers a large area, but with a few small areas of interest. This
+makes them all readily available for navigation. For some features
+this program also adds Osmand styling to change the displayed icons
+and colors.
+options
+-h, --help show this help message and exit
+-v, --verbose verbose output
+-i INFILE, --infile INFILE
+ The data extract
+
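A GPX favorites file of the kind described above is just XML with one wpt element per feature. The sketch below is a hypothetical minimal version (Osmand styling extensions omitted), not the program's actual output:

```python
import xml.etree.ElementTree as ET

# Emit a minimal GPX document: one <wpt> per point, which Osmand
# lists under My Places when imported as favorites.
def to_gpx(points: list) -> str:
    gpx = ET.Element("gpx", version="1.1", creator="osm2favorites-sketch")
    for pt in points:
        wpt = ET.SubElement(gpx, "wpt", lat=str(pt["lat"]), lon=str(pt["lon"]))
        ET.SubElement(wpt, "name").text = pt["name"]
    return ET.tostring(gpx, encoding="unicode")

print(to_gpx([{"lat": 27.7, "lon": 85.3, "name": "Clinic"}]))
```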
+
+
+
+
+
+ Last update:
+ October 18, 2024
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
\ No newline at end of file
diff --git a/about/osmfile/index.html b/about/osmfile/index.html
new file mode 100644
index 000000000..24867ed53
--- /dev/null
+++ b/about/osmfile/index.html
@@ -0,0 +1,1275 @@
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+ Osmfile.py - osm-fieldwork
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+Osmfile.py
+Osmfile.py is a Python module that provides functionality for writing
+OpenStreetMap (OSM) XML format output files. It is used as part of the
+osm-fieldwork toolset, and can be used as part of a larger Python
+application. Currently it is only used by the CSVDump.py program.
+When used, osmfile.py takes a Python data structure containing OSM
+data as input and generates an OSM XML format output file. The data
+structure consists of nested Python dictionaries and lists, with each
+dictionary representing an OSM node, way or relation, and each list
+representing a set of nodes, ways or relations.
+For example, consider the following Python data structure representing
+a single OSM node:
+node = {
+ 'id': 1234,
+ 'lat': 51.5074,
+ 'lon': -0.1278,
+ 'tags': {
+ 'name': 'Big Ben',
+ 'amenity': 'clock'
+ }
+}
+
+To write this node to an OSM XML format output file using osmfile.py,
+you would first create a new osmfile.OsmFile object, and then call
+the write() method, passing in the node dictionary as an
+argument:
+from osmfile import OsmFile
+
+writer = OsmFile('output.osm')
+node_xml = writer.createNode(node)
+way_xml = writer.createWay(way)
+relation_xml = writer.createRelation(relation)
+writer.add_tag('1234', 'amenity', 'post_office')
+writer.write(node_xml)
+writer.write(way_xml)
+writer.write(relation_xml)
+writer.close()
+
+This would create XML code for the node, way, and relation using
+createNode(), createWay(), and createRelation() respectively. These
+methods return a string of XML code which is then written to the
+output file using writer.write() . The add_tag() method can be used to
+add additional tags to any of the elements being written to the file.
+<node id="1234" lat="51.5074" lon="-0.1278">
+<tag k="name" v="Big Ben"/>
+<tag k="amenity" v="clock"/>
+</node>
+
+Osmfile.py also provides methods for writing OSM ways and relations to
+output files, and for adding tags to existing OSM nodes, ways and
+relations.
+To write an OSM way to an output file, you would create a dictionary
+representing the way, with a nodes key containing a list of the node
+IDs that make up the way. For example:
+way = {
+ 'id': 5678,
+ 'nodes': [1234, 5678, 9012],
+ 'tags': {
+ 'name': 'Oxford Street',
+ 'highway': 'primary'
+ }
+}
+
+writer.write_way(way)
+
+This would write the following XML code to the output file:
+<way id="5678">
+ <nd ref="1234"/>
+ <nd ref="5678"/>
+ <nd ref="9012"/>
+ <tag k="name" v="Oxford Street"/>
+ <tag k="highway" v="primary"/>
+</way>
+
+To write an OSM relation to an output file, you would create a
+dictionary representing the relation, with a members key containing
+a list of dictionaries representing the members of the relation. Each
+member dictionary should have type, ref and role keys,
+specifying the type of OSM object (node, way or relation), the ID of
+the object, and the role of the object in the relation. For example:
+relation = {
+ 'id': 7890,
+ 'members': [
+ {'type': 'way', 'ref': 5678, 'role': 'outer'},
+ {'type': 'node', 'ref': 1234, 'role': 'admin_centre'}
+ ],
+ 'tags': {
+ 'name': 'London Borough of Westminster',
+ 'type': 'boundary'
+ }
+}
+
+writer.write_relation(relation)
+
+This would write the following XML code to the output file:
+<relation id="7890">
+ <member type="way" ref="5678" role="outer"/>
+ <member type="node" ref="1234" role="admin_centre"/>
+ <tag k="name" v="London Borough of Westminster"/>
+ <tag k="type" v="boundary"/>
+</relation>
+
+In addition to writing new OSM objects to an output file, osmfile.py
+also provides methods for adding tags to existing objects. To add a
+tag to an OSM object, you would call the add_tag() method, passing
+in the object's ID, the tag key and the tag value:
+writer.add_tag('1234', 'amenity', 'post_office')
+
+This would add the following XML code to the output file, as a child
+of the existing node element with ID 1234:
+<tag k="amenity" v="post_office"/>
+
+Note that the OsmFile class also provides methods for closing the
+output file and flushing any buffered data to disk. You should call
+the close() method once you have finished writing all of your OSM
+data to the output file.
+
+
+
+
+
+ Last update:
+ October 18, 2024
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
\ No newline at end of file
diff --git a/about/programs/index.html b/about/programs/index.html
new file mode 100644
index 000000000..782e6b8a3
--- /dev/null
+++ b/about/programs/index.html
@@ -0,0 +1,1271 @@
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+ OSM Fieldwork Programs - osm-fieldwork
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+OSM Fieldwork Programs
+OSM Fieldwork contains a few standalone utility programs for converting
+data from ODK Collect and the ODK Central server, and a few support
+modules. You can install from the source tree using:
+pip install .
+or you can install the package from PyPi.org:
+pip install osm-fieldwork
+
+
+The make_data_extract.py program is used to extract OpenStreetMap
+(OSM) data for use with the select_one_from_file function in ODK
+Collect. This function allows users to select from a list of options
+generated from an external file. The make_data_extract.py program
+creates a data extract that can be used as an external file with the
+select_one_from_file function. There is more detailed information on
+the program for making data extracts here.
+CSVDump.py
+CSVDump.py is a program that converts a CSV file downloaded from
+ODK Central to OpenStreetMap (OSM) XML format. The tool can be useful
+for users who want to work with OpenStreetMap data and want to convert
+ODK Central data into a compatible format. There is more detailed information on
+the program for converting ODK to OSM here.
+parsers.py
+parsers.py is a program for conflating the OSM XML file produced
+by CSVDump.py with the data extract. This merges tags that have
+been added or changed by ODK Collect with existing OSM data. The result
+can be loaded into JOSM and, after validation, uploaded to OSM.
+OSM Fieldwork Modules
+sqlite.py
+This module creates mbtiles or sqlitedb files for basemaps. It's just
+a wrapper around the existing sqlite3 module to create the output
+files.
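The mbtiles container mentioned above is just a SQLite file with two conventional tables. The sketch below follows the MBTiles spec's metadata/tiles schema and is an illustration, not the sqlite.py module's actual API:

```python
import sqlite3

# Create a minimal MBTiles container: a SQLite file with "metadata"
# and "tiles" tables, per the MBTiles specification.
def create_mbtiles(path: str, name: str) -> sqlite3.Connection:
    db = sqlite3.connect(path)
    db.execute("CREATE TABLE metadata (name TEXT, value TEXT)")
    db.execute(
        "CREATE TABLE tiles (zoom_level INTEGER, tile_column INTEGER,"
        " tile_row INTEGER, tile_data BLOB)"
    )
    db.execute("INSERT INTO metadata VALUES ('name', ?)", (name,))
    db.commit()
    return db

db = create_mbtiles(":memory:", "basemap")
db.execute("INSERT INTO tiles VALUES (0, 0, 0, ?)", (b"\x89PNG...",))
```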
+osmfile.py
+Osmfile.py is a module that writes OSM XML files for JOSM. It assumes
+the data has already been converted using CSVDump. This module is only
+used from within CSVDump.py. OSM XML format is needed as it's the only
+format that supports conflation with upstream OSM data. More on
+writing OSM XML is here.
+filter_data.py
+filter_data.py is a program for filtering data extracts. Since an
+extract can only include tags and values in the XLSForm, this scans
+the XLSForm, and is used to remove anything not included in the choices
+sheet. While usually used as a module, if run standalone it can also
+compare an XLSForm with the taginfo database to help modify the data
+models.
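The filtering idea above can be sketched as dropping any feature property not present in the XLSForm's choices sheet. This is an illustration of the concept, not filter_data.py's real interface, and the allowed-keys set is made up:

```python
# Keep only the tags the XLSForm's choices sheet knows about.
# The allowed set here is illustrative.
ALLOWED = {"building", "amenity", "name"}

def filter_feature(feature: dict, allowed: set) -> dict:
    props = {k: v for k, v in feature["properties"].items() if k in allowed}
    return {**feature, "properties": props}

feat = {"type": "Feature", "properties": {"building": "yes", "roof:shape": "flat"}}
print(filter_feature(feat, ALLOWED)["properties"])  # {'building': 'yes'}
```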
+convert.py
+The convert.py module is part of the osm_fieldwork package and
+provides functionality for converting ODK forms between different
+formats using a YAML configuration file. More detailed information on
+this module is here
+yamlfile.py
+This reads in the yaml config file with all the conversion
+information into a data structure that can be used when processing the
+data conversion. More detail on this module is here.
+odk2csv.py
+Odk2csv.py is a command-line tool that is part of the osm-fieldwork
+package. Its main purpose is to convert an ODK XML
+instance file to CSV format, which can be easily imported into ODK
+Central for analysis. This is primarily only used when working
+offline, as it removes the need to access ODK Central.
+options:
+ -h, --help - show this help message and exit
+ -v, --verbose - verbose output
+ -i INSTANCE, --instance INSTANCE - The instance file from ODK Collect
+
+Works In Progress
+ODKDump.py
+ODKDump.py is a Python module that is part of the OSM-Fieldwork
+toolset for converting ODK data into various
+formats. It is used to parse the contents of an ODK Collect instance
+file into a readable format. This module is not finished yet;
+use the CSVDump.py utility instead.
+
+ODKForm.py parses the XLSForm, and creates a data structure so
+any code using this class can access the data types of each input
+field. This module is not finished yet. It turns out knowing the
+input data types is probably not necessary if we stick to processing
+the CSV files.
+ODKInstance.py
+ODKInstance.py parses the ODK Collect instanceXML file, and creates a
+data structure so any code using this class can access the collected
+data values. This module is currently unfinished; use the
+odk2csv.py utility instead.
+
+
+
+
+
+ Last update:
+ October 18, 2024
\ No newline at end of file
diff --git a/about/user-manual/index.html b/about/user-manual/index.html
new file mode 100644
index 000000000..f0cc5f3bd
--- /dev/null
+++ b/about/user-manual/index.html
@@ -0,0 +1,1505 @@
+ User Manual - osm-fieldwork
+Osm-Fieldwork User Manual
+The osm-fieldwork project is a collection of utilities useful for
+field data collection, focused on OpenStreetMap (OSM) and ODK.
+Both of these are used heavily for humanitarian and emergency
+response by many organizations. The problem is these two projects were
+never designed to work together, so this project was born to fill the
+gaps. Included are a few other useful utilities for field
+mapping.
+This project is also currently part of the backend for the
+FMTM project , but all of the
+data processing can be run standalone, and works fully
+offline. All the standalone programs run in a terminal, and are
+written in Python.
+ODK
+ODK is a format for
+collecting data on mobile devices, including the spatial coordinates
+of that data item. The primary source file is a spreadsheet called an
+XLSForm, which gets converted to an XForm using the
+xls2xform program. An XForm is
+in XML format. All collected data is stored as an instance file, also
+in XML format on the mobile device, but of course in a different
+schema than the XForm. Once the data is collected it gets uploaded to
+an ODK Central server. From there you can download the collected data,
+called submissions, in CSV or JSON format. The JSON format works
+better. This is where the conversion project starts: processing the
+downloaded data into something we can upload to OpenStreetMap
+efficiently.
+All of the XLSForms included in this project have been carefully
+edited to enable a good clean conversion to OSM XML. More information
+on how to modify the conversion
+is here.
+If you base any
+custom XLSForms on this library, you can also update the conversion
+criteria. These XLSForms can also be downloaded from FMTM.
+Field Mapping Tasking Manager (FMTM)
+The FMTM is a project to
+coordinate field data collection in a similar way as the HOT Tasking
+Manager. But other than the ability to break up a big area into tasks,
+the rest works very differently. Often managing a group doing field
+mapping is a bit like herding cats. Plus the mappers often aren't sure
+where they should be mapping, or when they are finished. In addition,
+it is now possible to load a data extract from OSM into ODK
+Collect , and use that data to
+set the default values when collecting the data so the mapper doesn't
+have to do it. FMTM handles the creation of the data extract, as well
+as processing the data into a format suitable to edit with JOSM or
+QGIS. The FMTM backend is a FastAPI wrapped around this project.
+Getting Started
+This project is available from PyPi.org, and can be installed like
+this:
+pip install osm-fieldwork
+
+It contains multiple programs, each of which handles a specific part
+of the conversion process. Each program is a single class so it can be
+used as part of a FastAPI backend, but also runs standalone for
+debugging and working offline. These are all terminal based, as the
+website frontend is the actual GUI.
+
+json2osm - Convert JSON from Central to OSM XML
+csv2osm - Convert CSV from Central to OSM XML
+odk2csv - Convert the ODK Instance to CSV
+odk2geojson - Convert the ODK Instance to GeoJson
+parsers - Conflate POIs from Collect with existing OSM data
+odk_client - Remotely control an ODK Central server
+
+You can also run the terminal based programs from the source
+tree, which can be cloned from here:
+git clone git@github.com:hotosm/osm-fieldwork.git
+
+Processing Submissions
+This section will focus on converting the JSON format, but the process
+for converting the CSV submissions is the same. The JSON format seems to
+be more complete for some XLSForms, so it's preferred. The first step
+is converting it to OSM XML format, so it can be loaded into
+JOSM and edited. A YAML based config
+file is used to convert the JSON format you just downloaded into the
+OSM XML format.
+The initial problem is that neither the CSV nor the JSON format stores
+the coordinates in a way any editing program wants them. So that's the
+most important part of the conversion process, generating a data file
+with spatial coordinates in the right syntax. The conversion process
+generates two output files, one in OSM XML format, the other in
+GeoJson format. The OSM XML one has the data filtered, since not
+everything collected is for OSM. But all the data goes in the GeoJson
+file, so nothing is lost. Since the GeoJson format does not have to
+follow OSM syntax, not all the tags and values may be similar to what
+OSM expects, but that's not a problem for our use case.
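To make the coordinate problem concrete: ODK Collect stores a geopoint as a space-separated "latitude longitude altitude accuracy" string, while GeoJson wants a [longitude, latitude] array. A minimal sketch of that conversion (illustrative only, not the actual json2osm code):

```python
# Sketch: turn an ODK geopoint string into a GeoJson Feature.
# ODK stores "lat lon altitude accuracy"; GeoJson wants [lon, lat].

def geopoint_to_feature(geopoint: str, tags: dict) -> dict:
    lat, lon, _alt, _acc = (float(v) for v in geopoint.split())
    return {
        "type": "Feature",
        "geometry": {"type": "Point", "coordinates": [lon, lat]},
        "properties": tags,
    }

feature = geopoint_to_feature("27.7172 85.3240 1300.0 4.5", {"amenity": "cafe"})
```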
+The config file
+for conversion has 3 sections, one for all the
+conversion data, one for data to ignore completely, and a private
+section for the GeoJson file. The stuff to ignore is extraneous fields
+added by ODK Collect, like deviceID. Modifying the conversion is
+straightforward as it's mostly just replacing one set of strings with
+another.
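The three-part layout is easiest to see with a small example. The section and key names below are hypothetical, meant only to illustrate the structure described above; check the config files shipped with osm-fieldwork for the real syntax.

```yaml
# Hypothetical sketch of a conversion config, NOT the shipped syntax.
convert:          # map ODK answers to OSM tag/value pairs
  - cafe:
      amenity: cafe
ignore:           # extraneous ODK Collect fields to drop entirely
  - deviceid
  - start
  - end
private:          # answers kept only in the GeoJson file, never sent to OSM
  - income
  - education
```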
+For any of the XLSForms in this project's library, the configuration
+is already done, but any custom XLSForms will need to modify it to get
+a good conversion, or fix it in JOSM later. For a one-off project,
+like an import, I usually get lazy and fix it in JOSM. But for
+anything used several times, that gets old, so it's better to improve
+the config file.
+To convert the JSON format file downloaded for ODK Central, run this
+program:
+json2osm.py -i Submissions.json
+json2osm.py -i Submissions.json -y custom.yaml
+or for the CSVfile:
+CSVDump.py -i Submissions.csv
+CSVDump.py -i Submissions.csv -u custom.yaml
+
+which produces Submissions.osm and Submissions.geojson files from
+that data. The OSM XML file may have tags that got missed by the
+conversion process, but the advantage is now all the data can be
+viewed and edited by JOSM. If you want a clean conversion, edit the
+config file and use that as an alternate for converting the data.
+json2osm -i Submissions.json -x custom.yaml
+
+Data Conflation
+Now you have a file that can be viewed or edited, but some of the
+features you just collected may already exist in OSM. Conflation can
+be done manually in JOSM, which is ok for small datasets, but it's
+easier to apply a little automated help. It's possible to find similar
+features in OSM that are near the data we just collected, for example
+a building that has the same business name. How to conflate the
+collected data with existing OSM data is covered in
+another document.
+To just use the conflation software
+requires setting up a postgres database
+containing the OSM data for the county, region,
+state, country, etc. You can also use the data extract from FMTM, as it
+covers the same area the data was collected in. FMTM allows you to
+download the data extract used for this task. Postgres works
+much faster, but the GeoJson data extract works too as the files per
+task are relatively small.
+odk_merge.py Submissions.osm PG:"nepal" -b kathmandu.geojson
+or
+odk_merge.py Submissions.osm kathmandu.geojson
+
+In this example, the OSM XML file from the conversion process uses a local
+postgres database with the country of Nepal loaded into it. You can also
+specify an alternate boundary so the conflation will use a subset of
+the entire database to limit the amount of data that has to be
+queried.
+Each feature in the submission is queried to find any other features
+within 2 meters where any tags match. Both POIs and buildings are
+checked for a possible match. Often the building has "building=yes"
+from remote mapping, so we'd also want to merge the tags from the
+collected data into the building way. Multiple shops within the same
+building remain as a POI in that building.
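The matching rule (within 2 meters, with at least one tag in common) can be sketched in plain Python. This is a toy illustration of the idea, not the real odk_merge.py, which uses spatial database queries; all names and data here are invented.

```python
import math

def distance_m(p1, p2):
    # Rough equirectangular distance in meters between (lon, lat) pairs;
    # good enough at the ~2 m scale used for conflation.
    lon1, lat1, lon2, lat2 = map(math.radians, (*p1, *p2))
    x = (lon2 - lon1) * math.cos((lat1 + lat2) / 2)
    y = lat2 - lat1
    return math.hypot(x, y) * 6371000

def find_match(collected, existing):
    # A collected feature matches an existing OSM feature when it is
    # within 2 meters and at least one tag/value pair agrees.
    for feat in existing:
        close = distance_m(collected["coords"], feat["coords"]) <= 2.0
        shared = collected["tags"].items() & feat["tags"].items()
        if close and shared:
            return feat
    return None

osm = [{"coords": (85.3240, 27.7172),
        "tags": {"building": "yes", "name": "Everest Cafe"}}]
new = {"coords": (85.32401, 27.71721),
       "tags": {"name": "Everest Cafe", "amenity": "cafe"}}
match = find_match(new, osm)
```

The real implementation queries a spatial database rather than looping over a list, but the matching idea is the same two checks.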
+There is much more detail on this program here.
+Utility Programs
+Making basemaps
+Basemaps are very useful when using ODK Collect in areas where the map
+data is poor. Imagery in particular is very useful, as you can use
+it to select a location other than where you are standing. This
+project has a utility that makes basemaps from several sources. It
+builds a local tile store, so larger areas can be downloaded in
+advance, and when offline in the field, smaller basemaps can be made
+from the tile store. Since downloading map tiles is very time
+consuming, I usually download larger areas and let it run for a few days.
+basemapper -s esri -b Pokara.geojson -z 8-15 -o pokara.mbtiles
+
+This command will download all the map tiles from
+ESRI into an XYZ tile store for zoom levels 8
+to 15. Since downloading imagery is slow, I often download larger
+areas, and then use a subset of the tiles to make smaller
+basemaps. The mbtiles file can be manually loaded into ODK
+Collect as a layer, and used
+to adjust the location of the POI when mapping.
+Since it is often useful for navigation, basemapper can also produce a
+basemap from the same map tiles for
+Osmand . This is very useful when in areas with
+little map data, for example during a remote backcountry trip. This
+example downloads Bing imagery for Pokara, Nepal.
+basemapper -s bing -b Pokara.geojson -z 8-19 -o pokara.sqlitedb
+There is much more detail on this program here.
+Converting from an Instance File
+odk2osm.py, odk2geojson.py, odk2csv.py
+These programs read the XML format used by ODK Collect for Instance
+files. Since each submission has a separate Instance file, this takes
+a regular expression, and produces a single output file. This is only
+used when working offline, so it's possible to edit the recently
+collected data and update the map data. Very useful when working
+offline during big disasters.
+odk2osm -i Highways Paths_2023-07-17\*
+
+On your phone, you can find the instance files here:
+/sdcard/Android/data/org.odk.collect.android/files/projects/[UUID]/instances
+You can also manually update your data extracts by copying them to /sdcard/Android/data/org.odk.collect.android/files/projects/[UUID]/forms/[Form name]-media/
+And manually update the XForm by copying them to
+/sdcard/Android/data/org.odk.collect.android/files/projects/[UUID]/forms/
+Managing ODK Central
+ODK Central (https://docs.getodk.org/central-intro/) is the server
+side of ODK Collect. It's where XForms are downloaded from, and where
+submissions go after being sent by Collect. As there are a lot of
+options, this program is not very user friendly as it's primarily used
+as part of the backend for the FMTM project, and most people would
+just use the Central website.
+However, this can be useful for scripting the server. For example to
+list all the projects on a remote Central server:
+
+And this lets you download all the submissions for project number 19,
+using the XLSForm for buildings.
+odk_client -v -i 19 -f buildings -x json
+There is much more detail on this program here.
\ No newline at end of file
diff --git a/about/xlsforms/index.html b/about/xlsforms/index.html
new file mode 100644
index 000000000..1b1cc1691
--- /dev/null
+++ b/about/xlsforms/index.html
@@ -0,0 +1,1899 @@
+ XLSForm Design - osm-fieldwork
+Document Summary
+This document describes the process of improving XLSXForms for better
+mapper efficiency and stability.
+Background
+XLSXForms provides a way to define input
+fields, their data types, and any constraints or validation rules that
+apply. It uses the XLSX file format and allows users to create forms by
+editing spreadsheets. It is compatible with ODK and other data
+collection platforms.
+XLSForm is a powerful tool that allows users to create complex forms
+with advanced functionality, such as conditional questions, complex
+calculations, and multimedia inputs. However, it has a complex syntax,
+and it can be difficult for new users to learn. There are a few
+web-based front-ends for creating and editing XLSForms, but they don't
+support all of the advanced features of the format.
+To use an XLSForm with a mobile app, it needs to be converted to the
+XML-based XForm format used by the apps. This conversion is done using
+a utility program called
+xls2xform which is part of the
+pyxform python package. Once the XLSForm has been converted
+to an XForm, it can be loaded onto a mobile device and used to collect
+data in the field.
+XLSForms are widely used in the humanitarian and development sectors for
+data collection, monitoring, and evaluation. It is particularly popular
+for its flexibility and the ease with which it can be customized to meet
+specific needs. XLSForm has also been adopted by other platforms, such as
+Kobo Toolbox and SurveyCTO, making it a widely used standard for creating
+forms for mobile data collection.
+The two primary mobile apps used at HOT that use XLSForms are
+OpenMapKit(OMK) , and ODK
+Collect . OMK uses the same
+XLSX format as ODK Collect or Kobo Collect, so any
+comments about improving XLSXForms apply to all of them. OMK has been
+deprecated as its functionality has been incorporated into ODK
+Collect. It is unmaintained, and no longer works on newer phones.
+Improving XLSXForm design can lead to more efficient data collection,
+allowing more good quality data to be collected in less time. Also,
+for those of us who use ODK based apps to collect data for
+OpenStreetMap(OSM) , a well designed
+XLSForm is easier to convert and upload to OSM.
+ODK
+ODK is a software suite that
+includes a mobile app called ODK Collect
+and a server called ODK
+Central . ODK Collect is designed to
+run on Android devices and enables users to collect data in the field using
+forms created in the XLSXForms format. ODK Central is a server application
+that enables users to manage forms, data, and users, as well as to visualize
+and export collected data.
+ODK Collect offers a wide range of functionality, including the ability to
+capture photos, videos, and audio recordings, and to collect GPS coordinates
+and other metadata. It also supports complex data types, such as repeat groups
+and geoshapes, and can be customized with the use of various add-ons.
+While OMK was an extension of the ODK Collect app, most of its functionality
+has been migrated to ODK Collect. However, this document also provides information
+on how to modify old XForms from the OMK app to work with ODK Collect. ODK Collect
+is actively maintained, with regular updates and support services provided by
+the organization behind it.
+OpenMapKit
+OpenMapKit (OMK) is an extension of ODK
+that allows users to create professional quality mobile data collection surveys
+for field data collection. The tool was designed to simplify the process of
+collecting data for OpenStreetMap (OSM) in the field.
+It was sponsored by the Red Cross and included a server and a mobile
+app that ran on the Android operating system. However, the use of OMK is
+no longer recommended as it has not been maintained for several years
+and its functionality has been incorporated into ODK. It no longer
+runs on most newer phones.
+One of the unique features of OMK was the use of a special field called
+osm in the survey sheet, which is the first page of the XLSX file.
+Additionally, OMK looked at another sheet called osm which replaced
+the existing choices sheet. The values in the osm sheet were
+designed to closely match the tagging scheme used by OpenStreetMap
+(OSM) .
+Because it is important to get collected data into OSM, the
+Humanitarian OpenStreetMap Team has
+developed a project called OSM
+Fieldwork , which can
+handle the conversion from ODK formats into OSM.
+Overall, while OMK has been a useful tool in the past for data collection,
+it is no longer actively maintained, and users are encouraged to use ODK
+instead which offers more advanced functionality and support services.
+
+An XLSXForm is the source file for
+ODK based tools. This is edited in a spreadsheet program like
+LibreCalc, Excel, or Google Forms. There are also online build tools,
+but they fail to utilize the full functionality of XLSXForms. The
+program xls2xform , which is in the
+pyxform python package converts
+the spreadsheet to the format used by ODK Collect. You can also upload
+the spreadsheet to the ODK Central server, and it will convert it
+there.
+This document covers just a subset of the full syntax, and focuses on
+the most commonly used parts. To really dig deep into the XLSForm
+syntax, go to the documentation page.
+Sheet Names
+The sheet names are predefined to have specific functionality as
+follows, and the column headers are used to determine the
+functionality of the value in the cells of the spreadsheet. The
+sheets are Survey , Choices , and Settings . A few columns
+are required to exist in each sheet; the rest are optional.
+
+Survey Sheet
+This sheet contains all the questions used for collecting data,
+ and refers to the actual values for each question which are on the
+ choices sheet.
+
+These are the mandatory column headers in the survey sheet:
+
+Type - The type of question; the most common ones are text ,
+ select_one , select_multiple , and select_from_file . The
+ second argument in the type column is the keyword used as the
+ list_name in the choices sheet for selection menus
+Name - Refers to the name of the choice keyword that would become
+ the tag in the output OSM file
+
+Label - Refers to the question the user sees
+The name and label column headers also support different
+languages by using a postfix of
+::language appended to it, for example
+label::Nepali(np) .
+These are the optional column headers in the survey sheet:
+
+
+
+Hint - Optional value displayed with
+ the question, giving further information
+The hint column also supports different languages by using a
+ postfix of::language appended to it, for
+ example hint::Nepali(np) .
+
+
+Default - Optional default value
+ for a selection.
+Required - If the value is 1 or
+ yes , this field must have an answer. If the value is 0 or no or
+ blank, then it’s optional.
+Relevant - Allows setting up
+ conditional display of questions based on other fields.
+Appearance - This changes how
+ input fields are displayed on the screen.
+Calculation - Do a
+ calculation, used for dynamic values.
+Choice_filter - Filters choices based on other survey answers.
+Parameters - Change the behaviour of input data, for example the size
+ of images.
+
+
+The Survey sheet has several forms of selecting answers. These allow
+the mapper to enter an integer, text, or select one or multiple items
+from a menu.
+
+Choices Sheet
+The choices sheet is used to define the values used for the
+select_one and select_multiple questions on the survey
+sheet.
+The mandatory column headers are:
+
+List_name - This is the name of the list as specified in the
+ select type in the survey sheet.
+Name - This becomes the value of the tag in the OSM output file.
+Label - Refers to what is displayed in the select menu.
+The label column header also supports different languages by
+ using a postfix of ::language appended to it, for
+ example label::Nepali(np) .
+
+
+Settings Sheet
+For the settings sheet there is one mandatory column, but I usually
+add two of the optional ones. This is a simple sheet that
+contains the version of the sheet, and the title of the input
+form. The version is used by the server and the mobile apps to track
+changes in the data format, so it should always be updated after
+changes are made.
+
+form_title - This is what is displayed in ODK Central
+form_id - This is a unique ID to identify this XForm.
+version - This is mandatory, and needs to change after any major
+ change. During development when I make many changes I usually use
+ NOW() , which returns the current date. Use the date format with no
+ spaces.
+
+Mapping Answers to OSM
+When designing an XForm whose data is for OSM, the two key columns
+that determine the tag & value scheme used in the OSM XML format are
+name in the survey sheet, which becomes the tag, and name in
+the choices sheet, which becomes the value. If you are using the
+OSM Fieldwork project,
+anything that isn't a one to one match with OSM syntax can be specified
+in the config
+file
+for that project. When using OSM Fieldwork, tags & values can also be
+specified as private data, which goes into a GeoJson file, and
+anything that is for OSM goes into an OSM XML file. That file can be
+edited in JOSM .
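The mapping itself is just pairing the survey-sheet name with the chosen choices-sheet name, and routing anything marked private into the GeoJson side. A toy sketch of that split (the field names and the private list are invented for illustration; in OSM Fieldwork the split is driven by the config file):

```python
# Toy sketch: split a submission's answers into OSM tags and private data.
# The "PRIVATE" set here is invented; in OSM Fieldwork this split comes
# from the YAML config file.
PRIVATE = {"income", "deviceid"}

def split_answers(answers: dict):
    # Answers not flagged private become OSM tag=value pairs;
    # the rest go only into the GeoJson output.
    osm_tags = {k: v for k, v in answers.items() if k not in PRIVATE}
    private = {k: v for k, v in answers.items() if k in PRIVATE}
    return osm_tags, private

osm_tags, private = split_answers(
    {"amenity": "restaurant", "name": "Momo House", "income": "medium"}
)
```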
+Screen Layout
+ODK supports multiple options to change the layout of the input fields
+on the screen. In the
+XLSXForm , this is
+under the appearance column. There’s many possible options
+available to change the layout, but here’s a summary of the primary
+ones.
+
+Minimal - Answer choices appear in a pull-down menu.
+Field-list - Entire group of questions appear on one screen
+Placement-map - Use a basemap to pick the location
+Quick - Auto-advances the form to the next question after an
+ answer is selected
+
+
+All fields are grouped together to maximize screen space.
+When the field-list attribute is set for begin_group , then
+ multiple questions are on the same screen.
+The screen can be scrolled if there are more input fields than fit.
+
+
+
+
+| type | name | label | appearance |
+| ---- | ---- | ----- | ---------- |
+| begin_group | agroup | Amenity Details | field-list |
+| select_one text | name | Amenity Name | minimal |
+| select_one amenity | amenity | Type of Amenity | minimal |
+| end_group | | | |
+
+
+Conditionals
+ODK can optionally display input fields for questions based on a
+selection. Using conditionals allows for a more guided user interface,
+than just presenting many questions, some of which aren’t relevant to
+the current mapping task.
+Using Conditionals
+
+Conditionals go in the relevant column on the survey sheet.
+A conditional has two parts, the variable from the name column
+ of a question, and the value to test against, which is one of the
+ select values.
+
+In the XLSXForm, the spreadsheet should look like this. The amenity
+menu is only displayed if the answer to the “what type of building is
+this” is “commercial”.
+
+
+
+| type | name | label | relevant |
+| ---- | ---- | ----- | -------- |
+| select_one amenity | amenity | Type of Amenity | ${building}='commercial' |
+
+Using conditionals allows for a more dynamic interface, as only
+relevant questions are displayed. Some questions may have answers that
+only require a few more questions before being complete. Other answers
+may generate more questions, for example a commercial building instead
+of a residence.
+Grouping
+ODK supports grouping survey questions together, which when used with
+conditionals in the relevant column, and attributes from the
+appearance column, creates a more dynamic user interface. Groups
+allow more than one question on the screen, which is more efficient
+than one question per screen, which is the default.
+Using Grouping
+
+Groups are defined in the survey sheet.
+Using the appearance column can display multiple questions on
+ each screen, minimizing the actions required to enter data.
+
+Sub groups are also supported. When implemented this way, when the top
+level group is displayed on the screen, other questions can be
+dynamically added to the screen display based on what is selected,
+further minimizing required actions. Using the appearance column
+settings with grouping can create a more efficient user
+experience. Ungrouped questions appear one on each screen of the
+mobile data collection app, requiring one to swipe to the next page
+for each question.
+
+begin_group - Can use the relevant column to conditionally display the entire group of questions
+end_group - End the group of survey questions
+
+An example grouping would look like this, and the conditional says to only display this group for commercial buildings.
+
+
+
+| type | name | label | relevant |
+| ---- | ---- | ----- | -------- |
+| select_one type | building | What type of building ? | |
+| begin_group | amenity | | ${building}='commercial' |
+| select_one amenity | amenity | Type of Amenity | |
+| text | name | What is the name ? | |
+| end_group | | | |
+
+In this example, the conditional is applied to the entire group of
+questions, and not just any individual question. Different questions
+in the group may have different conditionals.
+External Datasets
+XLSForms support external datasets, which is useful for common choices
+that can be shared between multiple XLSForms. CSV, XML, or GeoJson
+files are supported. The one downside is that external choice datasets
+currently do not support translations; one language only. Each CSV
+file needs a header that defines at least the name and label
+columns. The name becomes the tag in OSM, and the label is what ODK
+Collect displays in the select menu. An id column is also
+required. Anything else becomes a column in the XLSForm.
+An example CSV data file would look like this:
+
+
+
+| label | name | backcountry | id | ref | tourism | openfire |
+| ----- | ---- | ----------- | -- | --- | ------- | -------- |
+| Test 1 | Site 1 | yes | 5483233147 | 1 | camp_pitch | yes |
+| Test 2 | Site 35 | no | 6764555904 | 35 | camp_pitch | yes |
+
+For example, these rows in the survey sheet will load the data from
+the CSV file. The instance is the name of the data file, minus the
+suffix. The item is what the XForm has in the name column for the
+select_one_from_file. Then the last part is the column from the OSM
+data. Whenever the value of test is changed, the trigger goes off,
+and the value is recalculated and becomes the default value for the
+survey question.
+
+
+
+| type | name | label | calculation | trigger | choice |
+| ---- | ---- | ----- | ----------- | ------- | ------ |
+| select_one_from_file test.csv | test | CSV test | | | true() |
+| calculate | xname | Name | instance('test')/root/item[name=${test}]/label | ${test} | |
+| text | debug | Name is | ${xname} | ${test} | |
+
+GeoJson Files
+An external file in GeoJson format works slightly differently, as it
+also contains GPS coordinates. This allows ODK Collect to display data
+on the map as an overlay that can be selected. This lets us make a
+data extract from OSM data and edit it. In OSM, many buildings are
+tagged building=yes , as that’s about all you can do when doing
+remote mapping off satellite imagery. ODK Collect can’t handle
+polygons yet, so a data extract has to use only POIs. To use a GeoJson
+file, just change the file name in this example. The only other
+difference is that since the GeoJson data file contains GPS
+coordinates, you can get either a map or a normal selection menu. To
+get the map view, put map in the appearance column.
+When using a GeoJson data file, after opening the XForm, you’ll get a
+button to select an existing POI. That’ll open either the menu, or the
+map. For the map view, you’ll see blue markers where the existing
+features are. Touching an icon loads that data into ODK Collect. You
+can access the values in the OSM data the same as the above example.
+OpenStreetMap Data
+OpenStreetMap (OSM) is a popular tool for mapping and collecting
+geographic data, and many OSM mappers have wanted the ability to edit
+data in the field. While mobile apps like
+StreetComplete
+or Vespucci allow for this, they don't focus
+on humanitarian data collection, which can lead to incomplete tags on
+many features. Until recently, OSM mappers collected a new point of
+interest (POI) in the field and merged the data manually later on
+using an editor like JOSM. However, with the addition of functionality
+to ODK Collect, it's now possible to load data from OSM into the app
+and use XForms to improve feature data, achieving tag completeness and
+limiting tag values to accepted values.
+In the past, if a mapper collected a new point of interest (POI) in
+the field, they would have to manually merge the data later using an
+editor like JOSM because OSM data typically had few tags beyond
+building=yes due to the majority of features being added by remote
+mapping. However, with the recent addition of functionality in ODK
+Collect, it is now possible to load data from OSM into ODK
+Collect. This allows for the use of an XForm to improve feature data,
+which achieves tag completeness for a feature and limits the tag
+values to accepted values.
+To create a data extract from OSM, one can use Overpass Turbo or
+Postgres. Each tag in OSM becomes a column in an XForm, and the column
+names are used to reference the data from within the XForm. If you are
+using the OSM data to set the default value for a
+select_one_from_file , then every possible value used for that tag
+needs to be in the choices sheet. Otherwise, you will get an error
+such as doctor is not in the choices for healthcare .
+Using OSM in ODK Collect requires two data conversion processes. The
+first step is to produce the data extract. Since the goal is to
+convert the data from ODK into OSM, OSM standard tags should be used
+in the name column in the survey and choices sheets. When doing a
+query to Overpass or Postgres, the column name will conflict with what
+is in the survey sheet, so the data extract needs to use something
+else. For Postgres, this is easy as you can use the AS command in
+the query to rename the column to whatever you want. Abbreviations or
+the OSM tag's name are often used as variable names internally, but
+the important thing is to ensure that they are unique and do not
+conflict with other names in the XForm.
+There is a much more detailed document on using OSM data extracts in
+this Dealing with External Data in ODK document.
+
+Converting from OMK to ODK
+The OMK mobile app was used for collecting location data using the GPS
+on the device, or by tapping on a basemap. Because that functionality is
+now in ODK, the OMK mobile app is no longer required; it is
+unmaintained and may be unreliable. This section is only useful if
+you find yourself with an old XForm that you want to edit and reuse,
+as none of it applies to ODK or Kobo Collect.
+Step 1 - Prepare Data
+The first step is to copy the contents of the osm sheet into the
+choices sheet. The other option is to delete the choices sheet,
+and then rename the osm sheet to choices .
+Step 2 - Migrate Questions
+The next step is to migrate the questions. The osm keyword in the
+survey sheet is followed by a variable name, for example in
+this table, building_tags is the variable. When looking at the
+choices sheet, every row using the building_tag keyword now has to
+become a question on the survey sheet.
+
+
+
+| type | name | label | required |
+| ---- | ---- | ----- | -------- |
+| osm building_tags | osm_building | Building Form | yes |
+
+
+In the choices sheet, we see this existing data.
+
+
+
+| list_name | name | label |
+| --------- | ---- | ----- |
+| building_tags | name | Name of this building |
+| building_tags | building:material | What is it made from ? |
+| building_tags | building:roof | What is the roof made of ? |
+
+Cut & paste these rows from the choices sheet, and paste them into
+the survey sheet. Then prefix the variable with select_one or
+select_multiple . Drop the prefix used in the choices sheet and
+simplify it.
+
+
+
+| type | name | label |
+| ---- | ---- | ----- |
+| text | name | Name of this building |
+| select_one building:material | material | What is it made from ? |
+| select_one building:roof | roof | What is the roof made of ? |
+
+Step 3 - Get Coordinates
+The last step is replacing the keyword that used to start OMK with the
+ODK way. There are three ODK keywords that can be used to get a location.
+
+Geopoint - Collect a single location
+Geoshape - Collect at least 3 points and the ends are closed
+Geotrace - Collect a trace of a line
+
+By default these question types only record the location of the
+device. If you want to use a basemap and tap on the screen where you
+want the location, add placement-map to the appearance column.
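As a sketch, the three location types with a placement-map appearance could be laid out like this; the row contents are illustrative, keyed by the XLSForm column names.

```python
# Illustrative survey-sheet rows for the three ODK location types.
# Adding placement-map lets the user tap a basemap instead of using GPS.
geo_rows = [
    {"type": "geopoint", "name": "location", "label": "Building location", "appearance": "placement-map"},
    {"type": "geotrace", "name": "path", "label": "Trace the road", "appearance": "placement-map"},
    {"type": "geoshape", "name": "outline", "label": "Outline the building", "appearance": "placement-map"},
]

# Every row that should support tap-on-map carries the appearance value
assert all(row["appearance"] == "placement-map" for row in geo_rows)
```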
+After doing these three steps, your XLSForm no longer depends on the
+OMK app.
+
+Since mobile data collection often involves gathering many of the same
+types of data, setting defaults helps reduce the number of user
+actions needed to collect data. When collecting multiples of the same
+type of data, good defaults can record data even when only the
+location has changed.
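The effect of defaults can be modeled as a column that is consulted whenever a question was left unanswered. The apply_defaults helper below is hypothetical, not part of any ODK library; it just illustrates the idea.

```python
def apply_defaults(answers, survey_rows):
    """Fill unanswered questions from each row's 'default' column."""
    filled = dict(answers)
    for row in survey_rows:
        name = row["name"]
        if name not in filled and "default" in row:
            filled[name] = row["default"]
    return filled

survey_rows = [
    {"type": "select_one building:material", "name": "material", "default": "brick"},
    {"type": "geopoint", "name": "location"},
]
# Only the location changed between submissions; material falls back to the default
answers = apply_defaults({"location": "0.1 35.6 0 0"}, survey_rows)
```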
+
+
+ Last update:
+ October 18, 2024
\ No newline at end of file
diff --git a/about/xlsimages/image1.jpg b/about/xlsimages/image1.jpg
new file mode 100644
index 000000000..18d3c8c74
Binary files /dev/null and b/about/xlsimages/image1.jpg differ
diff --git a/about/xlsimages/image2.jpg b/about/xlsimages/image2.jpg
new file mode 100644
index 000000000..d52cbbf46
Binary files /dev/null and b/about/xlsimages/image2.jpg differ
diff --git a/about/xlsimages/image3.jpg b/about/xlsimages/image3.jpg
new file mode 100644
index 000000000..26d948621
Binary files /dev/null and b/about/xlsimages/image3.jpg differ
diff --git a/about/xlsimages/image4.png b/about/xlsimages/image4.png
new file mode 100644
index 000000000..bf31d71fe
Binary files /dev/null and b/about/xlsimages/image4.png differ
diff --git a/about/yamlfile/index.html b/about/yamlfile/index.html
new file mode 100644
index 000000000..c886c43ae
--- /dev/null
+++ b/about/yamlfile/index.html
@@ -0,0 +1,1258 @@
+ Yamlfile - osm-fieldwork
+ Yamlfile
+
+yamlfile.py
+This reads the YAML config file, with all the conversion
+information, into a data structure that can be used when processing
+the data conversion.
+yamlfile.py is a module that reads in a YAML config file containing
+information about how to convert data between different formats. The
+config file contains a list of conversion rules, where each rule
+specifies the source format, the target format, and any additional
+information needed to perform the conversion. The module parses the
+YAML file and creates a Python object representing the conversion
+rules, which can be used by other code in the conversion process.
+To use yamlfile.py, you first need to create a YAML config file
+containing the conversion rules. Here's an example of a simple YAML
+config file that converts CSV files to ODK Collect forms:
+- source: csv
+  target: odk
+  settings:
+    form_id: my_form
+    form_title: My Form
+    form_version: 1.0
+    csv_delimiter: ","
+
+This rule specifies that CSV files should be converted to ODK Collect
+forms, with the specified settings. The settings dictionary contains
+additional information needed to perform the conversion, such as the
+form ID, form title, form version, and the delimiter used in the CSV
+file.
+Once you have created the YAML config file, you can use yamlfile.py
+to read it into a Python object. Here's an example of how to use the
+read_yaml_file() function to read the YAML config file:
+import yamlfile
+
+config_file = 'my_config.yaml'
+conversion_rules = yamlfile.read_yaml_file(config_file)
+
+This will read the my_config.yaml file and return a Python list
+containing the conversion rules.
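For reference, the list returned for the example config would plausibly look like this. This is a sketch of the parsed structure; the exact shape depends on yamlfile.py internals.

```python
# What read_yaml_file() plausibly returns for the example config:
# a list with one dict per YAML list item.
conversion_rules = [
    {
        "source": "csv",
        "target": "odk",
        "settings": {
            "form_id": "my_form",
            "form_title": "My Form",
            "form_version": 1.0,
            "csv_delimiter": ",",
        },
    }
]
assert conversion_rules[0]["source"] == "csv"
```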
+You can then use the conversion rules to perform the actual data
+conversion. Here's an example of how to use the get_conversion_rule()
+function to get the conversion rule for a specific source and target
+format:
+import yamlfile
+
+config_file = 'my_config.yaml'
+conversion_rules = yamlfile.read_yaml_file(config_file)
+
+source_format = 'csv'
+target_format = 'odk'
+conversion_rule = yamlfile.get_conversion_rule(conversion_rules, source_format, target_format)
+
+# Perform the conversion using the conversion rule
+
+This will search through the list of conversion rules for a rule that
+matches the specified source and target format, and return the
+matching rule. You can then use the conversion rule to perform the
+actual data conversion.
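If you want to see the lookup logic in isolation, a minimal stand-in for get_conversion_rule() might look like this. The find_rule helper is hypothetical, matching the rule structure shown above, not the module's actual implementation.

```python
def find_rule(rules, source_format, target_format):
    """Return the first rule matching source and target, or None."""
    for rule in rules:
        if rule.get("source") == source_format and rule.get("target") == target_format:
            return rule
    return None

rules = [
    {"source": "csv", "target": "odk", "settings": {"form_id": "my_form"}},
    {"source": "osm", "target": "odk", "settings": {}},
]
rule = find_rule(rules, "csv", "odk")
```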
+Note that yamlfile.py relies on the PyYAML library to parse the YAML
+file. If you don't have PyYAML installed, you will need to install it
+using a package manager like pip before you can use yamlfile.py.
+To handle errors when reading the YAML config file, yamlfile.py
+raises a YamlFileError exception. This exception is raised if the
+YAML file is not found, if the YAML file is malformed, or if a
+required field is missing from the conversion rule. You can catch this
+exception and handle it appropriately in your code.
+Here's an example of how to catch the YamlFileError exception:
+import yamlfile
+
+config_file = 'my_config.yaml'
+
+try:
+ conversion_rules = yamlfile.read_yaml_file(config_file)
+except yamlfile.YamlFileError as e:
+ print(f"Error reading YAML file: {str(e)}")
+
+This will catch any YamlFileError exceptions raised by
+read_yaml_file() and print an error message.
+In summary, yamlfile.py is a module that reads in a YAML config file
+containing conversion rules and creates a Python object representing
+the rules. This object can be used by other code in the data
+conversion process. To use yamlfile.py, you need to create a YAML
+config file containing conversion rules, and then use the
+read_yaml_file() function to read the file into a Python object. You
+can then use the object to get the conversion rule for a specific
+source and target format, and perform the actual data conversion.
\ No newline at end of file
diff --git a/api/ODKForm/index.html b/api/ODKForm/index.html
new file mode 100644
index 000000000..d2fef445f
--- /dev/null
+++ b/api/ODKForm/index.html
@@ -0,0 +1,1781 @@
+ ODKForm - osm-fieldwork
+ Bases: object
+
+
+
+Support for parsing an XLS Form, currently a work in progress...
+
+
+ Source code in osm_fieldwork/ODKForm.py
+def __init__(self):
+    """Returns:
+    (ODKForm): An instance of this object.
+    """
+    self.fields = dict()
+    self.nodesets = dict()
+    self.groups = dict()
+    self.ignore = ("label", "@appearance", "hint", "upload")
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
Parse a select statement in XML.
+
+
+
+
Parameters:
+
+
+
+ Name
+ Type
+ Description
+ Default
+
+
+
+
+ select
+
+ dict
+
+
+
+
+
+ required
+
+
+
+
+
+
+
+
Returns:
+
+
+
+ Type
+ Description
+
+
+
+
+
+ dict
+
+
+
+
The data from the select
+
+
+
+
+
+
+
+ Source code in osm_fieldwork/ODKForm.py
+def parseSelect(
+    self,
+    select: dict,
+):
+    """Parse a select statement in XML.
+
+    Args:
+        select (dict): The select in XML:
+
+    Returns:
+        (dict): The data from the select
+    """
+    print("parseSelect %r" % type(select))
+    newsel = dict()
+    if "item" in select:
+        data = self.parseItems(select["item"])
+        ref = os.path.basename(select["@ref"])
+        for key in data:
+            if key in self.ignore:
+                continue
+            newsel[ref] = data
+    print("\tQQQQQ %r" % (newsel))
+    return newsel
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
Parse the items in a select list.
+
+
+
+
Parameters:
+
+
+
+ Name
+ Type
+ Description
+ Default
+
+
+
+
+ items
+
+ list
+
+
+
+
The select items list in XML:
+
+
+
+ required
+
+
+
+
+
+
+
+
Returns:
+
+
+
+ Type
+ Description
+
+
+
+
+
+ dict
+
+
+
+
The data from the list of items
+
+
+
+
+
+
+
+ Source code in osm_fieldwork/ODKForm.py
+def parseItems(
+    self,
+    items: list,
+):
+    """Parse the items in a select list.
+
+    Args:
+        items (list): The select items list in XML:
+
+    Returns:
+        (dict): The data from the list of items
+    """
+    print("\tparseItems: %r: %r" % (type(items), items))
+    newitems = list()
+    # if type(items) == OrderedDict:
+    #     data = list()
+    #     data.append(items)
+    # else:
+    #     data = items
+
+    for values in items:
+        newitems.append(values["value"])
+
+    # if type(values) == str:
+    #     continue
+
+    # val = values['label']['@ref'].replace("/data/", "")
+    # tmp = val.split('/')
+    # group = tmp[0].replace("jr:itext(\'", "")
+    # fields = len(tmp)
+    # if fields > 2:
+    #     subgroup = tmp[1]
+    #     label = tmp[2].replace(":label\')", "")
+    # else:
+    #     subgroup = None
+    #     label = tmp[1].replace(":label\')", "")
+    # # print("VALUES: %r / %r / %r" % (group, subgroup, label))
+    # if subgroup not in newdata:
+    #     newdata[subgroup] = list()
+    # #newdata[subgroup].append(label)
+    # newitems.append(label)
+    # return group, subgroup, newitems
+    return newitems
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
Convert the XML of a group into a data structure.
+
+
+
+
Parameters:
+
+
+
+ Name
+ Type
+ Description
+ Default
+
+
+
+
+ group
+
+ dict
+
+
+
+
+
+ required
+
+
+
+
+
+
+ Source code in osm_fieldwork/ODKForm.py
+def parseGroup(
+    self,
+    group: dict(),
+):
+    """Convert the XML of a group into a data structure.
+
+    Args:
+        group (dict): The group data
+    """
+    print("\tparseGroup %r" % (type(group)))
+    if type(group) == list:
+        for _val in group:
+            for k in group:
+                print("\nZZZZ1 %r" % (k))
+    else:  # it's a list
+        for keyword, data in group.items():
+            # FIXME: for now, ignore media files
+            if keyword in self.ignore:
+                continue
+            print("WWW3 %r, %r" % (keyword, type(data)))
+            # pat = re.compile('select[0-9]*')
+            # if pat.match(keyword):
+            if keyword[0:6] == "select":
+                print("WWW4 select")
+                self.parseSelect(data)
\ No newline at end of file
diff --git a/api/ODKInstance/index.html b/api/ODKInstance/index.html
new file mode 100644
index 000000000..aaab3b628
--- /dev/null
+++ b/api/ODKInstance/index.html
@@ -0,0 +1,1644 @@
+ ODKInstance - osm-fieldwork
+ ODKInstance
+
+
+
+
+ODKInstance(filespec=None, data=None)
+
+
+
+
+ Bases: object
+
+
+
+
+
+
Parameters:
+
+
+
+ Name
+ Type
+ Description
+ Default
+
+
+
+
+ filespec
+
+ str
+
+
+
+
The filespec to the ODK XML Instance file
+
+
+
+ None
+
+
+
+ data
+
+ str
+
+
+
+
+
+ None
+
+
+
+
+
+
+
+
Returns:
+
+
+
+ Type
+ Description
+
+
+
+
+
+ ODKInstance
+
+
+
+
An instance of this object
+
+
+
+
+
+
+
+ Source code in osm_fieldwork/ODKInstance.py
+def __init__(
+    self,
+    filespec: str = None,
+    data: str = None,
+):
+    """This class imports an ODK Instance file, which is in XML, into a
+    data structure.
+
+    Args:
+        filespec (str): The filespec to the ODK XML Instance file
+        data (str): The XML data
+
+    Returns:
+        (ODKInstance): An instance of this object
+    """
+    self.data = data
+    self.filespec = filespec
+    self.ignore = ["today", "start", "deviceid", "nodel", "instanceID"]
+    if filespec:
+        self.data = self.parse(filespec=filespec)
+    elif data:
+        self.data = self.parse(data)
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+ parse
+
+
+
+
+parse(filespec, data=None)
+
+
+
+
+
+Import an ODK XML Instance file into a data structure. The input is
+either a filespec to the Instance file copied off your phone, or
+the XML that has been read in elsewhere.
+
+
+
+
Parameters:
+
+
+
+ Name
+ Type
+ Description
+ Default
+
+
+
+
+ filespec
+
+ str
+
+
+
+
The filespec to the ODK XML Instance file
+
+
+
+ required
+
+
+
+ data
+
+ str
+
+
+
+
+
+ None
+
+
+
+
+
+
+
+
Returns:
+
+
+
+ Type
+ Description
+
+
+
+
+
+ dict
+
+
+
+
All the entries in the OSM XML Instance file
+
+
+
+
+
+
+
+ Source code in osm_fieldwork/ODKInstance.py
+def parse(
+    self,
+    filespec: str,
+    data: str = None,
+) -> dict:
+    """Import an ODK XML Instance file into a data structure. The input is
+    either a filespec to the Instance file copied off your phone, or
+    the XML that has been read in elsewhere.
+
+    Args:
+        filespec (str): The filespec to the ODK XML Instance file
+        data (str): The XML data
+
+    Returns:
+        (dict): All the entries in the OSM XML Instance file
+    """
+    row = dict()
+    if filespec:
+        logging.info("Processing instance file: %s" % filespec)
+        file = open(filespec, "rb")
+        # Instances are small, read the whole file
+        xml = file.read(os.path.getsize(filespec))
+    elif data:
+        xml = data
+    doc = xmltodict.parse(xml)
+
+    json.dumps(doc)
+    tags = dict()
+    data = doc["data"]
+    flattened = flatdict.FlatDict(data)
+    rows = list()
+    pat = re.compile("[0-9.]* [0-9.-]* [0-9.]* [0-9.]*")
+    for key, value in flattened.items():
+        if key[0] == "@" or value is None:
+            continue
+        if re.search(pat, value):
+            gps = value.split(" ")
+            row["lat"] = gps[0]
+            row["lon"] = gps[1]
+            continue
+
+        # print(key, value)
+        tmp = key.split(":")
+        if tmp[len(tmp) - 1] in self.ignore:
+            continue
+        row[tmp[len(tmp) - 1]] = value
+
+    return row
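The coordinate handling in parse() hinges on the regex that matches the four space-separated numbers ODK uses for a geopoint (latitude, longitude, altitude, accuracy). That step can be tried on its own; the sample value here is illustrative.

```python
import re

# The same pattern parse() uses to spot a geopoint value
pat = re.compile("[0-9.]* [0-9.-]* [0-9.]* [0-9.]*")

value = "38.4211 -105.8055 2422.0 5.0"
if re.search(pat, value):
    gps = value.split(" ")
    # parse() keeps only the first two numbers as lat/lon
    row = {"lat": gps[0], "lon": gps[1]}
```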
\ No newline at end of file
diff --git a/api/OdkCentral/index.html b/api/OdkCentral/index.html
new file mode 100644
index 000000000..bf45dc98c
--- /dev/null
+++ b/api/OdkCentral/index.html
@@ -0,0 +1,9888 @@
+ ODK Central - osm-fieldwork
+OdkCentral
+
+
+
+
+
+
+
+
+
+
+
Download a list of submissions from ODK Central.
+
+
+
+
Parameters:
+
+
+
+ Name
+ Type
+ Description
+ Default
+
+
+
+
+ project_id
+
+ int
+
+
+
+
The ID of the project on ODK Central
+
+
+
+ required
+
+
+
+ xforms
+
+ list
+
+
+
+
A list of the XForms to download the submissions from
+
+
+
+ required
+
+
+
+ odk_credentials
+
+ dict
+
+
+
+
The authentication credentials for ODK Collect
+
+
+
+ required
+
+
+
+
+
+
+
+
Returns:
+
+
+
+ Type
+ Description
+
+
+
+
+
+ list
+
+
+
+
The submissions in JSON format
+
+
+
+
+
+
+
+ Source code in osm_fieldwork/OdkCentral.py
+def downloadThread(project_id: int, xforms: list, odk_credentials: dict, filters: dict = None):
+    """Download a list of submissions from ODK Central.
+
+    Args:
+        project_id (int): The ID of the project on ODK Central
+        xforms (list): A list of the XForms to download the submissions from
+        odk_credentials (dict): The authentication credentials for ODK Collect
+
+    Returns:
+        (list): The submissions in JSON format
+    """
+    timer = Timer(text="downloadThread() took {seconds:.0f} s")
+    timer.start()
+    data = list()
+    # log.debug(f"downloadThread() called! {len(xforms)} xforms")
+    for task in xforms:
+        form = OdkForm(odk_credentials["url"], odk_credentials["user"], odk_credentials["passwd"])
+        # submissions = form.getSubmissions(project_id, task, 0, False, True)
+        subs = form.listSubmissions(project_id, task, filters)
+        if not subs:
+            log.error(f"Failed to get submissions for project ({project_id}) task ({task})")
+            continue
+        # log.debug(f"There are {len(subs)} submissions for {task}")
+        if len(subs["value"]) > 0:
+            data += subs["value"]
+    # log.debug(f"There are {len(xforms)} Xforms, and {len(submissions)} submissions total")
+    timer.stop()
+    return data
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+ Bases: object
+
+
+
+
+
+
Parameters:
+
+
+
+ Name
+ Type
+ Description
+ Default
+
+
+
+
+ url
+
+ str
+
+
+
+
The URL of the ODK Central
+
+
+
+ None
+
+
+
+ user
+
+ str
+
+
+
+
The user's account name on ODK Central
+
+
+
+ None
+
+
+
+ passwd
+
+ str
+
+
+
+
The user's account password on ODK Central
+
+
+
+ None
+
+
+
+
+
+
+
+
Returns:
+
+
+
+ Type
+ Description
+
+
+
+
+
+ OdkCentral
+
+
+
+
An instance of this class
+
+
+
+
+
+
+
+ Source code in osm_fieldwork/OdkCentral.py
+def __init__(
+    self,
+    url: Optional[str] = None,
+    user: Optional[str] = None,
+    passwd: Optional[str] = None,
+):
+    """A class for accessing an ODK Central server via its REST API.
+
+    Args:
+        url (str): The URL of the ODK Central
+        user (str): The user's account name on ODK Central
+        passwd (str): The user's account password on ODK Central
+
+    Returns:
+        (OdkCentral): An instance of this class
+    """
+    if not url:
+        url = os.getenv("ODK_CENTRAL_URL", default=None)
+    self.url = url
+    if not user:
+        user = os.getenv("ODK_CENTRAL_USER", default=None)
+    self.user = user
+    if not passwd:
+        passwd = os.getenv("ODK_CENTRAL_PASSWD", default=None)
+    self.passwd = passwd
+    verify = os.getenv("ODK_CENTRAL_SECURE", default=True)
+    if type(verify) == str:
+        self.verify = verify.lower() in ("true", "1", "t")
+    else:
+        self.verify = verify
+    # Set cert bundle path for requests in environment
+    if self.verify:
+        os.environ["REQUESTS_CA_BUNDLE"] = "/etc/ssl/certs/ca-certificates.crt"
+    # These are settings used by ODK Collect
+    self.general = {
+        "form_update_mode": "match_exactly",
+        "autosend": "wifi_and_cellular",
+    }
+    # If there is a config file with authentication settings, use that
+    # so we don't have to supply this all the time. This is only used
+    # when odk_client is used, and no parameters are passed in.
+    if not self.url:
+        # log.debug("Configuring ODKCentral from file .odkcentral")
+        home = os.getenv("HOME")
+        config = ".odkcentral"
+        filespec = home + "/" + config
+        if os.path.exists(filespec):
+            file = open(filespec, "r")
+            for line in file:
+                # Support embedded comments
+                if line[0] == "#":
+                    continue
+                # Read the config file for authentication settings
+                tmp = line.split("=")
+                if tmp[0] == "url":
+                    self.url = tmp[1].strip("\n")
+                if tmp[0] == "user":
+                    self.user = tmp[1].strip("\n")
+                if tmp[0] == "passwd":
+                    self.passwd = tmp[1].strip("\n")
+        else:
+            log.warning(f"Authentication settings missing from {filespec}")
+    else:
+        log.debug(f"ODKCentral configuration parsed: {self.url}")
+    # Base URL for the REST API
+    self.version = "v1"
+    # log.debug(f"Using {self.version} API")
+    self.base = self.url + "/" + self.version + "/"
+
+    # Use a persistent connection, better for multiple requests
+    self.session = requests.Session()
+
+    # Authentication with session token
+    self.authenticate()
+
+    # These are just cached data from the queries
+    self.projects = dict()
+    self.users = list()
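The .odkcentral file read by __init__() is a simple key=value format with # comments. The same parsing idea can be exercised stand-alone; this sketch uses partition() rather than the module's split(), so values containing '=' survive intact.

```python
def parse_odkcentral(lines):
    """Parse url/user/passwd from .odkcentral-style key=value lines."""
    settings = {}
    for line in lines:
        if line.startswith("#"):  # support embedded comments
            continue
        key, _, value = line.partition("=")
        if key in ("url", "user", "passwd"):
            settings[key] = value.strip("\n")
    return settings

config = [
    "# ODK Central credentials\n",
    "url=https://central.example.org\n",
    "user=me@example.org\n",
    "passwd=secret\n",
]
settings = parse_odkcentral(config)
```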
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+ authenticate
+
+
+
+
+authenticate(url=None, user=None, passwd=None)
+
+
+
+
+
Set up authentication to an ODK Central server.
+
+
+
+
Parameters:
+
+
+
+ Name
+ Type
+ Description
+ Default
+
+
+
+
+ url
+
+ str
+
+
+
+
The URL of the ODK Central
+
+
+
+ None
+
+
+
+ user
+
+ str
+
+
+
+
The user's account name on ODK Central
+
+
+
+ None
+
+
+
+ passwd
+
+ str
+
+
+
+
The user's account password on ODK Central
+
+
+
+ None
+
+
+
+
+
+
+
+
Returns:
+
+
+
+ Type
+ Description
+
+
+
+
+
+ Response
+
+
+
+
A response from ODK Central after auth
+
+
+
+
+
+
+
+ Source code in osm_fieldwork/OdkCentral.py
+def authenticate(
+    self,
+    url: str = None,
+    user: str = None,
+    passwd: str = None,
+):
+    """Set up authentication to an ODK Central server.
+
+    Args:
+        url (str): The URL of the ODK Central
+        user (str): The user's account name on ODK Central
+        passwd (str): The user's account password on ODK Central
+
+    Returns:
+        (requests.Response): A response from ODK Central after auth
+    """
+    if not self.url:
+        self.url = url
+    if not self.user:
+        self.user = user
+    if not self.passwd:
+        self.passwd = passwd
+    # Enable persistent connection, create a cookie for this session
+    self.session.headers.update({"accept": "odkcentral"})
+
+    # Get a session token
+    try:
+        response = self.session.post(
+            f"{self.base}sessions",
+            json={
+                "email": self.user,
+                "password": self.passwd,
+            },
+        )
+    except requests.exceptions.ConnectionError as request_error:
+        # URL does not exist
+        raise ConnectionError("Failed to connect to Central. Is the URL valid?") from request_error
+
+    if response.status_code == 401:
+        # Unauthorized, invalid credentials
+        raise ConnectionError("ODK credentials are invalid, or may have changed. Please update them.") from None
+    elif not response.ok:
+        # Handle other errors
+        response.raise_for_status()
+
+    self.session.headers.update({"Authorization": f"Bearer {response.json().get('token')}"})
+
+    # Connect to the server
+    return self.session.get(self.url, verify=self.verify)
+
+
+
+
+
+
+
+
+
+
+
+
+
+ listProjects
+
+
+
+
+
+
+
+
Fetch a list of projects from an ODK Central server, and
+store it as an indexed list.
+
+
+
+
Returns:
+
+
+
+ Type
+ Description
+
+
+
+
+
+ list
+
+
+
+
A list of projects on an ODK Central server
+
+
+
+
+
+
+
+ Source code in osm_fieldwork/OdkCentral.py
+def listProjects(self):
+    """Fetch a list of projects from an ODK Central server, and
+    store it as an indexed list.
+
+    Returns:
+        (list): A list of projects on an ODK Central server
+    """
+    log.info("Getting a list of projects from %s" % self.url)
+    url = f"{self.base}projects"
+    result = self.session.get(url, verify=self.verify)
+    projects = result.json()
+    for project in projects:
+        if isinstance(project, dict):
+            if project.get("id") is not None:
+                self.projects[project["id"]] = project
+        else:
+            log.info("No projects returned. Is this a first run?")
+    return projects
+
+
+
+
+
+
+
+
+
+
+
+
+
+ createProject
+
+
+
+
+
+
+
+
Create a new project on an ODK Central server if it doesn't
+already exist.
+
+
+
+
Parameters:
+
+
+
+ Name
+ Type
+ Description
+ Default
+
+
+
+
+ name
+
+ str
+
+
+
+
The name for the new project
+
+
+
+ required
+
+
+
+
+
+
+
+
Returns:
+
+
+
+ Type
+ Description
+
+
+
+
+
+ json
+
+
+
+
The response from ODK Central
+
+
+
+
+
+
+
+ Source code in osm_fieldwork/OdkCentral.py
+def createProject(
+    self,
+    name: str,
+) -> dict:
+    """Create a new project on an ODK Central server if it doesn't
+    already exist.
+
+    Args:
+        name (str): The name for the new project
+
+    Returns:
+        (json): The response from ODK Central
+    """
+    log.debug(f"Checking if project named {name} exists already")
+    exists = self.findProject(name=name)
+    if exists:
+        log.debug(f"Project named {name} already exists.")
+        return exists
+    else:
+        url = f"{self.base}projects"
+        log.debug(f"POSTing project {name} to {url} with verify={self.verify}")
+        try:
+            result = self.session.post(url, json={"name": name}, verify=self.verify, timeout=4)
+            result.raise_for_status()
+        except requests.exceptions.RequestException as e:
+            log.error(e)
+            log.error("Failed to submit to ODKCentral")
+        json_response = result.json()
+        log.debug(f"Returned: {json_response}")
+        # update the internal list of projects
+        self.listProjects()
+        return json_response
+
+
+
+
+
+
+
+
+
+
+
+
+
+ deleteProject
+
+
+
+
+deleteProject(project_id)
+
+
+
+
+
Delete a project on an ODK Central server.
+
+
+
+
Parameters:
+
+
+
+ Name
+ Type
+ Description
+ Default
+
+
+
+
+ project_id
+
+ int
+
+
+
+
The ID of the project on ODK Central
+
+
+
+ required
+
+
+
+
+
+
+
+
Returns:
+
+
+
+ Type
+ Description
+
+
+
+
+
+ str
+
+
+
+
+
+
+
+
+
+ Source code in osm_fieldwork/OdkCentral.py
+def deleteProject(
+    self,
+    project_id: int,
+):
+    """Delete a project on an ODK Central server.
+
+    Args:
+        project_id (int): The ID of the project on ODK Central
+
+    Returns:
+        (str): The project name
+    """
+    url = f"{self.base}projects/{project_id}"
+    self.session.delete(url, verify=self.verify)
+    # update the internal list of projects
+    self.listProjects()
+    return self.findProject(project_id=project_id)
+
+
+
+
+
+
+
+
+
+
+
+
+
+ findProject
+
+
+
+
+findProject(name=None, project_id=None)
+
+
+
+
+
Get the project data from Central.
+
+
+
+
Parameters:
+
+
+
+ Name
+ Type
+ Description
+ Default
+
+
+
+
+ name
+
+ str
+
+
+
+
The name of the project
+
+
+
+ None
+
+
+
+
+
+
+
+
Returns:
+
+
+
+ Type
+ Description
+
+
+
+
+
+ dict
+
+
+
+
the project data from ODK Central
+
+
+
+
+
+
+
+ Source code in osm_fieldwork/OdkCentral.py
+def findProject(
+    self,
+    name: str = None,
+    project_id: int = None,
+):
+    """Get the project data from Central.
+
+    Args:
+        name (str): The name of the project
+
+    Returns:
+        (dict): the project data from ODK Central
+    """
+    # First, populate self.projects
+    self.listProjects()
+
+    if self.projects:
+        if name:
+            log.debug(f"Finding project by name: {name}")
+            for _key, value in self.projects.items():
+                if name == value["name"]:
+                    log.info(f"ODK project found: {name}")
+                    return value
+        if project_id:
+            log.debug(f"Finding project by id: {project_id}")
+            for _key, value in self.projects.items():
+                if project_id == value["id"]:
+                    log.info(f"ODK project found: {project_id}")
+                    return value
+    return None
+
+
+
+
+
+
+
+
+
+
+
+
+
+ findAppUser
+
+
+
+
+findAppUser(user_id, name=None)
+
+
+
+
+
Get the data for an app user.
+
+
+
+
Parameters:
+
+
+
+ Name
+ Type
+ Description
+ Default
+
+
+
+
+ user_id
+
+ int
+
+
+
+
The user ID of the app-user on ODK Central
+
+
+
+ required
+
+
+
+ name
+
+ str
+
+
+
+
The name of the app-user on ODK Central
+
+
+
+ None
+
+
+
+
+
+
+
+
Returns:
+
+
+
+ Type
+ Description
+
+
+
+
+
+ dict
+
+
+
+
The data for an app-user on ODK Central
+
+
+
+
+
+
+
+ Source code in osm_fieldwork/OdkCentral.py
+def findAppUser(
+    self,
+    user_id: int,
+    name: str = None,
+):
+    """Get the data for an app user.
+
+    Args:
+        user_id (int): The user ID of the app-user on ODK Central
+        name (str): The name of the app-user on ODK Central
+
+    Returns:
+        (dict): The data for an app-user on ODK Central
+    """
+    if self.appusers:
+        if name is not None:
+            result = [d for d in self.appusers if d["displayName"] == name]
+            if result:
+                return result[0]
+            else:
+                log.debug(f"No user found with name: {name}")
+                return None
+        if user_id is not None:
+            result = [d for d in self.appusers if d["id"] == user_id]
+            if result:
+                return result[0]
+            else:
+                log.debug(f"No user found with id: {user_id}")
+                return None
+    return None
+
+
+
+
+
+
+
+
+
+
+
+
+
+ listUsers
+
+
+
+
+
+
+
+
Fetch a list of users on the ODK Central server.
+
+
+
+
Returns:
+
+
+
+ Type
+ Description
+
+
+
+
+
+ list
+
+
+
+
A list of users on ODK Central, not app-users
+
+
+
+
+
+
+
+ Source code in osm_fieldwork/OdkCentral.py
+def listUsers(self):
+    """Fetch a list of users on the ODK Central server.
+
+    Returns:
+        (list): A list of users on ODK Central, not app-users
+    """
+    log.info("Getting a list of users from %s" % self.url)
+    url = self.base + "users"
+    result = self.session.get(url, verify=self.verify)
+    self.users = result.json()
+    return self.users
+
+
+
+
+
+
+
+
+
+
+
+
+
+ dump
+
+
+
+
+
+
+
+
Dump internal data structures, for debugging purposes only.
+
+
+ Source code in osm_fieldwork/OdkCentral.py
+def dump(self):
+    """Dump internal data structures, for debugging purposes only."""
+    # print("URL: %s" % self.url)
+    # print("User: %s" % self.user)
+    # print("Passwd: %s" % self.passwd)
+    print("REST URL: %s" % self.base)
+
+    print("There are %d projects on this server" % len(self.projects))
+    for id, data in self.projects.items():
+        print("\t%s: %s" % (id, data["name"]))
+    if self.users:
+        print("There are %d users on this server" % len(self.users))
+        for data in self.users:
+            print("\t%s: %s" % (data["id"], data["email"]))
+    else:
+        print("There are no users on this server")
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+ Bases: OdkCentral
+
+
+
Class to manipulate a project on an ODK Central server.
+
+
user (str): The user's account name on ODK Central
+passwd (str): The user's account password on ODK Central.
+
+
+
+
+
Returns:
+
+
+
+ Type
+ Description
+
+
+
+
+
+ OdkProject
+
+
+
+
An instance of this object
+
+
+
+
+
+
+
+ Source code in osm_fieldwork/OdkCentral.py
+def __init__(
+    self,
+    url: Optional[str] = None,
+    user: Optional[str] = None,
+    passwd: Optional[str] = None,
+):
+    """Args:
+        url (str): The URL of the ODK Central
+        user (str): The user's account name on ODK Central
+        passwd (str): The user's account password on ODK Central.
+
+    Returns:
+        (OdkProject): An instance of this object
+    """
+    super().__init__(url, user, passwd)
+    self.forms = list()
+    self.submissions = list()
+    self.data = None
+    self.appusers = None
+    self.id = None
+
+ getData
+
+ getData(keyword)
+
+ Parameters:
+     keyword (str): The keyword to search for. [required]
+
+ Returns:
+     (json): The data for the keyword
+
+ Source code in osm_fieldwork/OdkCentral.py
+ def getData(
+     self,
+     keyword: str,
+ ):
+     """Args:
+         keyword (str): The keyword to search for.
+
+     Returns:
+         (json): The data for the keyword
+     """
+     return self.data[keyword]
+
listForms(project_id, metadata=False)
+
+ Fetch a list of forms in a project on an ODK Central server.
+
+ Parameters:
+     project_id (int): The ID of the project on ODK Central. [required]
+     metadata (bool): Whether to request extended metadata. [default: False]
+
+ Returns:
+     (list): The list of XForms in this project
+
+ Source code in osm_fieldwork/OdkCentral.py
+ def listForms(self, project_id: int, metadata: bool = False):
+     """Fetch a list of forms in a project on an ODK Central server.
+
+     Args:
+         project_id (int): The ID of the project on ODK Central
+
+     Returns:
+         (list): The list of XForms in this project
+     """
+     url = f"{self.base}projects/{project_id}/forms"
+     if metadata:
+         self.session.headers.update({"X-Extended-Metadata": "true"})
+     result = self.session.get(url, verify=self.verify)
+     self.forms = result.json()
+     return self.forms
+
+ getAllSubmissions
+
getAllSubmissions(project_id, xforms=None, filters=None)
+
+ Fetch a list of submissions in a project on an ODK Central server.
+
+ Parameters:
+     project_id (int): The ID of the project on ODK Central. [required]
+     xforms (list): The list of XForms to get the submissions of. [default: None]
+
+ Returns:
+     (json): All of the submissions for all of the XForms in a project
+
+ Source code in osm_fieldwork/OdkCentral.py
+ def getAllSubmissions(self, project_id: int, xforms: list = None, filters: dict = None):
+     """Fetch a list of submissions in a project on an ODK Central server.
+
+     Args:
+         project_id (int): The ID of the project on ODK Central
+         xforms (list): The list of XForms to get the submissions of
+
+     Returns:
+         (json): All of the submissions for all of the XForms in a project
+     """
+     # The number of threads is based on the CPU cores
+     info = get_cpu_info()
+     self.cores = info["count"]
+
+     timer = Timer(text="getAllSubmissions() took {seconds:.0f} s")
+     timer.start()
+     if not xforms:
+         xforms_data = self.listForms(project_id)
+         xforms = [d["xmlFormId"] for d in xforms_data]
+
+     chunk = round(len(xforms) / self.cores) if round(len(xforms) / self.cores) > 0 else 1
+     last_slice = len(xforms) if len(xforms) % chunk == 0 else len(xforms) - 1
+     cycle = range(0, (last_slice + chunk) + 1, chunk)
+     future = None
+     result = None
+     previous = 0
+     newdata = list()
+
+     # single threaded for easier debugging
+     # for current in cycle:
+     #     if previous == current:
+     #         continue
+     #     result = downloadThread(project_id, xforms[previous:current])
+     #     previous = current
+     #     newdata += result
+
+     odk_credentials = {"url": self.url, "user": self.user, "passwd": self.passwd}
+
+     with concurrent.futures.ThreadPoolExecutor(max_workers=self.cores) as executor:
+         futures = list()
+         for current in cycle:
+             if previous == current:
+                 continue
+             result = executor.submit(downloadThread, project_id, xforms[previous:current], odk_credentials, filters)
+             previous = current
+             futures.append(result)
+         for future in concurrent.futures.as_completed(futures):
+             log.debug("Waiting for thread to complete..")
+             data = future.result(timeout=10)
+             if len(data) > 0:
+                 newdata += data
+     timer.stop()
+     return newdata
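The batching arithmetic in this method is easy to check in isolation. A minimal standalone sketch (the helper name `form_slices` and the sample form IDs are illustrative, not part of the library):

```python
def form_slices(xforms, cores):
    # Same chunking arithmetic as getAllSubmissions(): split the list of
    # form IDs into roughly equal batches, one per worker thread.
    chunk = round(len(xforms) / cores) if round(len(xforms) / cores) > 0 else 1
    last_slice = len(xforms) if len(xforms) % chunk == 0 else len(xforms) - 1
    slices = []
    previous = 0
    for current in range(0, (last_slice + chunk) + 1, chunk):
        if previous == current:
            continue
        slices.append(xforms[previous:current])
        previous = current
    return slices

print(form_slices(["form_a", "form_b", "form_c", "form_d", "form_e"], 2))
# → [['form_a', 'form_b'], ['form_c', 'form_d'], ['form_e']]
```

Each batch is then handed to one `downloadThread` worker via `executor.submit`.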
+
+ listAppUsers
+
listAppUsers(projectId)
+
+ Fetch a list of app users for a project from an ODK Central server.
+
+ Parameters:
+     projectId (int): The ID of the project on ODK Central. [required]
+
+ Returns:
+     (list): A list of app-users on ODK Central for this project
+
+ Source code in osm_fieldwork/OdkCentral.py
+ def listAppUsers(
+     self,
+     projectId: int,
+ ):
+     """Fetch a list of app users for a project from an ODK Central server.
+
+     Args:
+         projectId (int): The ID of the project on ODK Central
+
+     Returns:
+         (list): A list of app-users on ODK Central for this project
+     """
+     url = f"{self.base}projects/{projectId}/app-users"
+     result = self.session.get(url, verify=self.verify)
+     self.appusers = result.json()
+     return self.appusers
+
+ listAssignments
+
listAssignments(projectId)
+
+ List the Role & Actor assignments for users on a project.
+
+ Parameters:
+     projectId (int): The ID of the project on ODK Central. [required]
+
+ Returns:
+     (json): The list of assignments
+
+ Source code in osm_fieldwork/OdkCentral.py
+ def listAssignments(
+     self,
+     projectId: int,
+ ):
+     """List the Role & Actor assignments for users on a project.
+
+     Args:
+         projectId (int): The ID of the project on ODK Central
+
+     Returns:
+         (json): The list of assignments
+     """
+     url = f"{self.base}projects/{projectId}/assignments"
+     result = self.session.get(url, verify=self.verify)
+     return result.json()
+
+ getDetails
+
+ getDetails(projectId)
+
Get all the details for a project on an ODK Central server.
+
+ Parameters:
+     projectId (int): The ID of the project on ODK Central. [required]
+
+ Returns:
+     (json): The data about a project on ODK Central
+
+ Source code in osm_fieldwork/OdkCentral.py
+ def getDetails(
+     self,
+     projectId: int,
+ ):
+     """Get all the details for a project on an ODK Central server.
+
+     Args:
+         projectId (int): The ID of the project on ODK Central
+
+     Returns:
+         (json): The data about a project on ODK Central
+     """
+     url = f"{self.base}projects/{projectId}"
+     result = self.session.get(url, verify=self.verify)
+     self.data = result.json()
+     return self.data
+
+ getFullDetails
+
getFullDetails ( projectId )
+
+ Get extended details for a project on an ODK Central server.
+
+ Parameters:
+     projectId (int): The ID of the project on ODK Central. [required]
+
+ Returns:
+     (json): The data about a project on ODK Central
+
+ Source code in osm_fieldwork/OdkCentral.py
+ def getFullDetails(
+     self,
+     projectId: int,
+ ):
+     """Get extended details for a project on an ODK Central server.
+
+     Args:
+         projectId (int): The ID of the project on ODK Central
+
+     Returns:
+         (json): The data about a project on ODK Central
+     """
+     url = f"{self.base}projects/{projectId}"
+     self.session.headers.update({"X-Extended-Metadata": "true"})
+     result = self.session.get(url, verify=self.verify)
+     return result.json()
+
+ dump
+
Dump internal data structures, for debugging purposes only.
+
+
+ Source code in osm_fieldwork/OdkCentral.py
+ def dump(self):
+     """Dump internal data structures, for debugging purposes only."""
+     super().dump()
+     if self.forms:
+         print("There are %d forms in this project" % len(self.forms))
+         for data in self.forms:
+             print("\t%s (%s): %s" % (data["xmlFormId"], data["version"], data["name"]))
+     if self.data:
+         print("Project ID: %s" % self.data["id"])
+     print("There are %d submissions in this project" % len(self.submissions))
+     for data in self.submissions:
+         print("\t%s: %s" % (data["instanceId"], data["createdAt"]))
+     print("There are %d app users in this project" % len(self.appusers))
+     for data in self.appusers:
+         print("\t%s: %s" % (data["id"], data["displayName"]))
+
+ updateReviewState
+
updateReviewState(projectId, xmlFormId, instanceId, review_state)
+
+ Updates the review state of a submission in ODK Central.
+
+ Parameters:
+     projectId (int): The ID of the odk project. [required]
+     xmlFormId (str): The ID of the form. [required]
+     instanceId (str): The ID of the submission instance. [required]
+     review_state (dict): The updated review state. [required]
+
+ Source code in osm_fieldwork/OdkCentral.py
+ def updateReviewState(self, projectId: int, xmlFormId: str, instanceId: str, review_state: dict) -> dict:
+     """Updates the review state of a submission in ODK Central.
+
+     Args:
+         projectId (int): The ID of the odk project.
+         xmlFormId (str): The ID of the form.
+         instanceId (str): The ID of the submission instance.
+         review_state (dict): The updated review state.
+     """
+     try:
+         url = f"{self.base}projects/{projectId}/forms/{xmlFormId}/submissions/{instanceId}"
+         result = self.session.patch(url, json=review_state)
+         result.raise_for_status()
+         return result.json()
+     except Exception as e:
+         log.error(f"Error updating review state: {e}")
+         return {}
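As a sketch of what a call sends: ODK Central accepts a PATCH body with a `reviewState` key (values such as `approved`, `hasIssues`, or `rejected`). The server URL and IDs below are illustrative placeholders, not real values:

```python
# Illustrative values only; not a live server.
base = "https://central.example.org/v1/"
projectId, xmlFormId, instanceId = 52, "buildings", "uuid:6c1f3c"

# The URL updateReviewState() would PATCH, with the review-state payload.
url = f"{base}projects/{projectId}/forms/{xmlFormId}/submissions/{instanceId}"
review_state = {"reviewState": "approved"}
print(url)
# → https://central.example.org/v1/projects/52/forms/buildings/submissions/uuid:6c1f3c
```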
+
+ options:
+ show_source: false
+ heading_level: 3
+
+ Bases: OdkCentral
+
+
+
Class to manipulate a form on an ODK Central server.
+
+ Args:
+     url (str): The URL of the ODK Central server
+     user (str): The user's account name on ODK Central
+     passwd (str): The user's account password on ODK Central
+
+ Returns:
+     (OdkForm): An instance of this object
+
+ Source code in osm_fieldwork/OdkCentral.py
+ def __init__(
+     self,
+     url: Optional[str] = None,
+     user: Optional[str] = None,
+     passwd: Optional[str] = None,
+ ):
+     """Args:
+         url (str): The URL of the ODK Central
+         user (str): The user's account name on ODK Central
+         passwd (str): The user's account password on ODK Central.
+
+     Returns:
+         (OdkForm): An instance of this object
+     """
+     super().__init__(url, user, passwd)
+     self.name = None
+     # Draft is for a form that isn't published yet
+     self.draft = False
+     self.published = False
+     # this is only populated if self.getDetails() is called first.
+     self.data = {}
+     self.attach = []
+     self.media = {}
+     self.xml = None
+     self.submissions = []
+     self.appusers = {}
+
getDetails(projectId, xform)
+
+ Get all the details for a form on an ODK Central server.
+
+ Parameters:
+     projectId (int): The ID of the project on ODK Central. [required]
+     xform (str): The XForm to get the details of from ODK Central. [required]
+
+ Returns:
+     (json): The data for this XForm
+
+ Source code in osm_fieldwork/OdkCentral.py
+ def getDetails(
+     self,
+     projectId: int,
+     xform: str,
+ ):
+     """Get all the details for a form on an ODK Central server.
+
+     Args:
+         projectId (int): The ID of the project on ODK Central
+         xform (str): The XForm to get the details of from ODK Central
+
+     Returns:
+         (json): The data for this XForm
+     """
+     url = f"{self.base}projects/{projectId}/forms/{xform}"
+     result = self.session.get(url, verify=self.verify)
+     self.data = result.json()
+     return result
+
getFullDetails(projectId, xform)
+
+ Get the full details for a form on an ODK Central server.
+
+ Parameters:
+     projectId (int): The ID of the project on ODK Central. [required]
+     xform (str): The XForm to get the details of from ODK Central. [required]
+
+ Returns:
+     (json): The data for this XForm
+
+ Source code in osm_fieldwork/OdkCentral.py
+ def getFullDetails(
+     self,
+     projectId: int,
+     xform: str,
+ ):
+     """Get the full details for a form on an ODK Central server.
+
+     Args:
+         projectId (int): The ID of the project on ODK Central
+         xform (str): The XForm to get the details of from ODK Central
+
+     Returns:
+         (json): The data for this XForm
+     """
+     url = f"{self.base}projects/{projectId}/forms/{xform}"
+     self.session.headers.update({"X-Extended-Metadata": "true"})
+     result = self.session.get(url, verify=self.verify)
+     return result.json()
+
listSubmissionBasicInfo(projectId, xform)
+
+ Fetch basic information about the submission instances for a given form.
+
+ Parameters:
+     projectId (int): The ID of the project on ODK Central. [required]
+     xform (str): The XForm to get the details of from ODK Central. [required]
+
+ Returns:
+     (json): The basic submission information for this XForm
+
+ Source code in osm_fieldwork/OdkCentral.py
+ def listSubmissionBasicInfo(
+     self,
+     projectId: int,
+     xform: str,
+ ):
+     """Fetch basic information about the submission instances for a given form.
+
+     Args:
+         projectId (int): The ID of the project on ODK Central
+         xform (str): The XForm to get the details of from ODK Central
+
+     Returns:
+         (json): The basic submission information for this XForm
+     """
+     url = f"{self.base}projects/{projectId}/forms/{xform}/submissions"
+     result = self.session.get(url, verify=self.verify)
+     return result.json()
+
listSubmissions(projectId, xform, filters=None)
+
+ Fetch a list of submission instances for a given form.
+
+ Returns data in the format:
+
+     {
+         "value": [],
+         "@odata.context": "URL/v1/projects/52/forms/103.svc/$metadata#Submissions",
+         "@odata.count": 0
+     }
+
+ Parameters:
+     projectId (int): The ID of the project on ODK Central. [required]
+     xform (str): The XForm to get the details of from ODK Central. [required]
+     filters (dict): OData query parameters passed through to the endpoint. [default: None]
+
+ Returns:
+     (json): The JSON of Submissions.
+
+ Source code in osm_fieldwork/OdkCentral.py
+ def listSubmissions(self, projectId: int, xform: str, filters: dict = None):
+     """Fetch a list of submission instances for a given form.
+
+     Returns data in format:
+
+     {
+         "value": [],
+         "@odata.context": "URL/v1/projects/52/forms/103.svc/$metadata#Submissions",
+         "@odata.count": 0
+     }
+
+     Args:
+         projectId (int): The ID of the project on ODK Central
+         xform (str): The XForm to get the details of from ODK Central
+
+     Returns:
+         (json): The JSON of Submissions.
+     """
+     url = f"{self.base}projects/{projectId}/forms/{xform}.svc/Submissions"
+     try:
+         result = self.session.get(url, params=filters, verify=self.verify)
+         result.raise_for_status()  # Raise an error for non-2xx status codes
+         self.submissions = result.json()
+         return self.submissions
+     except Exception as e:
+         log.error(f"Error fetching submissions: {e}")
+         return {}
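The `filters` dict is forwarded verbatim as query parameters on the OData `.svc` endpoint. A small sketch of building OData query options (the specific `$filter` expression is a hypothetical example, not from the library):

```python
from urllib.parse import urlencode

# Hypothetical OData query options; the ODK Central OData endpoint
# supports options such as $top, $skip, $count and $filter.
filters = {
    "$top": 100,
    "$count": "true",
    "$filter": "__system/submissionDate ge 2023-01-01T00:00:00Z",
}
# requests encodes this the same way when passed as params=filters.
query = urlencode(filters)
print(query)
```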
+
listAssignments(projectId, xform)
+
+ List the Role & Actor assignments for users on a form.
+
+ Parameters:
+     projectId (int): The ID of the project on ODK Central. [required]
+     xform (str): The XForm to get the details of from ODK Central. [required]
+
+ Returns:
+     (json): The list of assignments for this XForm
+
+ Source code in osm_fieldwork/OdkCentral.py
+ def listAssignments(
+     self,
+     projectId: int,
+     xform: str,
+ ):
+     """List the Role & Actor assignments for users on a form.
+
+     Args:
+         projectId (int): The ID of the project on ODK Central
+         xform (str): The XForm to get the details of from ODK Central
+
+     Returns:
+         (json): The list of assignments for this XForm
+     """
+     url = f"{self.base}projects/{projectId}/forms/{xform}/assignments"
+     result = self.session.get(url, verify=self.verify)
+     return result.json()
+
getSubmissions(projectId, xform, submission_id, disk=False, json=True)
+
+ Fetch a CSV or JSON file of the submissions without media to a survey form.
+
+ Parameters:
+     projectId (int): The ID of the project on ODK Central. [required]
+     xform (str): The XForm to get the details of from ODK Central. [required]
+     submission_id (int): The ID of the submissions to download. [required]
+     disk (bool): Whether to write the downloaded file to disk. [default: False]
+     json (bool): Download JSON or CSV format. [default: True]
+
+ Returns:
+     (bytes): The list of submissions as a JSON or CSV bytes object.
+
+ Source code in osm_fieldwork/OdkCentral.py
+ def getSubmissions(
+     self,
+     projectId: int,
+     xform: str,
+     submission_id: int,
+     disk: bool = False,
+     json: bool = True,
+ ):
+     """Fetch a CSV or JSON file of the submissions without media to a survey form.
+
+     Args:
+         projectId (int): The ID of the project on ODK Central
+         xform (str): The XForm to get the details of from ODK Central
+         submission_id (int): The ID of the submissions to download
+         disk (bool): Whether to write the downloaded file to disk
+         json (bool): Download JSON or CSV format
+
+     Returns:
+         (bytes): The list of submissions as JSON or CSV bytes object.
+     """
+     now = datetime.now()
+     timestamp = f"{now.year}_{now.hour}_{now.minute}"
+
+     if json:
+         url = self.base + f"projects/{projectId}/forms/{xform}.svc/Submissions"
+         filespec = f"{xform}_{timestamp}.json"
+     else:
+         url = self.base + f"projects/{projectId}/forms/{xform}/submissions"
+         filespec = f"{xform}_{timestamp}.csv"
+
+     if submission_id:
+         url = url + f"('{submission_id}')"
+
+     # log.debug(f'Getting submissions for {projectId}, Form {xform}')
+     result = self.session.get(
+         url,
+         headers=dict({"Content-Type": "application/json", "accept": "odkcentral"}, **self.session.headers),
+         verify=self.verify,
+     )
+     if result.status_code == 200:
+         if disk:
+             # id = self.forms[0]['xmlFormId']
+             try:
+                 file = open(filespec, "xb")
+                 file.write(result.content)
+             except FileExistsError:
+                 file = open(filespec, "wb")
+                 file.write(result.content)
+             log.info("Wrote output file %s" % filespec)
+             file.close()
+         return result.content
+     else:
+         log.error(f"Submissions for {projectId}, Form {xform}" + " doesn't exist")
+         return bytes()
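Note how the output filename is stamped: only year, hour and minute are used (month and day are not part of the timestamp). A quick sketch with a fixed time (the form name "buildings" is illustrative):

```python
from datetime import datetime

# Fixed time for illustration; getSubmissions() uses datetime.now().
now = datetime(2023, 5, 17, 14, 30)
timestamp = f"{now.year}_{now.hour}_{now.minute}"
filespec = f"buildings_{timestamp}.json"
print(filespec)  # → buildings_2023_14_30.json
```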
+
getSubmissionMedia(projectId, xform)
+
+ Fetch a ZIP file of the submissions with media to a survey form.
+
+ Parameters:
+     projectId (int): The ID of the project on ODK Central. [required]
+     xform (str): The XForm to get the details of from ODK Central. [required]
+
+ Returns:
+     (list): The media file
+
+ Source code in osm_fieldwork/OdkCentral.py
+ def getSubmissionMedia(
+     self,
+     projectId: int,
+     xform: str,
+ ):
+     """Fetch a ZIP file of the submissions with media to a survey form.
+
+     Args:
+         projectId (int): The ID of the project on ODK Central
+         xform (str): The XForm to get the details of from ODK Central
+
+     Returns:
+         (list): The media file
+     """
+     url = self.base + f"projects/{projectId}/forms/{xform}/submissions.csv.zip"
+     result = self.session.get(url, verify=self.verify)
+     return result
+
getSubmissionPhoto(projectId, instanceID, xform, filename)
+
+ Fetch a specific attachment by filename from a submission to a form.
+
+ Parameters:
+     projectId (int): The ID of the project on ODK Central. [required]
+     instanceID (str): The ID of the submission on ODK Central. [required]
+     xform (str): The XForm to get the details of from ODK Central. [required]
+     filename (str): The name of the attachment for the XForm on ODK Central. [required]
+
+ Returns:
+     (bytes): The media data
+
+ Source code in osm_fieldwork/OdkCentral.py
+ def getSubmissionPhoto(
+     self,
+     projectId: int,
+     instanceID: str,
+     xform: str,
+     filename: str,
+ ):
+     """Fetch a specific attachment by filename from a submission to a form.
+
+     Args:
+         projectId (int): The ID of the project on ODK Central
+         instanceID (str): The ID of the submission on ODK Central
+         xform (str): The XForm to get the details of from ODK Central
+         filename (str): The name of the attachment for the XForm on ODK Central
+
+     Returns:
+         (bytes): The media data
+     """
+     url = f"{self.base}projects/{projectId}/forms/{xform}/submissions/{instanceID}/attachments/{filename}"
+     result = self.session.get(url, verify=self.verify)
+     if result.status_code == 200:
+         log.debug(f"fetched {filename} from Central")
+     else:
+         status = result.json()
+         log.error(f"Couldn't fetch {filename} from Central: {status['message']}")
+     return result.content
+
addMedia(media, filespec)
+
+ Add a data file to this form.
+
+ Parameters:
+     media (bytes): The media file. [required]
+     filespec (str): The name of the media. [required]
+
+ Source code in osm_fieldwork/OdkCentral.py
+ def addMedia(
+     self,
+     media: bytes,
+     filespec: str,
+ ):
+     """Add a data file to this form.
+
+     Args:
+         media (bytes): The media file
+         filespec (str): The name of the media
+     """
+     # FIXME: this also needs the data
+     self.media[filespec] = media
+
addXMLForm(projectId, xmlFormId, xform)
+
+ Add an XML file to this form.
+
+ Parameters:
+     projectId (int): The ID of the project on ODK Central. [required]
+     xmlFormId (int): The ID of the form on ODK Central. [required]
+     xform (str): The XForm to get the details of from ODK Central. [required]
+
+ Source code in osm_fieldwork/OdkCentral.py
+ def addXMLForm(
+     self,
+     projectId: int,
+     xmlFormId: int,
+     xform: str,
+ ):
+     """Add an XML file to this form.
+
+     Args:
+         projectId (int): The ID of the project on ODK Central
+         xform (str): The XForm to get the details of from ODK Central
+     """
+     self.xml = xform
+
listMedia(projectId, xform)
+
+ List all the attachments for this form.
+
+ Parameters:
+     projectId (int): The ID of the project on ODK Central. [required]
+     xform (str): The XForm to get the details of from ODK Central. [required]
+
+ Returns:
+     (list): A list of all the media files for this project
+
+ Source code in osm_fieldwork/OdkCentral.py
+ def listMedia(
+     self,
+     projectId: int,
+     xform: str,
+ ):
+     """List all the attachments for this form.
+
+     Args:
+         projectId (int): The ID of the project on ODK Central
+         xform (str): The XForm to get the details of from ODK Central
+
+     Returns:
+         (list): A list of all the media files for this project
+     """
+     if self.draft:
+         url = f"{self.base}projects/{projectId}/forms/{xform}/draft/attachments"
+     else:
+         url = f"{self.base}projects/{projectId}/forms/{xform}/attachments"
+     result = self.session.get(url, verify=self.verify)
+     self.media = result.json()
+     return self.media
+
validateMedia(filename)
+
+ Validate that the specified filename is present in the XForm.
+
+ Source code in osm_fieldwork/OdkCentral.py
+ def validateMedia(self, filename: str):
+     """Validate that the specified filename is present in the XForm."""
+     if not self.xml:
+         return
+     xform_filenames = []
+     namespaces = {
+         "h": "http://www.w3.org/1999/xhtml",
+         "odk": "http://www.opendatakit.org/xforms",
+         "xforms": "http://www.w3.org/2002/xforms",
+     }
+
+     root = ElementTree.fromstring(self.xml)
+     instances = root.findall(".//xforms:model/xforms:instance[@src]", namespaces)
+
+     for inst in instances:
+         src_value = inst.attrib.get("src", "")
+         if src_value.startswith("jr://"):
+             src_value = src_value[len("jr://"):]  # Remove jr:// prefix
+         if src_value.startswith("file/"):
+             src_value = src_value[len("file/"):]  # Remove file/ prefix
+         xform_filenames.append(src_value)
+
+     if filename not in xform_filenames:
+         log.error(f"Filename ({filename}) is not present in XForm media: {xform_filenames}")
+         return False
+
+     return True
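The prefix normalization at the heart of this check is a small pure transform, sketched standalone below (the helper name `strip_jr_prefix` and the sample filename are illustrative):

```python
def strip_jr_prefix(src_value):
    # Same prefix handling as validateMedia(): media references in an
    # XForm instance src attribute look like "jr://file/households.csv".
    if src_value.startswith("jr://"):
        src_value = src_value[len("jr://"):]
    if src_value.startswith("file/"):
        src_value = src_value[len("file/"):]
    return src_value

print(strip_jr_prefix("jr://file/households.csv"))  # → households.csv
```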
+
uploadMedia ( projectId , form_name , data , filename = None )
+
+ Upload an attachment to the ODK Central server.
+
+ Parameters:
+     projectId (int): The ID of the project on ODK Central. [required]
+     form_name (str): The XForm to get the details of from ODK Central. [required]
+     data (str | Path | BytesIO): The file path or BytesIO media file. [required]
+     filename (str): If a BytesIO object is used, provide a file name. [default: None]
+
+ Returns:
+     result (requests.Response): The response object.
+
+ Source code in osm_fieldwork/OdkCentral.py
+ 914
+915
+916
+917
+918
+919
+920
+921
+922
+923
+924
+925
+926
+927
+928
+929
+930
+931
+932
+933
+934
+935
+936
+937
+938
+939
+940
+941
+942
+943
+944
+945
+946
+947
+948
+949
+950
+951
+952
+953
+954
+955
+956
+957
+958
+959
+960
+961
+962
+963
+964
+965
+966
+967
+968
+969
+970
+971
+972
+973
+974
+975
+976
+977
+978
+979
+980
+981
+982
+983
+984 def uploadMedia (
+ self ,
+ projectId : int ,
+ form_name : str ,
+ data : Union [ str , Path , BytesIO ],
    filename: Optional[str] = None,
) -> Optional[requests.Response]:
    """Upload an attachment to the ODK Central server.

    Args:
        projectId (int): The ID of the project on ODK Central
        form_name (str): The XForm to get the details of from ODK Central
        data (str, Path, BytesIO): The file path or BytesIO media file
        filename (str): If a BytesIO object is used, provide a file name.

    Returns:
        result (requests.Response): The response object.
    """
    # BytesIO memory object
    if isinstance(data, BytesIO):
        if filename is None:
            log.error("Cannot pass a BytesIO object without the filename arg")
            return None
        media = data.getvalue()
    # Filepath
    elif isinstance(data, (str, Path)):
        media_file_path = Path(data)
        if not media_file_path.exists():
            log.error(f"File does not exist on disk: {data}")
            return None
        with open(media_file_path, "rb") as file:
            media = file.read()
        filename = str(media_file_path.name)

    # Validate filename present in XForm
    if self.xml:
        if not self.validateMedia(filename):
            return None

    # Must first convert to draft if already published
    if not self.draft or self.published:
        # TODO should this use self.createForm?
        log.debug(f"Updating form ({form_name}) to draft")
        url = f"{self.base}projects/{projectId}/forms/{form_name}/draft?ignoreWarnings=true"
        result = self.session.post(url, verify=self.verify)
        if result.status_code != 200:
            status = result.json()
            log.error(f"Couldn't modify {form_name} to draft: {status['message']}")
            return None

    # Upload the media
    url = f"{self.base}projects/{projectId}/forms/{form_name}/draft/attachments/{filename}"
    log.debug(f"Uploading media to URL: {url}")
    result = self.session.post(
        url, data=media, headers=dict({"Content-Type": "*/*"}, **self.session.headers), verify=self.verify
    )

    if result.status_code == 200:
        log.debug(f"Uploaded {filename} to Central")
    else:
        status = result.json()
        log.error(f"Couldn't upload {filename} to Central: {status['message']}")
        return None

    # Publish the draft by default
    if self.published:
        self.publishForm(projectId, form_name)

    self.addMedia(media, filename)

    return result
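uploadMedia accepts either a filesystem path or an in-memory BytesIO object, and only the latter needs an explicit filename. A minimal standalone sketch of that dispatch (the `load_media` helper name is ours, not part of the library):

```python
from io import BytesIO
from pathlib import Path
from typing import Optional, Tuple, Union


def load_media(data: Union[str, Path, BytesIO], filename: Optional[str] = None) -> Optional[Tuple[bytes, str]]:
    """Resolve raw bytes plus a filename from a path or an in-memory object."""
    if isinstance(data, BytesIO):
        # An in-memory object carries no name, so the caller must supply one
        if filename is None:
            return None
        return data.getvalue(), filename
    path = Path(data)
    if not path.exists():
        return None
    return path.read_bytes(), path.name
```

Returning None mirrors the method's early-exit error handling rather than raising.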

getMedia(projectId, xform, filename)

Fetch a specific attachment by filename from a submission to a form.

Parameters:
    projectId (int): The ID of the project on ODK Central (required)
    xform (str): The XForm to get the details of from ODK Central (required)
    filename (str): The name of the attachment for the XForm on ODK Central (required)

Returns:
    (bytes): The media data

Source code in osm_fieldwork/OdkCentral.py

def getMedia(
    self,
    projectId: int,
    xform: str,
    filename: str,
):
    """Fetch a specific attachment by filename from a submission to a form.

    Args:
        projectId (int): The ID of the project on ODK Central
        xform (str): The XForm to get the details of from ODK Central
        filename (str): The name of the attachment for the XForm on ODK Central

    Returns:
        (bytes): The media data
    """
    if self.draft:
        url = f"{self.base}projects/{projectId}/forms/{xform}/draft/attachments/{filename}"
    else:
        url = f"{self.base}projects/{projectId}/forms/{xform}/attachments/{filename}"
    result = self.session.get(url, verify=self.verify)
    if result.status_code == 200:
        log.debug(f"fetched {filename} from Central")
    else:
        status = result.json()
        log.error(f"Couldn't fetch {filename} from Central: {status['message']}")
    self.addMedia(result.content, filename)
    return self.media
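The only branching in getMedia is the choice between the draft and the published attachment endpoints. A sketch of that URL selection (the helper name and example server are illustrative):

```python
def media_url(base: str, project_id: int, xform: str, filename: str, draft: bool) -> str:
    """Build the Central attachment endpoint, adding the draft segment for unpublished forms."""
    stage = "draft/attachments" if draft else "attachments"
    return f"{base}projects/{project_id}/forms/{xform}/{stage}/{filename}"
```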
+
+
+
+
+
+
+
+
+
+
+
+
+
+
createForm(projectId, data, form_name=None, publish=False)

Create a new form on an ODK Central server.

- If no form_name is passed, the form name is generated by default in draft.
  If the publish param is also passed, then the form is published.
- If form_name is passed, a new form is created from this in draft state.
  This copies across all attachments.

Note:
    The form name (xmlFormId) is generated from the id="…" attribute
    immediately inside the <instance> tag of the XForm XML.

Parameters:
    projectId (int): The ID of the project on ODK Central (required)
    form_name (str): The XForm to get the details of from ODK Central (default None)
    data (str, Path, BytesIO): The XForm file path, or BytesIO memory obj (required)
    publish (bool): If the new form should be published.
        Only valid if form_name is not passed, i.e. a new form. (default False)

Returns:
    (str, Optional): The form name, else None on failure.

Source code in osm_fieldwork/OdkCentral.py

def createForm(
    self,
    projectId: int,
    data: Union[str, Path, BytesIO],
    form_name: Optional[str] = None,
    publish: Optional[bool] = False,
) -> Optional[str]:
    """Create a new form on an ODK Central server.

    - If no form_name is passed, the form name is generated by default in draft.
      If the publish param is also passed, then the form is published.
    - If form_name is passed, a new form is created from this in draft state.
      This copies across all attachments.

    Note:
        The form name (xmlFormId) is generated from the id="…" attribute
        immediately inside the <instance> tag of the XForm XML.

    Args:
        projectId (int): The ID of the project on ODK Central
        form_name (str): The XForm to get the details of from ODK Central
        data (str, Path, BytesIO): The XForm file path, or BytesIO memory obj
        publish (bool): If the new form should be published.
            Only valid if form_name is not passed, i.e. a new form.

    Returns:
        (str, Optional): The form name, else None on failure.
    """
    # BytesIO memory object
    if isinstance(data, BytesIO):
        self.xml = data.getvalue().decode("utf-8")
    # Filepath
    elif isinstance(data, (str, Path)):
        xml_path = Path(data)
        if not xml_path.exists():
            log.error(f"File does not exist on disk: {data}")
            return None
        # Read the XML or XLS file
        with open(xml_path, "rb") as xml_file:
            self.xml = xml_file.read()
        log.debug("Read %d bytes from %s" % (len(self.xml), data))

    if form_name or self.draft:
        self.draft = True
        log.debug(f"Creating draft from template form: {form_name}")
        url = f"{self.base}projects/{projectId}/forms/{form_name}/draft?ignoreWarnings=true"
    else:
        # This is not a draft form, it's an entirely new form (even if publish=false)
        log.debug("Creating new form, with name determined from form_id field")
        self.published = True if publish else False
        url = f"{self.base}projects/{projectId}/forms?ignoreWarnings=true&{'publish=true' if publish else ''}"

    result = self.session.post(
        url, data=self.xml, headers=dict({"Content-Type": "application/xml"}, **self.session.headers), verify=self.verify
    )

    if result.status_code != 200:
        try:
            status = result.json()
            msg = status.get("message", "Unknown error")
            if result.status_code == 409:
                log.warning(msg)
                last_full_stop_index = msg.rfind(".")
                last_comma_index = msg.rfind(",")
                if last_full_stop_index != -1 and last_comma_index != -1:
                    # Extract xmlFormId from error msg
                    xmlFormId = msg[last_comma_index + 1 : last_full_stop_index].strip()
                    return xmlFormId
                else:
                    log.warning("Unable to extract xmlFormId from error message")
                    return None
            else:
                log.error(f"Couldn't create {form_name} on Central: {msg}")
                return None
        except json.decoder.JSONDecodeError:
            log.error(f"Couldn't create {form_name} on Central: Error decoding JSON response")
            return None

    try:
        # Log response to terminal
        json_data = result.json()
    except json.decoder.JSONDecodeError:
        log.error("Could not parse response json during form creation")
        return None

    # FIXME: should update self.forms with the new form

    if "success" in json_data:
        log.debug(f"Created draft XForm on ODK server: ({form_name})")
        return form_name

    new_form_name = json_data.get("xmlFormId")
    log.info(f"Created XForm on ODK server: ({new_form_name})")
    return new_form_name
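On a 409 conflict, createForm recovers the existing xmlFormId by slicing the error message between its last comma and its last full stop. That parsing in isolation (helper name ours; the sample message text is illustrative, not a verbatim Central response):

```python
from typing import Optional


def xml_form_id_from_conflict(msg: str) -> Optional[str]:
    """Extract the text between the last comma and the last full stop, as createForm does."""
    last_full_stop = msg.rfind(".")
    last_comma = msg.rfind(",")
    if last_full_stop == -1 or last_comma == -1:
        return None
    return msg[last_comma + 1 : last_full_stop].strip()
```

Note this is fragile if the message contains extra punctuation, which is presumably why the method logs a warning when extraction fails.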
+
+
+
+
+
+
+
+
+
+
+
+
+
+
deleteForm(projectId, xform)

Delete a form from an ODK Central server.

Parameters:
    projectId (int): The ID of the project on ODK Central (required)
    xform (str): The XForm to get the details of from ODK Central (required)

Returns:
    (bool): Whether the form was deleted

Source code in osm_fieldwork/OdkCentral.py

def deleteForm(
    self,
    projectId: int,
    xform: str,
):
    """Delete a form from an ODK Central server.

    Args:
        projectId (int): The ID of the project on ODK Central
        xform (str): The XForm to get the details of from ODK Central

    Returns:
        (bool): Whether the form was deleted
    """
    # FIXME: If your goal is to prevent it from showing up on survey clients
    # like ODK Collect, consider setting its state to closing or closed
    if self.draft:
        log.debug(f"Deleting draft form on ODK server: ({xform})")
        url = f"{self.base}projects/{projectId}/forms/{xform}/draft"
    else:
        log.debug(f"Deleting form on ODK server: ({xform})")
        url = f"{self.base}projects/{projectId}/forms/{xform}"

    result = self.session.delete(url, verify=self.verify)
    if not result.ok:
        try:
            # Log response to terminal
            json_data = result.json()
            log.warning(json_data)
        except json.decoder.JSONDecodeError:
            log.error("Could not parse response json during form deletion. " f"status_code={result.status_code}")
        return False

    self.draft = False
    self.published = False

    return True
+
+
+
+
+
+
+
+
+
+
+
+
+
+
publishForm(projectId, xform)

Publish a draft form. When creating a form that isn't a draft, it can be published at creation time instead.

Parameters:
    projectId (int): The ID of the project on ODK Central (required)
    xform (str): The XForm to get the details of from ODK Central (required)

Returns:
    (int): The status code from ODK Central

Source code in osm_fieldwork/OdkCentral.py

def publishForm(
    self,
    projectId: int,
    xform: str,
) -> int:
    """Publish a draft form. When creating a form that isn't a draft, it can be published at creation time instead.

    Args:
        projectId (int): The ID of the project on ODK Central
        xform (str): The XForm to get the details of from ODK Central

    Returns:
        (int): The status code from ODK Central
    """
    version = datetime.now().strftime("%Y-%m-%dT%H:%M:%S.%f")

    url = f"{self.base}projects/{projectId}/forms/{xform}/draft/publish?version={version}"
    result = self.session.post(url, verify=self.verify)
    if result.status_code != 200:
        status = result.json()
        log.error(f"Couldn't publish {xform} on Central: {status['message']}")
    else:
        log.info(f"Published {xform} on Central.")

    self.draft = False
    self.published = True

    return result.status_code
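The version query parameter is just a timestamp, so repeated publishes always get a fresh version string. The format in isolation (the helper and its injected `now` argument are ours, added so the output is deterministic):

```python
from datetime import datetime


def publish_version(now: datetime) -> str:
    """Render a publish version string in the same ISO-like format publishForm uses."""
    return now.strftime("%Y-%m-%dT%H:%M:%S.%f")
```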
+
+
+
+
+
+
+
+
+
+
+
+
+
+
formFields(projectId, xform)

Retrieve the form fields for an XForm from ODK Central.

Parameters:
    projectId (int): The ID of the project on ODK Central (required)
    xform (str): The XForm to get the details of from ODK Central (required)

Returns:
    dict: A json object containing the form fields.

Source code in osm_fieldwork/OdkCentral.py

def formFields(self, projectId: int, xform: str):
    """Retrieve the form fields for an XForm from ODK Central.

    Args:
        projectId (int): The ID of the project on ODK Central
        xform (str): The XForm to get the details of from ODK Central

    Returns:
        dict: A json object containing the form fields.
    """
    url = f"{self.base}projects/{projectId}/forms/{xform}/fields?odata=true"
    response = self.session.get(url, verify=self.verify)

    # TODO wrap this logic and put in every method requiring form name
    if response.status_code != 200:
        if response.status_code == 404:
            msg = f"The ODK form you referenced does not exist yet: {xform}"
            log.debug(msg)
            raise requests.exceptions.HTTPError(msg)
        log.debug(f"Failed to retrieve form fields. Status code: {response.status_code}")
        response.raise_for_status()

    return response.json()
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
dump()

Dump internal data structures, for debugging purposes only.

Source code in osm_fieldwork/OdkCentral.py

def dump(self):
    """Dump internal data structures, for debugging purposes only."""
    # super().dump()
    entries = len(self.media.keys())
    print("Form has %d attachments" % entries)
    for filename, content in self.media.items():
        print("Filename: %s" % filename)
        print("Content length: %s" % len(content))
        print("")
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
OdkAppUser

Bases: OdkCentral

A class for app user data.

Parameters:
    url (str): The URL of the ODK Central server (default None)
    user (str): The user's account name on ODK Central (default None)
    passwd (str): The user's account password on ODK Central (default None)

Returns:
    (OdkAppUser): An instance of this object

Source code in osm_fieldwork/OdkCentral.py

def __init__(
    self,
    url: Optional[str] = None,
    user: Optional[str] = None,
    passwd: Optional[str] = None,
):
    """A class for app user data.

    Args:
        url (str): The URL of the ODK Central
        user (str): The user's account name on ODK Central
        passwd (str): The user's account password on ODK Central

    Returns:
        (OdkAppUser): An instance of this object
    """
    super().__init__(url, user, passwd)
    self.user = None
    self.qrcode = None
    self.id = None
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
create(projectId, name)

Create a new app-user for a form.

Example response:

    {
        "createdAt": "2018-04-18T23:19:14.802Z",
        "displayName": "My Display Name",
        "id": 115,
        "type": "user",
        "updatedAt": "2018-04-18T23:42:11.406Z",
        "deletedAt": "2018-04-18T23:42:11.406Z",
        "token": "d1!E2GVHgpr4h9bpxxtqUJ7EVJ1Q$Dusm2RBXg8XyVJMCBCbvyE8cGacxUx3bcUT",
        "projectId": 1
    }

Parameters:
    projectId (int): The ID of the project on ODK Central (required)
    name (str): The name of the XForm (required)

Returns:
    (dict): The response JSON from ODK Central

Source code in osm_fieldwork/OdkCentral.py

def create(
    self,
    projectId: int,
    name: str,
):
    """Create a new app-user for a form.

    Example response:

    {
        "createdAt": "2018-04-18T23:19:14.802Z",
        "displayName": "My Display Name",
        "id": 115,
        "type": "user",
        "updatedAt": "2018-04-18T23:42:11.406Z",
        "deletedAt": "2018-04-18T23:42:11.406Z",
        "token": "d1!E2GVHgpr4h9bpxxtqUJ7EVJ1Q$Dusm2RBXg8XyVJMCBCbvyE8cGacxUx3bcUT",
        "projectId": 1
    }

    Args:
        projectId (int): The ID of the project on ODK Central
        name (str): The name of the XForm

    Returns:
        (dict): The response JSON from ODK Central
    """
    url = f"{self.base}projects/{projectId}/app-users"
    response = self.session.post(url, json={"displayName": name}, verify=self.verify)
    self.user = name
    if response.ok:
        return response.json()
    return {}
+
+
+
+
+
+
+
+
+
+
+
+
+
delete(projectId, userId)

Delete an app-user.

Parameters:
    projectId (int): The ID of the project on ODK Central (required)
    userId (int): The ID of the user on ODK Central to delete (required)

Returns:
    (bool): Whether the user got deleted or not

Source code in osm_fieldwork/OdkCentral.py

def delete(
    self,
    projectId: int,
    userId: int,
):
    """Delete an app-user.

    Args:
        projectId (int): The ID of the project on ODK Central
        userId (int): The ID of the user on ODK Central to delete

    Returns:
        (bool): Whether the user got deleted or not
    """
    url = f"{self.base}projects/{projectId}/app-users/{userId}"
    result = self.session.delete(url, verify=self.verify)
    return result
+
+
+
+
+
+
+
+
+
+
+
+
+
updateRole(projectId, xform, roleId=2, actorId=None)

Update the role of an app user for a form.

Parameters:
    projectId (int): The ID of the project on ODK Central (required)
    xform (str): The XForm to get the details of from ODK Central (required)
    roleId (int): The role for the user (default 2)
    actorId (int): The ID of the user (default None)

Returns:
    (bool): Whether it was updated or not

Source code in osm_fieldwork/OdkCentral.py

def updateRole(
    self,
    projectId: int,
    xform: str,
    roleId: int = 2,
    actorId: Optional[int] = None,
):
    """Update the role of an app user for a form.

    Args:
        projectId (int): The ID of the project on ODK Central
        xform (str): The XForm to get the details of from ODK Central
        roleId (int): The role for the user
        actorId (int): The ID of the user

    Returns:
        (bool): Whether it was updated or not
    """
    log.info("Update access to XForm %s for %s" % (xform, actorId))
    url = f"{self.base}projects/{projectId}/forms/{xform}/assignments/{roleId}/{actorId}"
    result = self.session.post(url, verify=self.verify)
    return result
+
+
+
+
+
+
+
+
+
+
+
+
+
grantAccess(projectId, roleId=2, userId=None, xform=None, actorId=None)

Grant access to an app user for a form.

Parameters:
    projectId (int): The ID of the project on ODK Central (required)
    roleId (int): The role ID (default 2)
    userId (int): The user ID of the user on ODK Central (default None)
    xform (str): The XForm to get the details of from ODK Central (default None)
    actorId (int): The actor ID of the user on ODK Central (default None)

Returns:
    (bool): Whether access was granted or not

Source code in osm_fieldwork/OdkCentral.py

def grantAccess(self, projectId: int, roleId: int = 2, userId: int = None, xform: str = None, actorId: int = None):
    """Grant access to an app user for a form.

    Args:
        projectId (int): The ID of the project on ODK Central
        roleId (int): The role ID
        userId (int): The user ID of the user on ODK Central
        xform (str): The XForm to get the details of from ODK Central
        actorId (int): The actor ID of the user on ODK Central

    Returns:
        (bool): Whether access was granted or not
    """
    url = f"{self.base}projects/{projectId}/forms/{xform}/assignments/{roleId}/{actorId}"
    result = self.session.post(url, verify=self.verify)
    return result
+
+
+
+
+
+
+
+
+
+
+
+
+
createQRCode(odk_id, project_name, appuser_token, basemap='osm', osm_username='svchotosm', upstream_task_id='', save_qrcode=False)

Get the QR Code for an app-user.

Notes on QR code params:

- form_update_mode: 'manual' allows for easier offline mapping, while
  if set to 'match_exactly', it will attempt sync with Central

- metadata_email: we 'misuse' this field to add additional metadata,
  in this case a task id from an upstream application (FMTM).

Parameters:
    odk_id (int): The ID of the project on ODK Central (required)
    project_name (str): The name of the project to set (required)
    appuser_token (str): The user's token (required)
    basemap (str): Default basemap to use on Collect.
        Options: "google", "mapbox", "osm", "usgs", "stamen", "carto". (default 'osm')
    osm_username (str): The OSM username to attribute to the mapping. (default 'svchotosm')
    save_qrcode (bool): Save the generated QR code to disk. (default False)

Returns:
    segno.QRCode: The new QR code object

Source code in osm_fieldwork/OdkCentral.py

def createQRCode(
    self,
    odk_id: int,
    project_name: str,
    appuser_token: str,
    basemap: str = "osm",
    osm_username: str = "svchotosm",
    upstream_task_id: str = "",
    save_qrcode: bool = False,
) -> segno.QRCode:
    """Get the QR Code for an app-user.

    Notes on QR code params:

    - form_update_mode: 'manual' allows for easier offline mapping, while
      if set to 'match_exactly', it will attempt sync with Central

    - metadata_email: we 'misuse' this field to add additional metadata,
      in this case a task id from an upstream application (FMTM).

    Args:
        odk_id (int): The ID of the project on ODK Central
        project_name (str): The name of the project to set
        appuser_token (str): The user's token
        basemap (str): Default basemap to use on Collect.
            Options: "google", "mapbox", "osm", "usgs", "stamen", "carto".
        osm_username (str): The OSM username to attribute to the mapping.
        save_qrcode (bool): Save the generated QR code to disk.

    Returns:
        segno.QRCode: The new QR code object
    """
    log.info(f"Generating QR Code for project ({odk_id}) {project_name}")

    self.settings = {
        "general": {
            "server_url": f"{self.base}key/{appuser_token}/projects/{odk_id}",
            "form_update_mode": "manual",
            "basemap_source": basemap,
            "autosend": "wifi_and_cellular",
            "metadata_username": osm_username,
            "metadata_email": upstream_task_id,
        },
        "project": {"name": f"{project_name}"},
        "admin": {},
    }

    # Base64 encode JSON params for QR code
    qr_data = b64encode(zlib.compress(json.dumps(self.settings).encode("utf-8")))
    # Generate QR code
    self.qrcode = segno.make(qr_data, micro=False)

    if save_qrcode:
        log.debug(f"Saving QR code to {project_name}.png")
        self.qrcode.save(f"{project_name}.png", scale=5)

    return self.qrcode
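The QR payload is just zlib-compressed, base64-encoded JSON. A standalone sketch of the encoding and its inverse, e.g. to inspect what a scanned code configures (segno rendering omitted; helper names and the example server URL are ours):

```python
import json
import zlib
from base64 import b64decode, b64encode


def encode_collect_settings(settings: dict) -> bytes:
    """Compress and base64-encode ODK Collect settings, as embedded in the QR code."""
    return b64encode(zlib.compress(json.dumps(settings).encode("utf-8")))


def decode_collect_settings(payload: bytes) -> dict:
    """Reverse the encoding to recover the settings dict."""
    return json.loads(zlib.decompress(b64decode(payload)).decode("utf-8"))
```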
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
OdkDataset

Bases: OdkCentral

Class to manipulate an Entity on an ODK Central server.

Parameters:
    url (str): The URL of the ODK Central
    user (str): The user's account name on ODK Central
    passwd (str): The user's account password on ODK Central.

Returns:
    (OdkDataset): An instance of this object.

Source code in osm_fieldwork/OdkCentral.py

def __init__(
    self,
    url: Optional[str] = None,
    user: Optional[str] = None,
    passwd: Optional[str] = None,
):
    """Args:
        url (str): The URL of the ODK Central
        user (str): The user's account name on ODK Central
        passwd (str): The user's account password on ODK Central.

    Returns:
        (OdkDataset): An instance of this object.
    """
    super().__init__(url, user, passwd)
    self.name = None
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
listDatasets(projectId)

Get all Entity datasets (entity lists) for a project.

JSON response:

    [
        {
            "name": "people",
            "createdAt": "2018-01-19T23:58:03.395Z",
            "projectId": 1,
            "approvalRequired": true
        }
    ]

Parameters:
    projectId (int): The ID of the project on ODK Central. (required)

Returns:
    list: a list of JSON dataset metadata.

Source code in osm_fieldwork/OdkCentral.py

def listDatasets(
    self,
    projectId: int,
):
    """Get all Entity datasets (entity lists) for a project.

    JSON response:
    [
        {
            "name": "people",
            "createdAt": "2018-01-19T23:58:03.395Z",
            "projectId": 1,
            "approvalRequired": true
        }
    ]

    Args:
        projectId (int): The ID of the project on ODK Central.

    Returns:
        list: a list of JSON dataset metadata.
    """
    url = f"{self.base}projects/{projectId}/datasets/"
    result = self.session.get(url, verify=self.verify)
    return result.json()
+
+
+
+
+
+
+
+
+
+
+
+
+
listEntities(projectId, datasetName)

Get all Entities for a project dataset (entity list).

JSON format:

    [
        {
            "uuid": "uuid:85cb9aff-005e-4edd-9739-dc9c1a829c44",
            "createdAt": "2018-01-19T23:58:03.395Z",
            "updatedAt": "2018-03-21T12:45:02.312Z",
            "deletedAt": "2018-03-21T12:45:02.312Z",
            "creatorId": 1,
            "currentVersion": {
                "label": "John (88)",
                "current": true,
                "createdAt": "2018-03-21T12:45:02.312Z",
                "creatorId": 1,
                "userAgent": "Enketo/3.0.4",
                "version": 1,
                "baseVersion": null,
                "conflictingProperties": null
            }
        }
    ]

Parameters:
    projectId (int): The ID of the project on ODK Central. (required)
    datasetName (str): The name of a dataset, specific to a project. (required)

Returns:
    list: a list of JSON entity metadata, for a dataset.

Source code in osm_fieldwork/OdkCentral.py

def listEntities(
    self,
    projectId: int,
    datasetName: str,
):
    """Get all Entities for a project dataset (entity list).

    JSON format:
    [
        {
            "uuid": "uuid:85cb9aff-005e-4edd-9739-dc9c1a829c44",
            "createdAt": "2018-01-19T23:58:03.395Z",
            "updatedAt": "2018-03-21T12:45:02.312Z",
            "deletedAt": "2018-03-21T12:45:02.312Z",
            "creatorId": 1,
            "currentVersion": {
                "label": "John (88)",
                "current": true,
                "createdAt": "2018-03-21T12:45:02.312Z",
                "creatorId": 1,
                "userAgent": "Enketo/3.0.4",
                "version": 1,
                "baseVersion": null,
                "conflictingProperties": null
            }
        }
    ]

    Args:
        projectId (int): The ID of the project on ODK Central.
        datasetName (str): The name of a dataset, specific to a project.

    Returns:
        list: a list of JSON entity metadata, for a dataset.
    """
    url = f"{self.base}projects/{projectId}/datasets/{datasetName}/entities"
    response = self.session.get(url, verify=self.verify)
    return response.json()
+
+
+
+
+
+
+
+
+
+
+
+
+
createEntity(projectId, datasetName, label, data)

Create a new Entity in a project dataset (entity list).

JSON request:

    {
        "uuid": "54a405a0-53ce-4748-9788-d23a30cc3afa",
        "label": "John Doe (88)",
        "data": {
            "firstName": "John",
            "age": "88"
        }
    }

Parameters:
    projectId (int): The ID of the project on ODK Central. (required)
    datasetName (str): The name of a dataset, specific to a project. (required)
    label (str): Label for the Entity. (required)
    data (dict): Key:Value pairs to insert as Entity data. (required)

Returns:
    dict: JSON of entity details.
        The 'uuid' field includes the unique entity identifier.

Source code in osm_fieldwork/OdkCentral.py

def createEntity(
    self,
    projectId: int,
    datasetName: str,
    label: str,
    data: dict,
) -> dict:
    """Create a new Entity in a project dataset (entity list).

    JSON request:
    {
        "uuid": "54a405a0-53ce-4748-9788-d23a30cc3afa",
        "label": "John Doe (88)",
        "data": {
            "firstName": "John",
            "age": "88"
        }
    }

    Args:
        projectId (int): The ID of the project on ODK Central.
        datasetName (str): The name of a dataset, specific to a project.
        label (str): Label for the Entity.
        data (dict): Key:Value pairs to insert as Entity data.

    Returns:
        dict: JSON of entity details.
            The 'uuid' field includes the unique entity identifier.
    """
    # The CSV must contain a geometry field to work
    # TODO also add this validation to uploadMedia if CSV format
    required_fields = ["geometry"]
    if not all(key in data for key in required_fields):
        msg = "'geometry' data field is mandatory"
        log.debug(msg)
        raise ValueError(msg)

    url = f"{self.base}projects/{projectId}/datasets/{datasetName}/entities"
    response = self.session.post(
        url,
        verify=self.verify,
        json={
            "uuid": str(uuid4()),
            "label": label,
            "data": data,
        },
    )
    if not response.ok:
        if response.status_code == 404:
            msg = f"Does not exist: project ({projectId}) dataset ({datasetName})"
            log.debug(msg)
            raise requests.exceptions.HTTPError(msg)
        if response.status_code == 400:
            msg = response.json().get("message")
            log.debug(msg)
            raise requests.exceptions.HTTPError(msg)
        log.debug(f"Failed to create Entity. Status code: {response.status_code}")
        response.raise_for_status()
    return response.json()
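createEntity validates the mandatory geometry field locally before assembling the JSON body it POSTs to Central. A sketch of just that validation and payload assembly (the build_entity_payload helper is ours, not part of the library):

```python
from uuid import uuid4

# Fields the client checks for before contacting the server
REQUIRED_FIELDS = ["geometry"]


def build_entity_payload(label: str, data: dict) -> dict:
    """Validate mandatory fields, then assemble the Entity creation body."""
    if not all(key in data for key in REQUIRED_FIELDS):
        raise ValueError("'geometry' data field is mandatory")
    return {"uuid": str(uuid4()), "label": label, "data": data}
```

Failing fast here saves a round trip for a request Central would reject anyway.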
+
+
+
+
+
+
+
+
+
+
+
+
+
+ updateEntity
+
+
+
+
updateEntity ( projectId , datasetName , entityUuid , label = None , data = None , newVersion = None )
+
+
+
+
+
Update an existing Entity in a project dataset (entity list).
+
The JSON request format is the same as creating, minus the 'uuid' field.
+The PATCH will only update the fields specified, leaving the
+ remainder unchanged.
+
If no 'newVersion' param is provided, the entity will be force updated
+ in place.
+If 'newVersion' is provided, this must be a single integer increment
+ from the current version.
+
+
+
+
+Parameters:
+
+  Name         Type   Description                                              Default
+  projectId    int    The ID of the project on ODK Central.                    required
+  datasetName  str    The name of a dataset, specific to a project.            required
+  entityUuid   str    Unique identifier of the entity.                         required
+  label        str    Label for the Entity.                                    None
+  data         dict   Key:Value pairs to insert as Entity data.                None
+  newVersion   int    Integer version to increment to (current version + 1).   None
+
+Returns:
+
+  Type   Description
+  dict   JSON of entity details. The 'uuid' field includes the unique entity identifier.
+
+
+ Source code in osm_fieldwork/OdkCentral.py
+def updateEntity (
+ self ,
+ projectId : int ,
+ datasetName : str ,
+ entityUuid : str ,
+ label : Optional [ str ] = None ,
+ data : Optional [ dict ] = None ,
+ newVersion : Optional [ int ] = None ,
+):
+ """Update an existing Entity in a project dataset (entity list).
+
+ The JSON request format is the same as creating, minus the 'uuid' field.
+ The PATCH will only update the fields specified, leaving the
+ remainder unchanged.
+
+ If no 'newVersion' param is provided, the entity will be force updated
+ in place.
+ If 'newVersion' is provided, this must be a single integer increment
+ from the current version.
+
+ Args:
+ projectId (int): The ID of the project on ODK Central.
+ datasetName (str): The name of a dataset, specific to a project.
+ entityUuid (str): Unique identifier of the entity.
+ label (str): Label for the Entity.
+ data (dict): Key:Value pairs to insert as Entity data.
+ newVersion (int): Integer version to increment to (current version + 1).
+
+ Returns:
+ dict: JSON of entity details.
+ The 'uuid' field includes the unique entity identifier.
+ """
+ if not label and not data :
+ msg = "One of either the 'label' or 'data' fields must be passed"
+ log . debug ( msg )
+ raise requests . exceptions . HTTPError ( msg )
+
+ json_data = {}
+ if data :
+ json_data [ "data" ] = data
+ if label :
+ json_data [ "label" ] = label
+
+ url = f " { self . base } projects/ { projectId } /datasets/ { datasetName } /entities/ { entityUuid } "
+ if newVersion :
+ url = f " { url } ?baseVersion= { newVersion - 1 } "
+ else :
+ url = f " { url } ?force=true"
+
+ log . debug ( f "Calling { url } with params { json_data } " )
+ response = self . session . patch (
+ url ,
+ verify = self . verify ,
+ json = json_data ,
+ )
+ if not response . ok :
+ if response . status_code == 404 :
+ msg = f "Does not exist: project ( { projectId } ) dataset ( { datasetName } )"
+ log . debug ( msg )
+ raise requests . exceptions . HTTPError ( msg )
+ if response . status_code == 400 :
+ msg = response . json () . get ( "message" )
+ log . debug ( msg )
+ raise requests . exceptions . HTTPError ( msg )
+ if response . status_code == 409 :
+ msg = response . json () . get ( "message" )
+ log . debug ( msg )
+ raise requests . exceptions . HTTPError ( msg )
+ log . debug ( f "Failed to update Entity. Status code: { response . status_code } " )
+ response . raise_for_status ()
+ return response . json ()
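The query-string choice above (force update versus optimistic versioning) can be sketched as a standalone helper (hypothetical name `update_entity_url`, assumed base URL):

```python
def update_entity_url(base, project_id, dataset, entity_uuid, new_version=None):
    # Hypothetical helper mirroring updateEntity's query-string choice:
    # with newVersion, Central checks against baseVersion (= newVersion - 1);
    # without it, ?force=true overwrites the entity in place.
    url = f"{base}projects/{project_id}/datasets/{dataset}/entities/{entity_uuid}"
    if new_version:
        return f"{url}?baseVersion={new_version - 1}"
    return f"{url}?force=true"

forced = update_entity_url("https://central.example.org/v1/", 1, "features", "abc-123")
versioned = update_entity_url("https://central.example.org/v1/", 1, "features", "abc-123", new_version=5)
```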
+
+
+ deleteEntity
+
+
+
+
deleteEntity ( projectId , datasetName , entityUuid )
+
+
+
+
+
Delete an Entity in a project dataset (entity list).
+
Only performs a soft deletion, so the Entity is actually archived.
+
+
+
+
+Parameters:
+
+  Name         Type   Description                                     Default
+  projectId    int    The ID of the project on ODK Central.           required
+  datasetName  str    The name of a dataset, specific to a project.   required
+  entityUuid   str    Unique identifier of the entity.                required
+
+Returns:
+
+  Type   Description
+  bool   Deletion successful or not.
+
+
+ Source code in osm_fieldwork/OdkCentral.py
+def deleteEntity (
+ self ,
+ projectId : int ,
+ datasetName : str ,
+ entityUuid : str ,
+):
+ """Delete an Entity in a project dataset (entity list).
+
+ Only performs a soft deletion, so the Entity is actually archived.
+
+ Args:
+ projectId (int): The ID of the project on ODK Central.
+ datasetName (str): The name of a dataset, specific to a project.
+ entityUuid (str): Unique identifier of the entity.
+
+ Returns:
+ bool: Deletion successful or not.
+ """
+ url = f " { self . base } projects/ { projectId } /datasets/ { datasetName } /entities/ { entityUuid } "
+ log . debug ( f "Deleting dataset ( { datasetName } ) entity UUID ( { entityUuid } )" )
+ response = self . session . delete ( url , verify = self . verify )
+
+ if not response . ok :
+ if response . status_code == 404 :
+ msg = f "Does not exist: project ( { projectId } ) dataset ( { datasetName } ) " f "entity ( { entityUuid } )"
+ log . debug ( msg )
+ raise requests . exceptions . HTTPError ( msg )
+ log . debug ( f "Failed to delete Entity. Status code: { response . status_code } " )
+ response . raise_for_status ()
+
+ success = ( response_msg := response . json ()) . get ( "success" , False )
+
+ if not success :
+ log . debug ( f "Server returned deletion unsuccessful: { response_msg } " )
+
+ return success
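The walrus-based success check above reduces to a tiny helper (hypothetical name `deletion_succeeded`), assuming Central replies with a `{"success": true}` body as the method expects:

```python
def deletion_succeeded(response_json):
    # Mirrors how deleteEntity reads the reply: Central returns
    # {"success": true} for a (soft) delete; anything else is a failure.
    return response_json.get("success", False)

ok = deletion_succeeded({"success": True})
failed = deletion_succeeded({})
```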
+
+ getEntityData
+
+
+
+
getEntityData ( projectId , datasetName )
+
+
+
+
+
Get a lightweight JSON of the entity data fields in a dataset.
+
Example response JSON:
+[
+{
+ "0": {
+ "__id": "523699d0-66ec-4cfc-a76b-4617c01c6b92",
+ "label": "the_label_you_defined",
+ "__system": {
+ "createdAt": "2024-03-24T06:30:31.219Z",
+ "creatorId": "7",
+ "creatorName": "fmtm@hotosm.org",
+ "updates": 4,
+ "updatedAt": "2024-03-24T07:12:55.871Z",
+ "version": 5,
+ "conflict": null
+ },
+ "geometry": "javarosa format geometry",
+ "user_defined_field1": "text",
+ "user_defined_field2": "text",
+ "user_defined_field3": "test"
+ }
+}
+]
+
+
+
+
+Parameters:
+
+  Name         Type   Description                                     Default
+  projectId    int    The ID of the project on ODK Central.           required
+  datasetName  str    The name of a dataset, specific to a project.   required
+
+Returns:
+
+  Type   Description
+  list   All entity data for a project dataset.
+
+
+ Source code in osm_fieldwork/OdkCentral.py
+def getEntityData (
+ self ,
+ projectId : int ,
+ datasetName : str ,
+):
+ """Get a lightweight JSON of the entity data fields in a dataset.
+
+ Example response JSON:
+ [
+ {
+ "0": {
+ "__id": "523699d0-66ec-4cfc-a76b-4617c01c6b92",
+ "label": "the_label_you_defined",
+ "__system": {
+ "createdAt": "2024-03-24T06:30:31.219Z",
+ "creatorId": "7",
+ "creatorName": "fmtm@hotosm.org",
+ "updates": 4,
+ "updatedAt": "2024-03-24T07:12:55.871Z",
+ "version": 5,
+ "conflict": null
+ },
+ "geometry": "javarosa format geometry",
+ "user_defined_field1": "text",
+ "user_defined_field2": "text",
+ "user_defined_field3": "test"
+ }
+ }
+ ]
+
+ Args:
+ projectId (int): The ID of the project on ODK Central.
+ datasetName (str): The name of a dataset, specific to a project.
+
+ Returns:
+ list: All entity data for a project dataset.
+ """
+ url = f " { self . base } projects/ { projectId } /datasets/ { datasetName } .svc/Entities"
+ response = self . session . get ( url , verify = self . verify )
+
+ if not response . ok :
+ if response . status_code == 404 :
+ msg = f "Does not exist: project ( { projectId } ) dataset ( { datasetName } )"
+ log . debug ( msg )
+ raise requests . exceptions . HTTPError ( msg )
+ log . debug ( f "Failed to get Entity data. Status code: { response . status_code } " )
+ response . raise_for_status ()
+ return response . json () . get ( "value" , {})
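Unwrapping the OData envelope, as the return line above does, can be shown in isolation (hypothetical helper `extract_entity_rows`; the sample envelope follows the response shape documented above):

```python
def extract_entity_rows(odata_json):
    # The .svc/Entities OData endpoint wraps rows in a "value" array;
    # getEntityData returns that array (or an empty default).
    return odata_json.get("value", [])

rows = extract_entity_rows({
    "value": [{"__id": "523699d0-66ec-4cfc-a76b-4617c01c6b92", "label": "the_label_you_defined"}],
    "@odata.count": 1,
})
```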
+
+
+
+
+
+ Last update:
+ October 18, 2024
+
\ No newline at end of file
diff --git a/api/OdkCentralAsync/index.html b/api/OdkCentralAsync/index.html
new file mode 100644
index 000000000..e1456ed0b
--- /dev/null
+++ b/api/OdkCentralAsync/index.html
@@ -0,0 +1,5130 @@
+
+ ODK Central (Async) - osm-fieldwork
+
+OdkCentral
+
+
+
+
+
+
+
+
+
+ Bases: object
+
+
+
Helper methods for ODK Central API.
+
+
+
+
+
+Parameters:
+
+  Name     Type   Description                                  Default
+  url      str    The URL of the ODK Central                   None
+  user     str    The user's account name on ODK Central       None
+  passwd   str    The user's account password on ODK Central   None
+  session  str    Pass in an existing session for reuse.       required
+
+Returns:
+
+  Type         Description
+  OdkCentral   An instance of this class
+
+
+
+ Source code in osm_fieldwork/OdkCentralAsync.py
+def __init__ (
+ self ,
+ url : Optional [ str ] = None ,
+ user : Optional [ str ] = None ,
+ passwd : Optional [ str ] = None ,
+):
+ """A Class for accessing an ODK Central server via its REST API.
+
+ Args:
+ url (str): The URL of the ODK Central
+ user (str): The user's account name on ODK Central
+ passwd (str): The user's account password on ODK Central
+ session (str): Pass in an existing session for reuse.
+
+ Returns:
+ (OdkCentral): An instance of this class
+ """
+ if not url :
+ url = os . getenv ( "ODK_CENTRAL_URL" , default = None )
+ self . url = url
+ if not user :
+ user = os . getenv ( "ODK_CENTRAL_USER" , default = None )
+ self . user = user
+ if not passwd :
+ passwd = os . getenv ( "ODK_CENTRAL_PASSWD" , default = None )
+ self . passwd = passwd
+ verify = os . getenv ( "ODK_CENTRAL_SECURE" , default = True )
+ if type ( verify ) == str :
+ self . verify = verify . lower () in ( "true" , "1" , "t" )
+ else :
+ self . verify = verify
+
+ # Base URL for the REST API
+ self . version = "v1"
+ self . base = f " { self . url } / { self . version } /"
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+ authenticate
+
+
+
+ async
+
+
+
+
+
+
+
+
Authenticate to an ODK Central server.
+
+
+ Source code in osm_fieldwork/OdkCentralAsync.py
+async def authenticate ( self ):
+ """Authenticate to an ODK Central server."""
+ try :
+ async with self . session . post ( f " { self . base } sessions" , json = { "email" : self . user , "password" : self . passwd }) as response :
+ token = ( await response . json ())[ "token" ]
+ self . session . headers . update ({ "Authorization" : f "Bearer { token } " })
+ except aiohttp . ClientConnectorError as request_error :
+ await self . session . close ()
+ raise ConnectionError ( "Failed to connect to Central. Is the URL valid?" ) from request_error
+ except aiohttp . ClientResponseError as response_error :
+ await self . session . close ()
+ if response_error . status == 401 :
+ raise ConnectionError ( "ODK credentials are invalid, or may have changed. Please update them." ) from response_error
+ raise response_error
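Two pieces of the flow above can be shown as pure functions (hypothetical names `bearer_headers` and `auth_error`): the Authorization header installed on the session after POSTing to `/v1/sessions`, and the friendlier error raised on HTTP 401:

```python
def bearer_headers(token):
    # The header authenticate() installs on the session once a token
    # comes back from POST /v1/sessions.
    return {"Authorization": f"Bearer {token}"}

def auth_error(status):
    # Mirrors the status handling above: 401 becomes a clearer ConnectionError.
    if status == 401:
        return ConnectionError("ODK credentials are invalid, or may have changed. Please update them.")
    return None

headers = bearer_headers("abc123")
```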
+
+
+
+
+
+
+
+
+
+
+
+ Bases: OdkCentral
+
+
+
Class to manipulate a project on an ODK Central server.
+
+
user (str): The user's account name on ODK Central
+passwd (str): The user's account password on ODK Central.
+
+
+
+
+
+Returns:
+
+  Type         Description
+  OdkProject   An instance of this object
+
+
+
+ Source code in osm_fieldwork/OdkCentralAsync.py
+def __init__ (
+ self ,
+ url : Optional [ str ] = None ,
+ user : Optional [ str ] = None ,
+ passwd : Optional [ str ] = None ,
+):
+ """Args:
+ url (str): The URL of the ODK Central
+ user (str): The user's account name on ODK Central
+ passwd (str): The user's account password on ODK Central.
+
+ Returns:
+ (OdkProject): An instance of this object
+ """
+ super () . __init__ ( url , user , passwd )
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
listForms ( projectId , metadata = False )
+
+
+
+
+
Fetch a list of forms in a project on an ODK Central server.
+
+
+
+
+Parameters:
+
+  Name       Type   Description                            Default
+  projectId  int    The ID of the project on ODK Central   required
+
+Returns:
+
+  Type   Description
+  list   The list of XForms in this project
+
+
+ Source code in osm_fieldwork/OdkCentralAsync.py
+async def listForms ( self , projectId : int , metadata : bool = False ):
+ """Fetch a list of forms in a project on an ODK Central server.
+
+ Args:
+ projectId (int): The ID of the project on ODK Central
+
+ Returns:
+ (list): The list of XForms in this project
+ """
+ url = f " { self . base } projects/ { projectId } /forms"
+ headers = {}
+ if metadata :
+ headers . update ({ "X-Extended-Metadata" : "true" })
+ try :
+ async with self . session . get ( url , ssl = self . verify , headers = headers ) as response :
+ self . forms = await response . json ()
+ return self . forms
+ except aiohttp . ClientError as e :
+ msg = f "Error fetching forms: { e } "
+ log . error ( msg )
+ raise aiohttp . ClientError ( msg ) from e
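The URL and header assembly above reduces to a small pure function (hypothetical name `list_forms_request`, assumed base URL), showing how the `metadata` flag toggles Central's extended listing:

```python
def list_forms_request(base, project_id, metadata=False):
    # Mirrors listForms: the X-Extended-Metadata header switches Central
    # to its extended (metadata-rich) form listing.
    url = f"{base}projects/{project_id}/forms"
    headers = {"X-Extended-Metadata": "true"} if metadata else {}
    return url, headers

url, headers = list_forms_request("https://central.example.org/v1/", 7, metadata=True)
```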
+
+
+
+
+
+
+
+
+
+
+
+
+
+ listSubmissions
+
+
+
+ async
+
+
+
+
listSubmissions ( projectId , xform , filters = None )
+
+
+
+
+
Fetch a list of submission instances for a given form.
+
Returns data in format:
+
{
+ "value":[],
+ "@odata.context": "URL/v1/projects/52/forms/103.svc/$metadata#Submissions",
+ "@odata.count":0
+}
+
+
+
+
+Parameters:
+
+  Name       Type   Description                                        Default
+  projectId  int    The ID of the project on ODK Central               required
+  xform      str    The XForm to get the details of from ODK Central   required
+
+Returns:
+
+  Type   Description
+  json   The JSON of Submissions.
+
+
+ Source code in osm_fieldwork/OdkCentralAsync.py
+async def listSubmissions ( self , projectId : int , xform : str , filters : dict = None ):
+ """Fetch a list of submission instances for a given form.
+
+ Returns data in format:
+
+ {
+ "value":[],
+ "@odata.context": "URL/v1/projects/52/forms/103.svc/$metadata#Submissions",
+ "@odata.count":0
+ }
+
+ Args:
+ projectId (int): The ID of the project on ODK Central
+ xform (str): The XForm to get the details of from ODK Central
+
+ Returns:
+ (json): The JSON of Submissions.
+ """
+ url = f " { self . base } projects/ { projectId } /forms/ { xform } .svc/Submissions"
+ try :
+ async with self . session . get ( url , params = filters , ssl = self . verify ) as response :
+ return await response . json ()
+ except aiohttp . ClientError as e :
+ msg = f "Error fetching submissions: { e } "
+ log . error ( msg )
+ raise aiohttp . ClientError ( msg ) from e
+
+
+
+
+
+
+
+
+
+
+
+
+
+ getAllProjectSubmissions
+
+
+
+ async
+
+
+
+
getAllProjectSubmissions ( projectId , xforms = None , filters = None )
+
+
+
+
+
Fetch a list of submissions in a project on an ODK Central server.
+
+
+
+
+Parameters:
+
+  Name       Type   Description                                    Default
+  projectId  int    The ID of the project on ODK Central           required
+  xforms     list   The list of XForms to get the submissions of   None
+
+Returns:
+
+  Type   Description
+  json   All of the submissions for all of the XForms in a project
+
+
+ Source code in osm_fieldwork/OdkCentralAsync.py
+async def getAllProjectSubmissions ( self , projectId : int , xforms : list = None , filters : dict = None ):
+ """Fetch a list of submissions in a project on an ODK Central server.
+
+ Args:
+ projectId (int): The ID of the project on ODK Central
+ xforms (list): The list of XForms to get the submissions of
+
+ Returns:
+ (json): All of the submissions for all of the XForm in a project
+ """
+ log . info ( f "Getting all submissions for ODK project ( { projectId } ) forms ( { xforms } )" )
+ submission_data = []
+
+ submission_tasks = [ self . listSubmissions ( projectId , task , filters ) for task in xforms ]
+ submissions = await gather ( * submission_tasks , return_exceptions = True )
+
+ for submission in submissions :
+ if isinstance ( submission , Exception ):
+ log . error ( f "Failed to get submissions: { submission } " )
+ continue
+ log . debug ( f "There are { len ( submission [ 'value' ]) } submissions" )
+ submission_data . extend ( submission [ "value" ])
+
+ return submission_data
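The fan-out above can be demonstrated with stub coroutines (hypothetical `fetch` stands in for listSubmissions): `gather(..., return_exceptions=True)` lets one failing form be skipped instead of aborting the rest:

```python
import asyncio

async def fetch(form_ok: bool):
    # Stand-in for listSubmissions: returns the OData envelope, or raises.
    if not form_ok:
        raise RuntimeError("boom")
    return {"value": [{"instanceId": "uuid:1"}]}

async def gather_all():
    # Same pattern as getAllProjectSubmissions: exceptions come back as
    # values, are logged, and are skipped.
    results = await asyncio.gather(fetch(True), fetch(False), return_exceptions=True)
    data = []
    for r in results:
        if isinstance(r, Exception):
            continue
        data.extend(r["value"])
    return data

submissions = asyncio.run(gather_all())
```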
+
+
+
+
+
+
+
+
+
+
+
+ Bases: OdkCentral
+
+
+
Class to manipulate an Entity on an ODK Central server.
+
+
user (str): The user's account name on ODK Central
+passwd (str): The user's account password on ODK Central.
+
+
+
+
+
+Returns:
+
+  Type         Description
+  OdkDataset   An instance of this object.
+
+
+
+ Source code in osm_fieldwork/OdkCentralAsync.py
+def __init__ (
+ self ,
+ url : Optional [ str ] = None ,
+ user : Optional [ str ] = None ,
+ passwd : Optional [ str ] = None ,
+) -> None :
+ """Args:
+ url (str): The URL of the ODK Central
+ user (str): The user's account name on ODK Central
+ passwd (str): The user's account password on ODK Central.
+
+ Returns:
+ (OdkDataset): An instance of this object.
+ """
+ super () . __init__ ( url , user , passwd )
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+ listDatasets
+
+
+
+ async
+
+
+
+
listDatasets ( projectId )
+
+
+
+
+
Get all Entity datasets (entity lists) for a project.
+
JSON response:
+[
+ {
+ "name": "people",
+ "createdAt": "2018-01-19T23:58:03.395Z",
+ "projectId": 1,
+ "approvalRequired": true
+ }
+]
+
+
+
+
+Parameters:
+
+  Name       Type   Description                             Default
+  projectId  int    The ID of the project on ODK Central.   required
+
+Returns:
+
+  Type   Description
+  list   a list of JSON dataset metadata.
+
+
+ Source code in osm_fieldwork/OdkCentralAsync.py
+async def listDatasets (
+ self ,
+ projectId : int ,
+) -> list :
+ """Get all Entity datasets (entity lists) for a project.
+
+ JSON response:
+ [
+ {
+ "name": "people",
+ "createdAt": "2018-01-19T23:58:03.395Z",
+ "projectId": 1,
+ "approvalRequired": true
+ }
+ ]
+
+ Args:
+ projectId (int): The ID of the project on ODK Central.
+
+ Returns:
+ list: a list of JSON dataset metadata.
+ """
+ url = f " { self . base } projects/ { projectId } /datasets/"
+ try :
+ async with self . session . get ( url , ssl = self . verify ) as response :
+ return await response . json ()
+ except aiohttp . ClientError as e :
+ msg = f "Error fetching datasets: { e } "
+ log . error ( msg )
+ raise aiohttp . ClientError ( msg ) from e
+
+
+
+
+
+
+
+
+
+
+
+
+
+ createDataset
+
+
+
+ async
+
+
+
+
createDataset ( projectId , datasetName = 'features' , properties = [])
+
+
+
+
+
Creates a dataset for a given project.
+
+
+
+
+Parameters:
+
+  Name         Type        Description                                         Default
+  projectId    int         The ID of the project to create the dataset for.    required
+  datasetName  str         The name of the dataset to be created.              'features'
+  properties   list[str]   List of property names to create. Alternatively
+                           call createDatasetProperty for each property
+                           manually.                                           []
+
+Returns:
+
+  Type   Description
+  dict   The JSON response containing information about the created dataset.
+
+Raises:
+
+  Type                  Description
+  aiohttp.ClientError   If an error occurs during the dataset creation process.
+
+ Source code in osm_fieldwork/OdkCentralAsync.py
+async def createDataset (
+ self ,
+ projectId : int ,
+ datasetName : Optional [ str ] = "features" ,
+ properties : Optional [ list [ str ]] = [],
+):
+ """Creates a dataset for a given project.
+
+ Args:
+ projectId (int): The ID of the project to create the dataset for.
+ datasetName (str): The name of the dataset to be created.
+ properties (list[str]): List of property names to create.
+ Alternatively call createDatasetProperty for each property manually.
+
+ Returns:
+ dict: The JSON response containing information about the created dataset.
+
+ Raises:
+ aiohttp.ClientError: If an error occurs during the dataset creation process.
+ """
+ # Validation of properties param
+ if properties and ( not isinstance ( properties , list ) or not isinstance ( properties [ - 1 ], str )):
+ msg = "The properties must be a list of string values to create a dataset"
+ log . error ( msg )
+ raise ValueError ( msg )
+
+ # Create the dataset
+ url = f " { self . base } projects/ { projectId } /datasets"
+ payload = { "name" : datasetName }
+ try :
+ log . info ( f "Creating dataset ( { datasetName } ) for ODK project ( { projectId } )" )
+ async with self . session . post (
+ url ,
+ ssl = self . verify ,
+ json = payload ,
+ ) as response :
+ if response . status not in ( 200 , 201 ):
+ error_message = await response . text ()
+ log . error ( f "Failed to create Dataset: { error_message } " )
+ log . info ( f "Successfully created Dataset { datasetName } " )
+ dataset = await response . json ()
+ except aiohttp . ClientError as e :
+ msg = f "Failed to create Dataset: { e } "
+ log . error ( msg )
+ raise aiohttp . ClientError ( msg ) from e
+
+ if not properties :
+ return dataset
+
+ # Add the properties, if specified
+ # FIXME this is a bit of a hack until ODK Central has better support
+ # FIXME for adding dataset properties in bulk
+ try :
+ log . debug ( f "Creating properties for dataset ( { datasetName } ): { properties } " )
+ properties_tasks = [ self . createDatasetProperty ( projectId , field , datasetName ) for field in properties ]
+ success = await gather ( * properties_tasks , return_exceptions = True ) # type: ignore
+ if not success :
+ log . warning ( f "No properties were uploaded for ODK project ( { projectId } ) dataset name ( { datasetName } )" )
+ except aiohttp . ClientError as e :
+ msg = f "Failed to create properties: { e } "
+ log . error ( msg )
+ raise aiohttp . ClientError ( msg ) from e
+
+ # Manually append to prevent another API call
+ dataset [ "properties" ] = properties
+ return dataset
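The `properties` guard at the top of the method is worth isolating (hypothetical name `properties_ok`): note it only type-checks the *last* element, so a mixed list slips through:

```python
def properties_ok(properties):
    # Mirrors createDataset's validation. Because only properties[-1] is
    # checked, a mixed list like [1, "b"] passes the guard (a quirk of the
    # original check, reproduced here deliberately).
    return not (properties and (not isinstance(properties, list) or not isinstance(properties[-1], str)))

valid = properties_ok(["osm_id", "geometry"])
bare_string = properties_ok("geometry")
mixed = properties_ok([1, "b"])
```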
+
+
+
+
+
+
+
+
+
+
+
+
+
+ createDatasetProperty
+
+
+
+ async
+
+
+
+
createDatasetProperty ( projectId , field_name , datasetName = 'features' )
+
+
+
+
+
Create a property for a dataset.
+
+
+
+
+Parameters:
+
+  Name         Type   Description                                         Default
+  projectId    int    The ID of the project.                              required
+  datasetName  str    The name of the dataset.                            'features'
+  field_name   str    The name of the property (field) to create.         required
+
+Returns:
+
+  Type   Description
+  dict   The response data from the API.
+
+Raises:
+
+  Type                  Description
+  aiohttp.ClientError   If an error occurs during the API request.
+
+
+ Source code in osm_fieldwork/OdkCentralAsync.py
+async def createDatasetProperty (
+ self ,
+ projectId : int ,
+ field_name : str ,
+ datasetName : Optional [ str ] = "features" ,
+):
+ """Create a property for a dataset.
+
+ Args:
+ projectId (int): The ID of the project.
+ datasetName (str): The name of the dataset.
+ field (dict): A dictionary containing the field information.
+
+ Returns:
+ dict: The response data from the API.
+
+ Raises:
+ aiohttp.ClientError: If an error occurs during the API request.
+ """
+ url = f " { self . base } projects/ { projectId } /datasets/ { datasetName } /properties"
+ payload = {
+ "name" : field_name ,
+ }
+
+ try :
+ log . debug ( f "Creating property ( { field_name } ) for dataset { datasetName } " )
+ async with self . session . post ( url , ssl = self . verify , json = payload ) as response :
+ response_data = await response . json ()
+ if response . status not in ( 200 , 201 ):
+ log . warning ( f "Failed to create properties: { response . status } , message=' { response_data } '" )
+ log . debug ( f "Successfully created property ( { field_name } ) for dataset { datasetName } " )
+ return response_data
+ except aiohttp . ClientError as e :
+ msg = f "Failed to create properties: { e } "
+ log . error ( msg )
+ raise aiohttp . ClientError ( msg ) from e
+
+
+
+
+
+
+
+
+
+
+
+
+
+ listEntities
+
+
+
+ async
+
+
+
+
listEntities ( projectId , datasetName )
+
+
+
+
+
Get all Entities for a project dataset (entity list).
+
JSON format:
+[
+{
+ "uuid": "uuid:85cb9aff-005e-4edd-9739-dc9c1a829c44",
+ "createdAt": "2018-01-19T23:58:03.395Z",
+ "updatedAt": "2018-03-21T12:45:02.312Z",
+ "deletedAt": "2018-03-21T12:45:02.312Z",
+ "creatorId": 1,
+ "currentVersion": {
+ "label": "John (88)",
+ "data": {
+ "field1": "value1"
+ },
+ "current": true,
+ "createdAt": "2018-03-21T12:45:02.312Z",
+ "creatorId": 1,
+ "userAgent": "Enketo/3.0.4",
+ "version": 1,
+ "baseVersion": null,
+ "conflictingProperties": null
+ }
+}
+]
+
+
+
+
+Parameters:
+
+  Name         Type   Description                                     Default
+  projectId    int    The ID of the project on ODK Central.           required
+  datasetName  str    The name of a dataset, specific to a project.   required
+
+Returns:
+
+  Type   Description
+  list   a list of JSON entity metadata, for a dataset.
+
+
+ Source code in osm_fieldwork/OdkCentralAsync.py
+async def listEntities (
+ self ,
+ projectId : int ,
+ datasetName : str ,
+) -> list :
+ """Get all Entities for a project dataset (entity list).
+
+ JSON format:
+ [
+ {
+ "uuid": "uuid:85cb9aff-005e-4edd-9739-dc9c1a829c44",
+ "createdAt": "2018-01-19T23:58:03.395Z",
+ "updatedAt": "2018-03-21T12:45:02.312Z",
+ "deletedAt": "2018-03-21T12:45:02.312Z",
+ "creatorId": 1,
+ "currentVersion": {
+ "label": "John (88)",
+ "data": {
+ "field1": "value1"
+ },
+ "current": true,
+ "createdAt": "2018-03-21T12:45:02.312Z",
+ "creatorId": 1,
+ "userAgent": "Enketo/3.0.4",
+ "version": 1,
+ "baseVersion": null,
+ "conflictingProperties": null
+ }
+ }
+ ]
+
+ Args:
+ projectId (int): The ID of the project on ODK Central.
+ datasetName (str): The name of a dataset, specific to a project.
+
+ Returns:
+ list: a list of JSON entity metadata, for a dataset.
+ """
+ url = f " { self . base } projects/ { projectId } /datasets/ { datasetName } /entities"
+ try :
+ async with self . session . get ( url , ssl = self . verify ) as response :
+ return await response . json ()
+ except aiohttp . ClientError as e :
+ msg = f "Error fetching entities: { e } "
+ log . error ( msg )
+ raise aiohttp . ClientError ( msg ) from e
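Working with the response shape documented above, pulling each entity's current label out is a one-liner (hypothetical helper `entity_labels`; the sample entity follows the JSON format shown in the docstring):

```python
def entity_labels(entities):
    # Each entity's live label sits under currentVersion in the
    # listEntities response shape.
    return [e["currentVersion"]["label"] for e in entities]

labels = entity_labels([
    {
        "uuid": "uuid:85cb9aff-005e-4edd-9739-dc9c1a829c44",
        "currentVersion": {"label": "John (88)", "data": {"field1": "value1"}},
    }
])
```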
+
+
+
+
+
+
+
+
+
+
+
+
+
+ getEntity
+
+
+
+ async
+
+
+
+
getEntity ( projectId , datasetName , entityUuid )
+
+
+
+
+
Get a single Entity by its UUID for a project.
+
JSON response:
+{
+"uuid": "a54400b6-49fe-4787-9ab8-7e2f56ff52bc",
+"createdAt": "2024-04-15T09:26:08.209Z",
+"creatorId": 5,
+"updatedAt": null,
+"deletedAt": null,
+"conflict": null,
+"currentVersion": {
+ "createdAt": "2024-04-15T09:26:08.209Z",
+ "current": true,
+ "label": "test entity",
+ "creatorId": 5,
+ "userAgent": "Python/3.10 aiohttp/3.9.3",
+ "data": {
+ "osm_id": "1",
+ "geometry": "test"
+ },
+ "version": 1,
+ "baseVersion": null,
+ "dataReceived": {
+ "label": "test entity",
+ "osm_id": "1",
+ "geometry": "test"
+ },
+ "conflictingProperties": null
+}
+}
+
+
+
+
+Parameters:
+
+  Name         Type   Description                                       Default
+  projectId    int    The ID of the project on ODK Central.             required
+  datasetName  str    The name of a dataset, specific to a project.     required
+  entityUuid   str    Unique identifier of the entity in the dataset.   required
+
+Returns:
+
+  Type   Description
+  dict   the JSON entity details, for a specific dataset.
+
+
+ Source code in osm_fieldwork/OdkCentralAsync.py
+async def getEntity (
+ self ,
+ projectId : int ,
+ datasetName : str ,
+ entityUuid : str ,
+) -> dict :
+ """Get a single Entity by its UUID for a project.
+
+ JSON response:
+ {
+ "uuid": "a54400b6-49fe-4787-9ab8-7e2f56ff52bc",
+ "createdAt": "2024-04-15T09:26:08.209Z",
+ "creatorId": 5,
+ "updatedAt": null,
+ "deletedAt": null,
+ "conflict": null,
+ "currentVersion": {
+ "createdAt": "2024-04-15T09:26:08.209Z",
+ "current": true,
+ "label": "test entity",
+ "creatorId": 5,
+ "userAgent": "Python/3.10 aiohttp/3.9.3",
+ "data": {
+ "osm_id": "1",
+ "geometry": "test"
+ },
+ "version": 1,
+ "baseVersion": null,
+ "dataReceived": {
+ "label": "test entity",
+ "osm_id": "1",
+ "geometry": "test"
+ },
+ "conflictingProperties": null
+ }
+ }
+
+ Args:
+ projectId (int): The ID of the project on ODK Central.
+ datasetName (str): The name of a dataset, specific to a project.
+ entityUuid (str): Unique identifier of the entity in the dataset.
+
+ Returns:
+ dict: the JSON entity details, for a specific dataset.
+ """
+ url = f " { self . base } projects/ { projectId } /datasets/ { datasetName } " f "/entities/ { entityUuid } "
+ try :
+ async with self . session . get ( url , ssl = self . verify ) as response :
+ return await response . json ()
+ except aiohttp . ClientError as e :
+ # NOTE skip raising exception on HTTP 404 (not found)
+ log . error ( f "Error fetching entity: { e } " )
+ return {}
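The endpoint path used by getEntity can be factored into a small helper. A minimal sketch (the helper name `entity_url` is hypothetical, not part of the library):

```python
def entity_url(base: str, project_id: int, dataset_name: str, entity_uuid: str) -> str:
    # Mirrors the URL built by getEntity:
    # {base}projects/{projectId}/datasets/{datasetName}/entities/{entityUuid}
    return f"{base}projects/{project_id}/datasets/{dataset_name}/entities/{entity_uuid}"
```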
+
+
+
+
+
+
+
+
+
+
+
+
+
+ createEntity
+
+
+
+ async
+
+
+
+
createEntity ( projectId , datasetName , label , data )
+
+
+
+
+
Create a new Entity in a project dataset (entity list).
+
JSON request:
+{
+"uuid": "54a405a0-53ce-4748-9788-d23a30cc3afa",
+"label": "John Doe (88)",
+"data": {
+ "firstName": "John",
+ "age": "88"
+}
+}
+
JSON response:
+{
+"uuid": "d2e03bf8-cfc9-45c6-ab23-b8bc5b7d9aba",
+"createdAt": "2024-04-12T15:22:02.148Z",
+"creatorId": 5,
+"updatedAt": None,
+"deletedAt": None,
+"conflict": None,
+"currentVersion": {
+ "createdAt": "2024-04-12T15:22:02.148Z",
+ "current": True,
+ "label": "test entity 1",
+ "creatorId": 5,
+ "userAgent": "Python/3.10 aiohttp/3.9.3",
+ "data": {
+ "status": "READY",
+ "geometry": "test"
+ },
+ "version": 1,
+ "baseVersion": None,
+ "dataReceived": {
+ "label": "test entity 1",
+ "status": "READY",
+ "geometry": "test"
+ },
+ "conflictingProperties": None
+}
+}
+
+
+
+
Parameters:
+
+
+
+ Name
+ Type
+ Description
+ Default
+
+
+
+
+ projectId
+
+ int
+
+
+
+
The ID of the project on ODK Central.
+
+
+
+ required
+
+
+
+ datasetName
+
+ str
+
+
+
+
The name of a dataset, specific to a project.
+
+
+
+ required
+
+
+
+ label
+
+ str
+
+
+
+
Label for the Entity.
+
+
+
+ required
+
+
+
+ data
+
+ dict
+
+
+
+
Key:Value pairs to insert as Entity data.
+
+
+
+ required
+
+
+
+
+
+
+
+
Returns:
+
+
+
+Name Type
+ Description
+
+
+
+
+dict
+ dict
+
+
+
+
JSON of entity details.
+The 'uuid' field includes the unique entity identifier.
+
+
+
+
+
+
+
+ Source code in osm_fieldwork/OdkCentralAsync.py
+async def createEntity (
+ self ,
+ projectId : int ,
+ datasetName : str ,
+ label : str ,
+ data : dict ,
+) -> dict :
+ """Create a new Entity in a project dataset (entity list).
+
+ JSON request:
+ {
+ "uuid": "54a405a0-53ce-4748-9788-d23a30cc3afa",
+ "label": "John Doe (88)",
+ "data": {
+ "firstName": "John",
+ "age": "88"
+ }
+ }
+
+ JSON response:
+ {
+ "uuid": "d2e03bf8-cfc9-45c6-ab23-b8bc5b7d9aba",
+ "createdAt": "2024-04-12T15:22:02.148Z",
+ "creatorId": 5,
+ "updatedAt": None,
+ "deletedAt": None,
+ "conflict": None,
+ "currentVersion": {
+ "createdAt": "2024-04-12T15:22:02.148Z",
+ "current": True,
+ "label": "test entity 1",
+ "creatorId": 5,
+ "userAgent": "Python/3.10 aiohttp/3.9.3",
+ "data": {
+ "status": "READY",
+ "geometry": "test"
+ },
+ "version": 1,
+ "baseVersion": None,
+ "dataReceived": {
+ "label": "test entity 1",
+ "status": "READY",
+ "geometry": "test"
+ },
+ "conflictingProperties": None
+ }
+ }
+
+ Args:
+ projectId (int): The ID of the project on ODK Central.
+ datasetName (str): The name of a dataset, specific to a project.
+ label (str): Label for the Entity.
+ data (dict): Key:Value pairs to insert as Entity data.
+
+ Returns:
+ dict: JSON of entity details.
+ The 'uuid' field includes the unique entity identifier.
+ """
+ # The CSV must contain a geometry field to work
+ # TODO also add this validation to uploadMedia if CSV format
+
+ required_fields = [ "geometry" ]
+ if not all ( key in data for key in required_fields ):
+ msg = "'geometry' data field is mandatory"
+ log . debug ( msg )
+ raise ValueError ( msg )
+
+ url = f " { self . base } projects/ { projectId } /datasets/ { datasetName } /entities"
+ try :
+ async with self . session . post (
+ url ,
+ ssl = self . verify ,
+ json = {
+ "uuid" : str ( uuid4 ()),
+ "label" : label ,
+ "data" : data ,
+ },
+ ) as response :
+ return await response . json ()
+ except aiohttp . ClientError as e :
+ msg = f "Failed to create Entity: { e } "
+ log . error ( msg )
+ raise aiohttp . ClientError ( msg ) from e
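The mandatory-field check at the top of createEntity can be exercised on its own. A minimal sketch (function name hypothetical; the method currently requires only a 'geometry' field):

```python
def validate_entity_data(data: dict, required_fields: tuple = ("geometry",)) -> None:
    # Reject Entity data missing any mandatory field,
    # matching the ValueError raised by createEntity
    if not all(key in data for key in required_fields):
        raise ValueError("'geometry' data field is mandatory")
```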
+
+
+
+
+
+
+
+
+
+
+
+
+
+ createEntities
+
+
+
+ async
+
+
+
+
createEntities ( projectId , datasetName , entities )
+
+
+
+
+
Bulk create Entities in a project dataset (entity list).
+
+
+
+
Parameters:
+
+
+
+ Name
+ Type
+ Description
+ Default
+
+
+
+
+ projectId
+
+ int
+
+
+
+
The ID of the project on ODK Central.
+
+
+
+ required
+
+
+
+ datasetName
+
+ str
+
+
+
+
The name of a dataset, specific to a project.
+
+
+
+ required
+
+
+
+ entities
+
+ list[EntityIn ]
+
+
+
+
A list of Entities to insert.
+Format: {"label": "John Doe", "data": {"firstName": "John", "age": "22"}}
+
+
+
+ required
+
+
+
+
+
+
+
+
Returns:
+
+
+
+Name Type
+ Description
+
+
+
+
+
+ dict
+
+
+
+
list: A list of Entity detail JSONs.
+
+
+
+
+
+ dict
+
+
+
+
The 'uuid' field includes the unique entity identifier.
+
+
+
+
+dict
+ dict
+
+
+
+
{'success': true}
+When creating bulk entities, ODK Central currently returns only this.
+
+
+
+
+
+
+
+ Source code in osm_fieldwork/OdkCentralAsync.py
+async def createEntities (
+ self ,
+ projectId : int ,
+ datasetName : str ,
+ entities : list [ EntityIn ],
+) -> dict :
+ """Bulk create Entities in a project dataset (entity list).
+
+ Args:
+ projectId (int): The ID of the project on ODK Central.
+ datasetName (str): The name of a dataset, specific to a project.
+ entities (list[EntityIn]): A list of Entities to insert.
+ Format: {"label": "John Doe", "data": {"firstName": "John", "age": "22"}}
+
+ Returns:
+ # list: A list of Entity detail JSONs.
+ # The 'uuid' field includes the unique entity identifier.
+ dict: {'success': true}
+ When creating bulk entities, ODK Central currently returns only this.
+ """
+ # Validation
+ if not isinstance ( entities , list ):
+ raise ValueError ( "Entities must be a list" )
+
+ log . info ( f "Bulk uploading ( { len ( entities ) } ) Entities for ODK project ( { projectId } ) dataset ( { datasetName } )" )
+ url = f " { self . base } projects/ { projectId } /datasets/ { datasetName } /entities"
+ payload = { "entities" : entities , "source" : { "name" : "features.csv" }}
+
+ try :
+ async with self . session . post ( url , ssl = self . verify , json = payload ) as response :
+ response . raise_for_status ()
+ log . info ( f "Successfully created entities for ODK project ( { projectId } ) in dataset ( { datasetName } )" )
+ return await response . json ()
+ except aiohttp . ClientError as e :
+ msg = f "Failed to create Entities: { e } "
+ log . error ( msg )
+ raise aiohttp . ClientError ( msg ) from e
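The request body posted by createEntities is a thin wrapper around the entity list. A sketch of the payload construction (helper name hypothetical; the hard-coded "features.csv" source name matches the code above):

```python
def build_bulk_entity_payload(entities: list) -> dict:
    # Same shape createEntities POSTs to /datasets/{name}/entities
    if not isinstance(entities, list):
        raise ValueError("Entities must be a list")
    return {"entities": entities, "source": {"name": "features.csv"}}
```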
+
+
+
+
+
+
+
+
+
+
+
+
+
+ updateEntity
+
+
+
+ async
+
+
+
+
updateEntity ( projectId , datasetName , entityUuid , label = None , data = None , newVersion = None )
+
+
+
+
+
Update an existing Entity in a project dataset (entity list).
+
The JSON request format is the same as creating, minus the 'uuid' field.
+The PATCH will only update the specific fields specified, leaving the
+ remainder.
+
If no 'newVersion' param is provided, the entity will be force updated
+ in place.
+If 'newVersion' is provided, this must be a single integer increment
+ from the current version.
+
Example response:
+{
+"uuid": "71fff014-7518-429b-b97c-1332149efe7a",
+"createdAt": "2024-04-12T14:22:37.121Z",
+"creatorId": 5,
+"updatedAt": "2024-04-12T14:22:37.544Z",
+"deletedAt": None,
+"conflict": None,
+"currentVersion": {
+ "createdAt": "2024-04-12T14:22:37.544Z",
+ "current": True,
+ "label": "new label",
+ "creatorId": 5,
+ "userAgent": "Python/3.10 aiohttp/3.9.3",
+ "data": {
+ "osm_id": "1",
+ "status": "new status",
+ "geometry": "test",
+ "project_id": "100"
+ },
+ "version": 3,
+ "baseVersion": 2,
+ "dataReceived": {
+ "status": "new status",
+ "project_id": "100"
+ },
+ "conflictingProperties": None
+}
+}
+
+
+
+
Parameters:
+
+
+
+ Name
+ Type
+ Description
+ Default
+
+
+
+
+ projectId
+
+ int
+
+
+
+
The ID of the project on ODK Central.
+
+
+
+ required
+
+
+
+ datasetName
+
+ str
+
+
+
+
The name of a dataset, specific to a project.
+
+
+
+ required
+
+
+
+ entityUuid
+
+ str
+
+
+
+
Unique identifier of the entity.
+
+
+
+ required
+
+
+
+ label
+
+ str
+
+
+
+
Label for the Entity.
+
+
+
+ None
+
+
+
+ data
+
+ dict
+
+
+
+
Key:Value pairs to insert as Entity data.
+
+
+
+ None
+
+
+
+ newVersion
+
+ int
+
+
+
+
Integer version to increment to (current version + 1).
+
+
+
+ None
+
+
+
+
+
+
+
+
Returns:
+
+
+
+Name Type
+ Description
+
+
+
+
+dict
+ dict
+
+
+
+
JSON of entity details.
+The 'uuid' field includes the unique entity identifier.
+
+
+
+
+
+
+
+ Source code in osm_fieldwork/OdkCentralAsync.py
+async def updateEntity (
+ self ,
+ projectId : int ,
+ datasetName : str ,
+ entityUuid : str ,
+ label : Optional [ str ] = None ,
+ data : Optional [ dict ] = None ,
+ newVersion : Optional [ int ] = None ,
+) -> dict :
+ """Update an existing Entity in a project dataset (entity list).
+
+ The JSON request format is the same as creating, minus the 'uuid' field.
+ The PATCH will only update the specific fields specified, leaving the
+ remainder.
+
+ If no 'newVersion' param is provided, the entity will be force updated
+ in place.
+ If 'newVersion' is provided, this must be a single integer increment
+ from the current version.
+
+ Example response:
+ {
+ "uuid": "71fff014-7518-429b-b97c-1332149efe7a",
+ "createdAt": "2024-04-12T14:22:37.121Z",
+ "creatorId": 5,
+ "updatedAt": "2024-04-12T14:22:37.544Z",
+ "deletedAt": None,
+ "conflict": None,
+ "currentVersion": {
+ "createdAt": "2024-04-12T14:22:37.544Z",
+ "current": True,
+ "label": "new label",
+ "creatorId": 5,
+ "userAgent": "Python/3.10 aiohttp/3.9.3",
+ "data": {
+ "osm_id": "1",
+ "status": "new status",
+ "geometry": "test",
+ "project_id": "100"
+ },
+ "version": 3,
+ "baseVersion": 2,
+ "dataReceived": {
+ "status": "new status",
+ "project_id": "100"
+ },
+ "conflictingProperties": None
+ }
+ }
+
+ Args:
+ projectId (int): The ID of the project on ODK Central.
+ datasetName (str): The name of a dataset, specific to a project.
+ entityUuid (str): Unique identifier of the entity.
+ label (str): Label for the Entity.
+ data (dict): Key:Value pairs to insert as Entity data.
+ newVersion (int): Integer version to increment to (current version + 1).
+
+ Returns:
+ dict: JSON of entity details.
+ The 'uuid' field includes the unique entity identifier.
+ """
+ if not label and not data :
+ msg = "One of either the 'label' or 'data' fields must be passed"
+ log . debug ( msg )
+ raise ValueError ( msg )
+
+ json_data = {}
+ if data :
+ json_data [ "data" ] = data
+ if label :
+ json_data [ "label" ] = label
+
+ url = f " { self . base } projects/ { projectId } /datasets/ { datasetName } /entities/ { entityUuid } "
+ if newVersion :
+ url = f " { url } ?baseVersion= { newVersion - 1 } "
+ else :
+ url = f " { url } ?force=true"
+
+ try :
+ log . info (
+ f "Updating Entity ( { entityUuid } ) for ODK project ( { projectId } ) "
+ f "with params: label= { label } data= { data } newVersion= { newVersion } "
+ )
+ async with self . session . patch (
+ url ,
+ ssl = self . verify ,
+ json = json_data ,
+ ) as response :
+ return await response . json ()
+ except aiohttp . ClientError as e :
+ msg = f "Failed to update Entity: { e } "
+ log . error ( msg )
+ raise aiohttp . ClientError ( msg ) from e
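The baseVersion/force logic above is easy to get wrong, so here it is isolated (helper name hypothetical): passing newVersion=N sends baseVersion=N-1, while omitting it forces the update in place.

```python
from typing import Optional

def update_query(entity_url: str, new_version: Optional[int] = None) -> str:
    # newVersion must be exactly current version + 1,
    # so the server is told the base version being updated from
    if new_version:
        return f"{entity_url}?baseVersion={new_version - 1}"
    # No version given: force-update the entity in place
    return f"{entity_url}?force=true"
```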
+
+
+
+
+
+
+
+
+
+
+
+
+
+ deleteEntity
+
+
+
+ async
+
+
+
+
deleteEntity ( projectId , datasetName , entityUuid )
+
+
+
+
+
Delete an Entity in a project dataset (entity list).
+
Only performs a soft deletion, so the Entity is actually archived.
+
+
+
+
Parameters:
+
+
+
+ Name
+ Type
+ Description
+ Default
+
+
+
+
+ projectId
+
+ int
+
+
+
+
The ID of the project on ODK Central.
+
+
+
+ required
+
+
+
+ datasetName
+
+ str
+
+
+
+
The name of a dataset, specific to a project.
+
+
+
+ required
+
+
+
+ entityUuid
+
+ str
+
+
+
+
Unique identifier of the entity.
+
+
+
+ required
+
+
+
+
+
+
+
+
Returns:
+
+
+
+Name Type
+ Description
+
+
+
+
+bool
+ bool
+
+
+
+
Deletion successful or not.
+
+
+
+
+
+
+
+ Source code in osm_fieldwork/OdkCentralAsync.py
+async def deleteEntity (
+ self ,
+ projectId : int ,
+ datasetName : str ,
+ entityUuid : str ,
+) -> bool :
+ """Delete an Entity in a project dataset (entity list).
+
+ Only performs a soft deletion, so the Entity is actually archived.
+
+ Args:
+ projectId (int): The ID of the project on ODK Central.
+ datasetName (str): The name of a dataset, specific to a project.
+ entityUuid (str): Unique identifier of the entity.
+
+ Returns:
+ bool: Deletion successful or not.
+ """
+ url = f " { self . base } projects/ { projectId } /datasets/ { datasetName } /entities/ { entityUuid } "
+ log . debug ( f "Deleting dataset ( { datasetName } ) entity UUID ( { entityUuid } )" )
+ try :
+ log . info ( f "Deleting Entity ( { entityUuid } ) for ODK project ( { projectId } ) " f "and dataset ( { datasetName } )" )
+ async with self . session . delete ( url , ssl = self . verify ) as response :
+ success = ( response_msg := await response . json ()) . get ( "success" , False )
+ if not success :
+ log . debug ( f "Server returned deletion unsuccessful: { response_msg } " )
+ return success
+ except aiohttp . ClientError as e :
+ msg = f "Failed to delete Entity: { e } "
+ log . error ( msg )
+ raise aiohttp . ClientError ( msg ) from e
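deleteEntity treats the response as successful only when Central replies {"success": true}. That check as a standalone sketch (helper name hypothetical):

```python
def deletion_succeeded(response_json: dict) -> bool:
    # ODK Central soft-deletes the Entity and replies {"success": true}
    return response_json.get("success", False)
```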
+
+
+
+
+
+
+
+
+
+
+
+
+
+ getEntityCount
+
+
+
+ async
+
+
+
+
getEntityCount ( projectId , datasetName )
+
+
+
+
+
Get only the count of the Entity entries.
+
+
+
+
Parameters:
+
+
+
+ Name
+ Type
+ Description
+ Default
+
+
+
+
+ projectId
+
+ int
+
+
+
+
The ID of the project on ODK Central.
+
+
+
+ required
+
+
+
+ datasetName
+
+ str
+
+
+
+
The name of a dataset, specific to a project.
+
+
+
+ required
+
+
+
+
+
+
+
+
Returns:
+
+
+
+Name Type
+ Description
+
+
+
+
+int
+ int
+
+
+
+
The count of Entities in the project dataset.
+
+
+
+
+
+
+
+ Source code in osm_fieldwork/OdkCentralAsync.py
+async def getEntityCount (
+ self ,
+ projectId : int ,
+ datasetName : str ,
+) -> int :
+ """Get only the count of the Entity entries.
+
+ Args:
+ projectId (int): The ID of the project on ODK Central.
+ datasetName (str): The name of a dataset, specific to a project.
+
+ Returns:
+ int: The count of Entities in the project dataset.
+ """
+ # NOTE returns no entity data (value: []), only the count
+ url = f " { self . base } projects/ { projectId } /datasets/ { datasetName } .svc/Entities?%24top=0&%24count=true"
+ try :
+ async with self . session . get ( url , ssl = self . verify ) as response :
+ count = ( await response . json ()) . get ( "@odata.count" , None )
+ except aiohttp . ClientError as e :
+ msg = f "Failed to get Entity count for ODK project ( { projectId } ): { e } "
+ log . error ( msg )
+ raise aiohttp . ClientError ( msg ) from e
+
+ if count is None :
+ log . debug ( f "Project ( { projectId } ) has no Entities in dataset ( { datasetName } )" )
+ return 0
+
+ return count
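The OData query $top=0&$count=true returns an empty value array plus an "@odata.count" field; parsing it reduces to (helper name hypothetical):

```python
def parse_entity_count(response_json: dict) -> int:
    # "value" is empty with $top=0; only the count matters
    count = response_json.get("@odata.count")
    return 0 if count is None else count
```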
+
+
+
+
+
+
+
+
+
+
+
+
+
+ getEntityData
+
+
+
+ async
+
+
+
+
getEntityData ( projectId , datasetName , url_params = None , include_metadata = False )
+
+
+
+
+
Get a lightweight JSON of the entity data fields in a dataset.
+
Be sure to check the latest docs to see which fields are supported for
+OData filtering:
+https://docs.getodk.org/central-api-odata-endpoints/#id3
+
Example response list (include_metadata=False):
+[
+ {
+ "__id": "523699d0-66ec-4cfc-a76b-4617c01c6b92",
+ "label": "the_label_you_defined",
+ "__system": {
+ "createdAt": "2024-03-24T06:30:31.219Z",
+ "creatorId": "7",
+ "creatorName": "fmtm@hotosm.org",
+ "updates": 4,
+ "updatedAt": "2024-03-24T07:12:55.871Z",
+ "version": 5,
+ "conflict": null
+ },
+ "geometry": "javarosa format geometry",
+ "user_defined_field1": "text",
+ "user_defined_field2": "text",
+ "user_defined_field3": "test"
+ }
+]
+
Example response JSON where:
+- url_params="$top=205&$count=true"
+- include_metadata=True automatically due to use of $top param
+
{
+"value": [
+ {
+ "__id": "523699d0-66ec-4cfc-a76b-4617c01c6b92",
+ "label": "the_label_you_defined",
+ "__system": {
+ "createdAt": "2024-03-24T06:30:31.219Z",
+ "creatorId": "7",
+ "creatorName": "fmtm@hotosm.org",
+ "updates": 4,
+ "updatedAt": "2024-03-24T07:12:55.871Z",
+ "version": 5,
+ "conflict": null
+ },
+ "geometry": "javarosa format geometry",
+ "user_defined_field1": "text",
+ "user_defined_field2": "text",
+ "user_defined_field3": "test"
+ }
+]
+"@odata.context": (
+ "https://URL/v1/projects/6/datasets/buildings.svc/$metadata#Entities",
+)
+"@odata.nextLink": (
+ "https://URL/v1/projects/6/datasets/buildings.svc/Entities"
+ "?%24top=250&%24count=true&%24skiptoken=returnedtokenhere%3D"
+)
+"@odata.count": 667
+}
+
Info on OData URL params:
+http://docs.oasis-open.org
+/odata/odata/v4.01/odata-v4.01-part1-protocol.html#_Toc31358948
+
+
+
+
Parameters:
+
+
+
+ Name
+ Type
+ Description
+ Default
+
+
+
+
+ projectId
+
+ int
+
+
+
+
The ID of the project on ODK Central.
+
+
+
+ required
+
+
+
+ datasetName
+
+ str
+
+
+
+
The name of a dataset, specific to a project.
+
+
+
+ required
+
+
+
+ url_params
+
+ str
+
+
+
+
Any supported OData URL params, such as 'filter'
+or 'select'. The ? is not required.
+
+
+
+ None
+
+
+
+ include_metadata
+
+ bool
+
+
+
+
Include additional metadata.
+If true, returns a dict; if false, returns a list of Entities.
+If $top is included in url_params, this is enabled by default to get
+the "@odata.nextLink" field.
+
+
+
+ False
+
+
+
+
+
+
+
+
Returns:
+
+
+
+ Type
+ Description
+
+
+
+
+
+ dict | list
+
+
+
+
list | dict: All (or filtered) entity data for a project dataset.
+
+
+
+
+
+
+
+ Source code in osm_fieldwork/OdkCentralAsync.py
+async def getEntityData (
+ self ,
+ projectId : int ,
+ datasetName : str ,
+ url_params : Optional [ str ] = None ,
+ include_metadata : Optional [ bool ] = False ,
+) -> dict | list :
+ """Get a lightweight JSON of the entity data fields in a dataset.
+
+ Be sure to check the latest docs to see which fields are supported for
+ OData filtering:
+ https://docs.getodk.org/central-api-odata-endpoints/#id3
+
+ Example response list (include_metadata=False):
+ [
+ {
+ "__id": "523699d0-66ec-4cfc-a76b-4617c01c6b92",
+ "label": "the_label_you_defined",
+ "__system": {
+ "createdAt": "2024-03-24T06:30:31.219Z",
+ "creatorId": "7",
+ "creatorName": "fmtm@hotosm.org",
+ "updates": 4,
+ "updatedAt": "2024-03-24T07:12:55.871Z",
+ "version": 5,
+ "conflict": null
+ },
+ "geometry": "javarosa format geometry",
+ "user_defined_field1": "text",
+ "user_defined_field2": "text",
+ "user_defined_field3": "test"
+ }
+ ]
+
+ Example response JSON where:
+ - url_params="$top=205&$count=true"
+ - include_metadata=True automatically due to use of $top param
+
+ {
+ "value": [
+ {
+ "__id": "523699d0-66ec-4cfc-a76b-4617c01c6b92",
+ "label": "the_label_you_defined",
+ "__system": {
+ "createdAt": "2024-03-24T06:30:31.219Z",
+ "creatorId": "7",
+ "creatorName": "fmtm@hotosm.org",
+ "updates": 4,
+ "updatedAt": "2024-03-24T07:12:55.871Z",
+ "version": 5,
+ "conflict": null
+ },
+ "geometry": "javarosa format geometry",
+ "user_defined_field1": "text",
+ "user_defined_field2": "text",
+ "user_defined_field3": "test"
+ }
+ ]
+ "@odata.context": (
+ "https://URL/v1/projects/6/datasets/buildings.svc/$metadata#Entities",
+ )
+ "@odata.nextLink": (
+ "https://URL/v1/projects/6/datasets/buildings.svc/Entities"
+ "?%24top=250&%24count=true&%24skiptoken=returnedtokenhere%3D"
+ )
+ "@odata.count": 667
+ }
+
+ Info on OData URL params:
+ http://docs.oasis-open.org
+ /odata/odata/v4.01/odata-v4.01-part1-protocol.html#_Toc31358948
+
+ Args:
+ projectId (int): The ID of the project on ODK Central.
+ datasetName (str): The name of a dataset, specific to a project.
+ url_params (str): Any supported OData URL params, such as 'filter'
+ or 'select'. The ? is not required.
+ include_metadata (bool): Include additional metadata.
+ If true, returns a dict; if false, returns a list of Entities.
+ If $top is included in url_params, this is enabled by default to get
+ the "@odata.nextLink" field.
+
+ Returns:
+ list | dict: All (or filtered) entity data for a project dataset.
+ """
+ url = f " { self . base } projects/ { projectId } /datasets/ { datasetName } .svc/Entities"
+ if url_params :
+ url += f "? { url_params } "
+ if "$top" in url_params :
+ # Force enable metadata, as required for pagination
+ include_metadata = True
+
+ try :
+ async with self . session . get ( url , ssl = self . verify ) as response :
+ response_json = await response . json ()
+ if not include_metadata :
+ return response_json . get ( "value" , [])
+ return response_json
+ except aiohttp . ClientError as e :
+ msg = f "Failed to get Entity data for ODK project ( { projectId } ): { e } "
+ log . error ( msg )
+ raise aiohttp . ClientError ( msg ) from e
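When $top is used, responses carry "@odata.nextLink" until the last page. A sketch of draining pre-fetched pages (hypothetical helper; real code would fetch each nextLink over HTTP):

```python
def collect_pages(pages: list) -> list:
    # Accumulate "value" arrays, stopping once a page
    # has no "@odata.nextLink" (i.e. it is the last page)
    entities = []
    for page in pages:
        entities.extend(page.get("value", []))
        if "@odata.nextLink" not in page:
            break
    return entities
```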
+
+
+
+
+
+
+
+
+
+
+
+
+
options:
+show_source: false
+heading_level: 3
+Usage Example
+
+An async context manager must be used (async with).
+
+from osm_fieldwork.OdkCentralAsync import OdkProject
+
+async with OdkProject (
+ url = "http://server.com" ,
+ user = "user@domain.com" ,
+ passwd = "password" ,
+) as odk_central :
+ projects = await odk_central . listProjects ()
+
+
+
+
+
+
+ Last update:
+ October 18, 2024
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
\ No newline at end of file
diff --git a/api/basemapper/index.html b/api/basemapper/index.html
new file mode 100644
index 000000000..4a71668ab
--- /dev/null
+++ b/api/basemapper/index.html
@@ -0,0 +1,2978 @@
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+ basemapper - osm-fieldwork
+
+
+basemapper.py
+
+
+
+
+
+
+
+
+
+
+
Thread to handle downloads for Queue.
+
+
+
+
Parameters:
+
+
+
+ Name
+ Type
+ Description
+ Default
+
+
+
+
+ dest
+
+ str
+
+
+
+
The filespec of the tile cache.
+
+
+
+ required
+
+
+
+ mirrors
+
+ list
+
+
+
+
The list of mirrors to get imagery.
+
+
+
+ required
+
+
+
+ tiles
+
+ list
+
+
+
+
The list of tiles to download.
+
+
+
+ required
+
+
+
+
+
+
+ Source code in osm_fieldwork/basemapper.py
+def dlthread ( dest : str , mirrors : list [ dict ], tiles : list [ tuple ]) -> None :
+ """Thread to handle downloads for Queue.
+
+ Args:
+ dest (str): The filespec of the tile cache.
+ mirrors (list): The list of mirrors to get imagery.
+ tiles (list): The list of tiles to download.
+ """
+ if len ( tiles ) == 0 :
+ # epdb.st()
+ return
+
+ # Create the subdirectories as pySmartDL doesn't do it for us
+ Path ( dest ) . mkdir ( parents = True , exist_ok = True )
+
+ log . info ( f "Downloading { len ( tiles ) } tiles in thread { threading . get_ident () } to { dest } " )
+
+ with concurrent . futures . ThreadPoolExecutor ( max_workers = 4 ) as executor :
+ futures = [ executor . submit ( download_tile , dest , tile , mirrors ) for tile in tiles ]
+ concurrent . futures . wait ( futures )
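The submit-then-wait pattern in dlthread generalises to any worker function. A self-contained sketch (names hypothetical):

```python
import concurrent.futures

def run_in_threads(items, worker, max_workers: int = 4) -> list:
    # One job per item, then block until every future completes,
    # the same pattern dlthread uses with download_tile
    with concurrent.futures.ThreadPoolExecutor(max_workers=max_workers) as executor:
        futures = [executor.submit(worker, item) for item in items]
        concurrent.futures.wait(futures)
        return [f.result() for f in futures]
```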
+
+
+
+
+
options:
+show_source: false
+heading_level: 3
+
+
+
+
+
+
+
+
+
+ Bases: object
+
+
+
Basemapper parent class.
+
+
+
+
+
Parameters:
+
+
+
+ Name
+ Type
+ Description
+ Default
+
+
+
+
+ boundary
+
+ Union [str, BytesIO ]
+
+
+
+
A BBOX string or GeoJSON provided as BytesIO object of the AOI.
+The GeoJSON can contain multiple geometries.
+
+
+
+ required
+
+
+
+ base
+
+ str
+
+
+
+
The base directory to cache map tiles in
+
+
+
+ required
+
+
+
+ source
+
+ str
+
+
+
+
The upstream data source for map tiles
+
+
+
+ required
+
+
+
+
+
+
+
+
Returns:
+
+
+
+ Type
+ Description
+
+
+
+
+
+ BaseMapper
+
+
+
+
An instance of this class
+
+
+
+
+
+
+
+ Source code in osm_fieldwork/basemapper.py
+def __init__ (
+ self ,
+ boundary : Union [ str , BytesIO ],
+ base : str ,
+ source : str ,
+):
+ """Create an tile basemap for ODK Collect.
+
+ Args:
+ boundary (Union[str, BytesIO]): A BBOX string or GeoJSON provided as BytesIO object of the AOI.
+ The GeoJSON can contain multiple geometries.
+ base (str): The base directory to cache map tiles in
+ source (str): The upstream data source for map tiles
+
+ Returns:
+ (BaseMapper): An instance of this class
+ """
+ bbox_factory = BoundaryHandlerFactory ( boundary )
+ self . bbox = bbox_factory . get_bounding_box ()
+ self . tiles = list ()
+ self . base = base
+ # sources for imagery
+ self . source = source
+ self . sources = dict ()
+
+ path = xlsforms_path . replace ( "xlsforms" , "imagery.yaml" )
+ self . yaml = YamlFile ( path )
+
+ for entry in self . yaml . yaml [ "sources" ]:
+ for k , v in entry . items ():
+ src = dict ()
+ for item in v :
+ src [ "source" ] = k
+ for k1 , v1 in item . items ():
+ # print(f"\tFIXME2: {k1} - {v1}")
+ src [ k1 ] = v1
+ self . sources [ k ] = src
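The nested loop that flattens imagery.yaml entries can be hard to follow; the same transformation as a standalone sketch (helper name and sample data hypothetical):

```python
def flatten_sources(yaml_sources: list) -> dict:
    # Each entry maps a source name to a list of single-key dicts;
    # merge them into one flat dict per source, keyed by name
    sources = {}
    for entry in yaml_sources:
        for name, items in entry.items():
            src = {"source": name}
            for item in items:
                src.update(item)
            sources[name] = src
    return sources
```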
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+ customTMS
+
+
+
+
customTMS ( url , is_oam = False , is_xy = False )
+
+
+
+
+
Add a custom TMS URL to the list of sources.
+
The url must end in %s to be replaced with the tile xyz values.
+
Format examples:
+https://basemap.nationalmap.gov/ArcGIS/rest/services/USGSTopo/MapServer/tile/{z}/{y}/{x}
+https://maps.nyc.gov/xyz/1.0.0/carto/basemap/%s
+https://maps.nyc.gov/xyz/1.0.0/carto/basemap/{z}/{x}/{y}.jpg
+
The method will replace {z}/{x}/{y}.jpg with %s
+
+
+
+
Parameters:
+
+
+
+ Name
+ Type
+ Description
+ Default
+
+
+
+
+ url
+
+ str
+
+
+
+
The URL string
+ required
+
+
+
+ is_oam
+
+ bool
+
+
+
+
The URL is a custom OpenAerialMap URL (overrides the dummy OAM URL)
+
+
+
+ False
+
+
+
+ is_xy
+
+ bool
+
+
+
+
Swap the x and y for the provider --> 'zxy'
+
+
+
+ False
+
+
+
+
+
+
+ Source code in osm_fieldwork/basemapper.py
+def customTMS ( self , url : str , is_oam : bool = False , is_xy : bool = False ):
+ """Add a custom TMS URL to the list of sources.
+
+ The url must end in %s to be replaced with the tile xyz values.
+
+ Format examples:
+ https://basemap.nationalmap.gov/ArcGIS/rest/services/USGSTopo/MapServer/tile/{z}/{y}/{x}
+ https://maps.nyc.gov/xyz/1.0.0/carto/basemap/%s
+ https://maps.nyc.gov/xyz/1.0.0/carto/basemap/{z}/{x}/{y}.jpg
+
+ The method will replace {z}/{x}/{y}.jpg with %s
+
+ Args:
+ url (str): The URL string
+ is_oam (bool): The URL is a custom OpenAerialMap URL (overrides the dummy OAM URL)
+ is_xy (bool): Swap the x and y for the provider --> 'zxy'
+ """
+ # Remove any file extensions if present and update the 'suffix' parameter
+ # NOTE the file extension gets added again later for the download URL
+ if url . endswith ( ".jpg" ):
+ suffix = "jpg"
+ url = url [: - 4 ] # Remove the last 4 characters (".jpg")
+ elif url . endswith ( ".png" ):
+ suffix = "png"
+ url = url [: - 4 ] # Remove the last 4 characters (".png")
+ else :
+ # FIXME handle other formats for custom TMS
+ suffix = "jpg"
+
+ # If placeholders present, validate they have no additional spaces
+ if "{" in url and "}" in url :
+ pattern = r ".*/\{[zxy]\}/\{[zxy]\}/\{[zxy]\}(?:/|/?)"
+ if not bool ( re . search ( pattern , url )):
+ msg = "Invalid TMS URL format. Please check the URL placeholders {z} / {x} / {y} ."
+ log . error ( msg )
+ raise ValueError ( msg )
+
+ # Remove "{z}/{x}/{y}" placeholders if they are present
+ url = re . sub ( r "/{[xyz]+\}" , "" , url )
+ # Append "%s" to the end of the URL to later add the tile path
+ url = url + r "/ %s "
+
+ if is_oam :
+ # Override dummy OAM URL
+ source = "oam"
+ self . sources [ source ][ "url" ] = url
+ else :
+ source = "custom"
+ tms_params = { "name" : source , "url" : url , "suffix" : suffix , "source" : source , "xy" : is_xy }
+ log . debug ( f "Setting custom TMS with params: { tms_params } " )
+ self . sources [ source ] = tms_params
+
+ # Select the source
+ self . source = source
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
Get the image format of the map tiles.
+
+
+
+
Returns:
+
+
+
+ Type
+ Description
+
+
+
+
+
+ str
+
+
+
+
the upstream source for map tiles.
+
+
+
+
+
+
+
+ Source code in osm_fieldwork/basemapper.py
+def getFormat ( self ):
+ """Get the image format of the map tiles.
+
+ Returns:
+ (str): the upstream source for map tiles.
+ """
+ return self . sources [ self . source ][ "suffix" ]
+
+
+
+
+
+
+
+
+
+
+
+
+
+ getTiles
+
+
+
+
+
+
+
+
Get a list of tiles for the specified zoom level.
+
+
+
+
Parameters:
+
+
+
+ Name
+ Type
+ Description
+ Default
+
+
+
+
+ zoom
+
+ int
+
+
+
+
The Zoom level of the desired map tiles.
+
+
+
+ required
+
+
+
+
+
+
+
+
Returns:
+
+
+
+Name Type
+ Description
+
+
+
+
+int
+ int
+
+
+
+
The total number of map tiles downloaded.
+
+
+
+
+
+
+
+ Source code in osm_fieldwork/basemapper.py
+def getTiles ( self , zoom : int ) -> int :
+ """Get a list of tiles for the specified zoom level.
+
+ Args:
+ zoom (int): The Zoom level of the desired map tiles.
+
+ Returns:
+ int: The total number of map tiles downloaded.
+ """
+ info = get_cpu_info ()
+ cores = info [ "count" ]
+
+ self . tiles = list ( mercantile . tiles ( self . bbox [ 0 ], self . bbox [ 1 ], self . bbox [ 2 ], self . bbox [ 3 ], zoom ))
+ total = len ( self . tiles )
+ log . info ( f " { total } tiles for zoom level { zoom } " )
+
+ mirrors = [ self . sources [ self . source ]]
+ chunk_size = max ( 1 , round ( total / cores ))
+
+ if total < chunk_size or chunk_size == 0 :
+ dlthread ( self . base , mirrors , self . tiles )
+ else :
+ with concurrent . futures . ThreadPoolExecutor ( max_workers = cores ) as executor :
+ futures = []
+ for i in range ( 0 , total , chunk_size ):
+ chunk = self . tiles [ i : i + chunk_size ]
+ futures . append ( executor . submit ( dlthread , self . base , mirrors , chunk ))
+ log . debug ( f "Dispatching Block { i } : { i + chunk_size } " )
+ concurrent . futures . wait ( futures )
+
+ return total
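getTiles splits the tile list into roughly one chunk per CPU core and dispatches each chunk to a download worker. A minimal stdlib-only sketch of that dispatch pattern (the `download` callable and the helper name are ours, standing in for the real `dlthread` helper):

```python
import concurrent.futures

def download_in_chunks(tiles: list, cores: int, download) -> int:
    """Split the tile list into one chunk per core and run downloads concurrently."""
    total = len(tiles)
    chunk_size = max(1, round(total / cores))
    # Contiguous slices of the tile list, one future per slice
    chunks = [tiles[i : i + chunk_size] for i in range(0, total, chunk_size)]
    with concurrent.futures.ThreadPoolExecutor(max_workers=cores) as executor:
        futures = [executor.submit(download, chunk) for chunk in chunks]
        concurrent.futures.wait(futures)
    return total
```

Each worker receives a contiguous slice of the tile list, so every tile is handed out exactly once regardless of thread scheduling.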
+
+tileExists
+
+See if a map tile already exists.
+
+Parameters:
+    tile (MapTile): The map tile to check for the existence of. Required.
+
+Returns:
+    (bool): Whether the tile exists in the map tile cache.
+
+Source code in osm_fieldwork/basemapper.py
+
+def tileExists(
+    self,
+    tile: MapTile,
+):
+    """See if a map tile already exists.
+
+    Args:
+        tile (MapTile): The map tile to check for the existence of
+
+    Returns:
+        (bool): Whether the tile exists in the map tile cache
+    """
+    filespec = f"{self.base}{tile[2]}/{tile[1]}/{tile[0]}.{self.sources[self.source]['suffix']}"
+    if Path(filespec).exists():
+        log.debug("%s exists" % filespec)
+        return True
+    else:
+        log.debug("%s doesn't exist" % filespec)
+        return False
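The cache path checked above follows a z/y/x layout under the base directory. A small sketch of that path construction (the helper name is ours; it assumes tile tuples are ordered (x, y, z), as the indexing in tileExists implies):

```python
from pathlib import Path

def tile_cache_path(base: str, tile: tuple, suffix: str) -> Path:
    """Build the on-disk cache path <base>/<z>/<y>/<x>.<suffix> for a tile."""
    x, y, z = tile
    return Path(base) / str(z) / str(y) / f"{x}.{suffix}"
```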
+
+create_basemap_file
+
+Create a basemap with given parameters.
+
+Parameters:
+    boundary (str | BytesIO): The boundary for the area you want. Default None.
+    tms (str): Custom TMS URL. Default None.
+    xy (bool): Swap the X & Y coordinates when using a custom TMS if True. Default False.
+    outfile (str): Output file name for the basemap. Default None.
+    zooms (str): The zoom levels, specified as a range (e.g. "12-17") or
+        comma-separated levels (e.g. "12,13,14"). Default '12-17'.
+    outdir (str): Output directory name for tile cache. Default None.
+    source (str): Imagery source, one of
+        ["esri", "bing", "topo", "google", "oam", "custom"]. Default 'esri'.
+    append (bool): Whether to append to an existing file. Default False.
+
+Returns:
+    None
+
+Source code in osm_fieldwork/basemapper.py
+
+def create_basemap_file(
+    boundary=None,
+    tms=None,
+    xy=False,
+    outfile=None,
+    zooms="12-17",
+    outdir=None,
+    source="esri",
+    append: bool = False,
+) -> None:
+    """Create a basemap with given parameters.
+
+    Args:
+        boundary (str | BytesIO, optional): The boundary for the area you want.
+        tms (str, optional): Custom TMS URL.
+        xy (bool, optional): Swap the X & Y coordinates when using a
+            custom TMS if True.
+        outfile (str, optional): Output file name for the basemap.
+        zooms (str, optional): The zoom levels, specified as a range
+            (e.g., "12-17") or comma-separated levels (e.g., "12,13,14").
+        outdir (str, optional): Output directory name for tile cache.
+        source (str, optional): Imagery source, one of
+            ["esri", "bing", "topo", "google", "oam", "custom"] (default is "esri").
+        append (bool, optional): Whether to append to an existing file.
+
+    Returns:
+        None
+    """
+    log.debug(
+        "Creating basemap with params: "
+        f"boundary={boundary} | "
+        f"outfile={outfile} | "
+        f"zooms={zooms} | "
+        f"outdir={outdir} | "
+        f"source={source} | "
+        f"tms={tms}"
+    )
+
+    # Validation
+    if not boundary:
+        err = "You need to specify a boundary! (in-memory object or bbox)"
+        log.error(err)
+        raise ValueError(err)
+
+    # Get all the zoom levels we want
+    zoom_levels = list()
+    if zooms:
+        if zooms.find("-") > 0:
+            start = int(zooms.split("-")[0])
+            end = int(zooms.split("-")[1]) + 1
+            x = range(start, end)
+            for i in x:
+                zoom_levels.append(i)
+        elif zooms.find(",") > 0:
+            levels = zooms.split(",")
+            for level in levels:
+                zoom_levels.append(int(level))
+        else:
+            zoom_levels.append(int(zooms))
+
+    if not outdir:
+        base = Path.cwd().absolute()
+    else:
+        base = Path(outdir).absolute()
+
+    # Source / TMS validation
+    if not source and not tms:
+        err = "You need to specify a source!"
+        log.error(err)
+        raise ValueError(err)
+    if source == "oam" and not tms:
+        err = "A TMS URL must be provided for OpenAerialMap!"
+        log.error(err)
+        raise ValueError(err)
+    # A custom TMS provider
+    if source != "oam" and tms:
+        source = "custom"
+
+    tiledir = base / f"{source}tiles"
+    # Make tile download directory
+    tiledir.mkdir(parents=True, exist_ok=True)
+    # Convert to string for other methods
+    tiledir = str(tiledir)
+
+    basemap = BaseMapper(boundary, tiledir, source)
+
+    if tms:
+        # Add TMS URL to sources for download
+        basemap.customTMS(tms, True if source == "oam" else False, xy)
+
+    # Args parsed, main code:
+    tiles = list()
+    for zoom_level in zoom_levels:
+        # Download the tile directory
+        basemap.getTiles(zoom_level)
+        tiles += basemap.tiles
+
+    if not outfile:
+        log.info(f"No outfile specified, tile download finished: {tiledir}")
+        return
+
+    suffix = Path(outfile).suffix.lower()
+    image_format = basemap.sources[source].get("suffix", "jpg")
+    log.debug(f"Basemap output format: {suffix} | Image format: {image_format}")
+
+    if any(substring in suffix for substring in ["sqlite", "mbtiles"]):
+        outf = DataFile(outfile, basemap.getFormat(), append)
+        if suffix == ".mbtiles":
+            outf.addBounds(basemap.bbox)
+            outf.addZoomLevels(zoom_levels)
+        # Create output database and specify image format, png, jpg, or tif
+        outf.writeTiles(tiles, tiledir, image_format)
+
+    elif suffix == ".pmtiles":
+        tile_dir_to_pmtiles(outfile, tiledir, basemap.bbox, image_format, zoom_levels, source)
+
+    else:
+        msg = f"Format {suffix} not supported"
+        log.error(msg)
+        raise ValueError(msg) from None
+    log.info(f"Wrote {outfile}")
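The zoom handling in create_basemap_file accepts either a hyphenated range or a comma-separated list. A stdlib-only sketch of that parsing (the helper name is ours, not part of the API):

```python
def parse_zoom_levels(zooms: str) -> list:
    """Expand a zoom spec like "12-17" or "12,13,14" into a list of ints."""
    if "-" in zooms:
        start, end = zooms.split("-")
        return list(range(int(start), int(end) + 1))  # range is inclusive
    if "," in zooms:
        return [int(level) for level in zooms.split(",")]
    return [int(zooms)]  # a single zoom level
```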
+
+tile_dir_to_pmtiles
+
+Write a PMTiles archive from tiles in the specified directory.
+
+Parameters:
+    outfile (str): The output PMTiles archive file path. Required.
+    tile_dir (str | Path): The directory containing the tile images. Required.
+    bbox (tuple): Bounding box in format (min_lon, min_lat, max_lon, max_lat). Required.
+    image_format (str): The image format of the tiles. Required.
+    zoom_levels (list[int]): The zoom levels to include. Required.
+    attribution (str): Attribution string to include in the PMTiles archive. Required.
+
+Returns:
+    None
+
+Source code in osm_fieldwork/basemapper.py
+
+def tile_dir_to_pmtiles(
+    outfile: str,
+    tile_dir: str | Path,
+    bbox: tuple,
+    image_format: str,
+    zoom_levels: list[int],
+    attribution: str,
+):
+    """Write a PMTiles archive from tiles in the specified directory.
+
+    Args:
+        outfile (str): The output PMTiles archive file path.
+        tile_dir (str | Path): The directory containing the tile images.
+        bbox (tuple): Bounding box in format (min_lon, min_lat, max_lon, max_lat).
+        image_format (str): The image format of the tiles.
+        zoom_levels (list[int]): The zoom levels to include.
+        attribution (str): Attribution string to include in the PMTiles archive.
+
+    Returns:
+        None
+    """
+    tile_dir = Path(tile_dir)
+
+    # Abort if no files are present
+    first_file = next((file for file in tile_dir.rglob("*.*") if file.is_file()), None)
+    if not first_file:
+        err = "No tile files found in the specified directory. Aborting PMTile creation."
+        log.error(err)
+        raise ValueError(err)
+
+    tile_format = image_format.upper()
+    # NOTE JPEG exception / flexible extension (.jpg, .jpeg)
+    if tile_format == "JPG":
+        tile_format = "JPEG"
+    log.debug(f"PMTile determined internal file format: {tile_format}")
+    possible_tile_formats = [f".{e.name.lower()}" for e in PMTileType]
+    possible_tile_formats.append(".jpg")
+    possible_tile_formats.remove(".unknown")
+
+    with open(outfile, "wb") as pmtile_file:
+        writer = PMTileWriter(pmtile_file)
+
+        for tile_path in tile_dir.rglob("*"):
+            if tile_path.is_file() and tile_path.suffix.lower() in possible_tile_formats:
+                tile_id = tileid_from_zyx_dir_path(tile_path)
+
+                with open(tile_path, "rb") as tile:
+                    writer.write_tile(tile_id, tile.read())
+
+        min_lon, min_lat, max_lon, max_lat = bbox
+        log.debug(
+            f"Writing PMTiles file with min_zoom ({zoom_levels[0]}) "
+            f"max_zoom ({zoom_levels[-1]}) bbox ({bbox}) tile_compression None"
+        )
+
+        # Write PMTile metadata
+        writer.finalize(
+            header={
+                "tile_type": PMTileType[tile_format],
+                "tile_compression": PMTileCompression.NONE,
+                "min_zoom": zoom_levels[0],
+                "max_zoom": zoom_levels[-1],
+                "min_lon_e7": int(min_lon * 10000000),
+                "min_lat_e7": int(min_lat * 10000000),
+                "max_lon_e7": int(max_lon * 10000000),
+                "max_lat_e7": int(max_lat * 10000000),
+                "center_zoom": zoom_levels[0],
+                "center_lon_e7": int((min_lon + ((max_lon - min_lon) / 2)) * 10000000),
+                "center_lat_e7": int((min_lat + ((max_lat - min_lat) / 2)) * 10000000),
+            },
+            metadata={"attribution": f"© {attribution}"},
+        )
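The PMTiles header stores coordinates as fixed-point integers scaled by 1e7 (the *_e7 fields above). A short sketch of that conversion (the helper names are ours):

```python
def to_e7(degrees: float) -> int:
    """Scale a longitude/latitude in degrees to the integer e7 representation."""
    return int(degrees * 10_000_000)

def bbox_center(bbox: tuple) -> tuple:
    """Midpoint of a (min_lon, min_lat, max_lon, max_lat) bounding box, in degrees."""
    min_lon, min_lat, max_lon, max_lat = bbox
    return (min_lon + (max_lon - min_lon) / 2, min_lat + (max_lat - min_lat) / 2)
```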
+
+tileid_from_zyx_dir_path
+
+Helper function to get the tile id from a tile in an xyz (zyx) directory structure.
+
+TMS typically has structure z/y/x.png. If the --xy flag was used previously,
+the TMS was downloaded into directories of z/y/x structure from their z/x/y URL.
+
+Parameters:
+    filepath (Union[Path, str]): The path to the tile image within the xyz
+        directory. Required.
+
+Returns:
+    (int): The globally defined tile id from the xyz definition.
+
+Source code in osm_fieldwork/basemapper.py
+
+def tileid_from_zyx_dir_path(filepath: Union[Path, str]) -> int:
+    """Helper function to get the tile id from a tile in an xyz (zyx) directory structure.
+
+    TMS typically has structure z/y/x.png
+    If the --xy flag was used previously, the TMS was downloaded into
+    directories of z/y/x structure from their z/x/y URL.
+
+    Args:
+        filepath (Union[Path, str]): The path to the tile image within the xyz directory.
+
+    Returns:
+        int: The globally defined tile id from the xyz definition.
+    """
+    # Extract the final 3 parts from the TMS file path
+    tile_image_path = Path(filepath).parts[-3:]
+
+    try:
+        final_tile = int(Path(tile_image_path[-1]).stem)
+    except ValueError as e:
+        msg = f"Invalid tile path (cannot parse as int): {str(tile_image_path)}"
+        log.error(msg)
+        raise ValueError(msg) from e
+
+    x = final_tile
+    z, y = map(int, tile_image_path[:-1])
+
+    return zxy_to_tileid(z, x, y)
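The path parsing above can be isolated with just pathlib; `zxy_to_tileid` itself comes from the pmtiles library and is not reproduced here (the helper name is ours):

```python
from pathlib import Path

def zyx_from_path(filepath) -> tuple:
    """Parse (z, y, x) from a cache path laid out as .../z/y/x.<ext>."""
    z_dir, y_dir, name = Path(filepath).parts[-3:]
    return int(z_dir), int(y_dir), int(Path(name).stem)
```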
+
+ Last update:
+ October 18, 2024
\ No newline at end of file
diff --git a/api/convert/index.html b/api/convert/index.html
new file mode 100644
index 000000000..88fc4e317
--- /dev/null
+++ b/api/convert/index.html
@@ -0,0 +1,3135 @@
+ convert - osm-fieldwork
+
+convert.py
+
+Convert
+
+Bases: YamlFile
+
+A class to apply a YAML config file and convert ODK to OSM.
+
+Returns:
+    (Convert): An instance of this object
+
+Source code in osm_fieldwork/convert.py
+
+def __init__(
+    self,
+    xform: str = None,
+):
+    path = xlsforms_path.replace("xlsforms", "")
+    if xform is not None:
+        file = xform
+    else:
+        file = f"{path}/xforms.yaml"
+    self.yaml = YamlFile(file)
+    self.filespec = file
+    # Parse the file contents into a data structure to make it
+    # easier to retrieve values
+    self.convert = dict()
+    self.ignore = list()
+    self.private = list()
+    self.defaults = dict()
+    self.entries = dict()
+    self.types = dict()
+    self.saved = dict()
+    for item in self.yaml.yaml["convert"]:
+        key = list(item.keys())[0]
+        value = item[key]
+        # print("ZZZZ: %r, %r" % (key, value))
+        if type(value) is str:
+            self.convert[key] = value
+        elif type(value) is list:
+            vals = dict()
+            for entry in value:
+                if type(entry) is str:
+                    # epdb.st()
+                    tag = entry
+                else:
+                    tag = list(entry.keys())[0]
+                vals[tag] = entry[tag]
+            self.convert[key] = vals
+    self.ignore = self.yaml.yaml["ignore"]
+    self.private = self.yaml.yaml["private"]
+    if "multiple" in self.yaml.yaml:
+        self.multiple = self.yaml.yaml["multiple"]
+    else:
+        self.multiple = list()
+
+privateData
+
+Search the private data category for a keyword.
+
+Parameters:
+    keyword (str): The keyword to search for. Required.
+
+Returns:
+    (bool): If the keyword is in the private data section
+
+Source code in osm_fieldwork/convert.py
+
+def privateData(
+    self,
+    keyword: str,
+) -> bool:
+    """Search the private data category for a keyword.
+
+    Args:
+        keyword (str): The keyword to search for
+
+    Returns:
+        (bool): If the keyword is in the private data section
+    """
+    return keyword.lower() in self.private
+
+convertData
+
+Search the convert data category for a keyword.
+
+Parameters:
+    keyword (str): The keyword to search for. Required.
+
+Returns:
+    (bool): If the keyword is in the convert data section
+
+Source code in osm_fieldwork/convert.py
+
+def convertData(
+    self,
+    keyword: str,
+) -> bool:
+    """Search the convert data category for a keyword.
+
+    Args:
+        keyword (str): The keyword to search for
+
+    Returns:
+        (bool): If the keyword is in the convert data section
+    """
+    return keyword.lower() in self.convert
+
+ignoreData
+
+Search the ignore data category for a keyword.
+
+Parameters:
+    keyword (str): The keyword to search for. Required.
+
+Returns:
+    (bool): If the keyword is in the ignore data section
+
+Source code in osm_fieldwork/convert.py
+
+def ignoreData(
+    self,
+    keyword: str,
+) -> bool:
+    """Search the ignore data category for a keyword.
+
+    Args:
+        keyword (str): The keyword to search for
+
+    Returns:
+        (bool): If the keyword is in the ignore data section
+    """
+    return keyword.lower() in self.ignore
+
+getKeyword
+
+Get the keyword for a value from the yaml file.
+
+Parameters:
+    value (str): The value to find the keyword for. Required.
+
+Returns:
+    (str): The keyword if found, or None
+
+Source code in osm_fieldwork/convert.py
+
+def getKeyword(
+    self,
+    value: str,
+) -> str:
+    """Get the keyword for a value from the yaml file.
+
+    Args:
+        value (str): The value to find the keyword for
+
+    Returns:
+        (str): The keyword if found, or None
+    """
+    key = self.yaml.yaml(value)
+    if type(key) == bool:
+        return value
+    if len(key) == 0:
+        key = self.yaml.getKeyword(value)
+    return key
+
+getValues
+
+getValues(keyword=None)
+
+Get the values for a primary key.
+
+Parameters:
+    keyword (str): The keyword to get the value of. Default None.
+
+Returns:
+    (str): The values or None
+
+Source code in osm_fieldwork/convert.py
+
+def getValues(
+    self,
+    keyword: str = None,
+) -> str:
+    """Get the values for a primary key.
+
+    Args:
+        keyword (str): The keyword to get the value of
+
+    Returns:
+        (str): The values or None
+    """
+    if keyword is not None:
+        if keyword in self.convert:
+            return self.convert[keyword]
+        else:
+            return None
+
+convertEntry
+
+convertEntry(tag, value)
+
+Convert a tag and value from the ODK representation to an OSM one.
+
+Parameters:
+    tag (str): The tag from the ODK XML file. Required.
+    value (str): The value from the ODK XML file. Required.
+
+Returns:
+    (list): The converted values
+
+Source code in osm_fieldwork/convert.py
+
+def convertEntry(
+    self,
+    tag: str,
+    value: str,
+) -> list:
+    """Convert a tag and value from the ODK representation to an OSM one.
+
+    Args:
+        tag (str): The tag from the ODK XML file
+        value (str): The value from the ODK XML file
+
+    Returns:
+        (list): The converted values
+    """
+    all = list()
+
+    # If it's not in any conversion data, pass it through unchanged.
+    if tag.lower() in self.ignore:
+        # logging.debug(f"FIXME: Ignoring {tag}")
+        return None
+    low = tag.lower()
+    if value is None:
+        return low
+
+    if low not in self.convert and low not in self.ignore and low not in self.private:
+        return {tag: value}
+
+    newtag = tag.lower()
+    newval = value
+    # If the tag is in the config file, convert it.
+    if self.convertData(newtag):
+        newtag = self.convertTag(newtag)
+        # if newtag != tag:
+        #     logging.debug(f"Converted Tag for entry {tag} to {newtag}")
+
+    # Truncate the elevation, as it's really long
+    if newtag == "ele":
+        value = value[:7]
+    newval = self.convertValue(newtag, value)
+    # logging.debug("Converted Value for entry '%s' to '%s'" % (value, newval))
+    # There can be multiple new tag/value pairs for some values from ODK
+    if type(newval) == str:
+        all.append({newtag: newval})
+    elif type(newval) == list:
+        for entry in newval:
+            if type(entry) == str:
+                all.append({newtag: newval})
+            elif type(entry) == dict:
+                for k, v in entry.items():
+                    all.append({k: v})
+    return all
+
+convertValue
+
+convertValue(tag, value)
+
+Convert a single tag value.
+
+Parameters:
+    tag (str): The tag from the ODK XML file. Required.
+    value (str): The value from the ODK XML file. Required.
+
+Returns:
+    (list): The converted values
+
+Source code in osm_fieldwork/convert.py
+
+def convertValue(
+    self,
+    tag: str,
+    value: str,
+) -> list:
+    """Convert a single tag value.
+
+    Args:
+        tag (str): The tag from the ODK XML file
+        value (str): The value from the ODK XML file
+
+    Returns:
+        (list): The converted values
+    """
+    all = list()
+
+    vals = self.getValues(tag)
+    # There is no conversion data for this tag
+    if vals is None:
+        return value
+
+    if type(vals) is dict:
+        if value not in vals:
+            all.append({tag: value})
+            return all
+        if type(vals[value]) is bool:
+            entry = dict()
+            if vals[value]:
+                entry[tag] = "yes"
+            else:
+                entry[tag] = "no"
+            all.append(entry)
+            return all
+        for item in vals[value].split(","):
+            entry = dict()
+            tmp = item.split("=")
+            if len(tmp) == 1:
+                entry[tag] = vals[value]
+            else:
+                entry[tmp[0]] = tmp[1]
+            logging.debug("\tValue %s converted value to %s" % (value, entry))
+            all.append(entry)
+    return all
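convertValue turns a comma-separated "tag=value" string from the YAML config into a list of one-entry OSM tag dictionaries. A stdlib-only sketch of that splitting, slightly simplified from the method above (the helper name is ours):

```python
def split_osm_pairs(spec: str, default_tag: str) -> list:
    """Split a "a=b,c=d" config value into [{"a": "b"}, {"c": "d"}].

    Bare values keep the original tag, e.g. "yes" with tag "building"
    becomes [{"building": "yes"}].
    """
    out = []
    for item in spec.split(","):
        parts = item.split("=")
        if len(parts) == 1:
            out.append({default_tag: parts[0]})
        else:
            out.append({parts[0]: parts[1]})
    return out
```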
+
+convertTag
+
+Convert a single tag.
+
+Parameters:
+    tag (str): The tag from the ODK XML file. Required.
+
+Returns:
+    (str): The new tag
+
+Source code in osm_fieldwork/convert.py
+
+def convertTag(
+    self,
+    tag: str,
+) -> str:
+    """Convert a single tag.
+
+    Args:
+        tag (str): The tag from the ODK XML file
+
+    Returns:
+        (str): The new tag
+    """
+    low = tag.lower()
+    if low in self.convert:
+        newtag = self.convert[low]
+        if type(newtag) is str:
+            # logging.debug("\tTag '%s' converted tag to '%s'" % (tag, newtag))
+            tmp = newtag.split("=")
+            if len(tmp) > 1:
+                newtag = tmp[0]
+        elif type(newtag) is list:
+            logging.error("FIXME: list()")
+            # epdb.st()
+            return low
+        elif type(newtag) is dict:
+            # logging.error("FIXME: dict()")
+            return low
+        return newtag.lower()
+    else:
+        logging.debug(f"Not in convert!: {low}")
+        return low
+
+convertMultiple
+
+Convert multiple tags from a select_multiple question.
+
+Parameters:
+    value (str): The tags from the ODK XML file. Required.
+
+Returns:
+    (list): The new tags
+
+Source code in osm_fieldwork/convert.py
+
+def convertMultiple(
+    self,
+    value: str,
+) -> list:
+    """Convert multiple tags from a select_multiple question.
+
+    Args:
+        value (str): The tags from the ODK XML file
+
+    Returns:
+        (list): The new tags
+    """
+    tags = dict()
+    for tag in value.split(" "):
+        low = tag.lower()
+        if self.convertData(low):
+            newtag = self.convert[low]
+            if newtag.find("=") > 0:
+                tmp = newtag.split("=")
+                if tmp[0] in tags:
+                    tags[tmp[0]] = f"{tags[tmp[0]]};{tmp[1]}"
+                else:
+                    tags.update({tmp[0]: tmp[1]})
+        else:
+            tags.update({low: "yes"})
+    # logging.debug(f"\tConverted multiple to {tags}")
+    return tags
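A select_multiple answer arrives as a single space-separated string, and repeated OSM keys are joined with ";" as is conventional in OSM. A stdlib-only sketch of that merge using a plain dict for the conversion table (the helper name is ours):

```python
def merge_multiple(values: str, convert: dict) -> dict:
    """Split a space-separated select_multiple answer and merge converted
    "tag=value" pairs, joining values for a repeated key with ";"."""
    tags = {}
    for raw in values.split(" "):
        low = raw.lower()
        newtag = convert.get(low)
        if newtag and "=" in newtag:
            key, val = newtag.split("=")
            tags[key] = f"{tags[key]};{val}" if key in tags else val
        else:
            # No conversion entry: treat the choice as a boolean OSM tag
            tags[low] = "yes"
    return tags
```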
+
+parseXLS
+
+Parse the source XLSFile if available to look for details we need.
+
+Source code in osm_fieldwork/convert.py
+
+def parseXLS(
+    self,
+    xlsfile: str,
+):
+    """Parse the source XLSFile if available to look for details we need."""
+    if xlsfile is not None and len(xlsfile) > 0:
+        self.entries = pd.read_excel(xlsfile, sheet_name=[0])[0]
+        # There will only be a single sheet
+        names = self.entries["name"]
+        defaults = self.entries["default"]
+        i = 0
+        while i < len(self.entries):
+            if type(self.entries["type"][i]) == float:
+                self.types[self.entries["name"][i]] = None
+            else:
+                self.types[self.entries["name"][i]] = self.entries["type"][i].split(" ")[0]
+            i += 1
+        total = len(names)
+        i = 0
+        while i < total:
+            entry = defaults[i]
+            if str(entry) != "nan":
+                pat = re.compile("..last-saved.*")
+                if pat.match(entry):
+                    name = entry.split("#")[1][:-1]
+                    self.saved[name] = None
+                else:
+                    self.defaults[names[i]] = entry
+            i += 1
+    return True
+
+createEntry
+
+Create the feature data structure.
+
+Parameters:
+    entry (dict): The feature data. Required.
+
+Returns:
+    (dict): The OSM data structure for this entry from the json file
+
+Source code in osm_fieldwork/convert.py
+
+def createEntry(
+    self,
+    entry: dict,
+) -> dict:
+    """Create the feature data structure.
+
+    Args:
+        entry (dict): The feature data
+
+    Returns:
+        (dict): The OSM data structure for this entry from the json file
+    """
+    feature = dict()
+    attrs = dict()
+    tags = dict()
+    priv = dict()
+    refs = list()
+
+    # log.debug("Creating entry")
+    # First convert the tag to the approved OSM equivalent
+    if "lat" in entry and "lon" in entry:
+        attrs["lat"] = entry["lat"]
+        attrs["lon"] = entry["lon"]
+    for key, value in entry.items():
+        attributes = (
+            "id",
+            "timestamp",
+            "lat",
+            "lon",
+            "uid",
+            "user",
+            "version",
+            "action",
+        )
+
+        if key in self.ignore:
+            continue
+        # When using existing OSM data, there's a special geometry field.
+        # Otherwise use the GPS coordinates where you are.
+        if key == "geometry" and len(value) > 0:
+            geometry = value.split(" ")
+            if len(geometry) == 4:
+                attrs["lat"] = geometry[0]
+                attrs["lon"] = geometry[1]
+            continue
+
+        if key is not None and len(key) > 0 and key in attributes:
+            attrs[key] = value
+            # log.debug("Adding attribute %s with value %s" % (key, value))
+            continue
+        if value is not None and value != "no" and value != "unknown":
+            if key == "username":
+                tags["user"] = value
+                continue
+            items = self.convertEntry(key, value)
+            if key in self.types:
+                if self.types[key] == "select_multiple":
+                    vals = self.convertMultiple(value)
+                    if len(vals) > 0:
+                        for tag in vals:
+                            tags.update(tag)
+                    continue
+            if key == "track" or key == "geoline":
+                # log.debug("Adding reference %s" % tags)
+                refs = value.split(";")
+            elif type(value) != str:
+                if self.privateData(key):
+                    priv[key] = str(value)
+                else:
+                    tags[key] = str(value)
+            elif len(value) > 0:
+                if self.privateData(key):
+                    priv[key] = value
+                else:
+                    tags[key] = value
+    feature["attrs"] = attrs
+    if len(tags) > 0:
+        # logging.debug(f"TAGS: {tags}")
+        feature["tags"] = tags
+    if len(refs) > 1:
+        feature["refs"] = refs
+    if len(priv) > 0:
+        feature["private"] = priv
+
+    return feature
+
+dump
+
+Dump internal data structures, for debugging purposes only.
+
+Source code in osm_fieldwork/convert.py
+
+def dump(self):
+    """Dump internal data structures, for debugging purposes only."""
+    print("YAML file: %s" % self.filespec)
+    print("Convert section")
+    for key, val in self.convert.items():
+        if type(val) is list:
+            print("\tTag %s is" % key)
+            for data in val:
+                print("\t\t%r" % data)
+        else:
+            print("\tTag %s is %s" % (key, val))
+
+    print("Ignore Section")
+    for item in self.ignore:
+        print(f"\tIgnoring tag {item}")
+
+
\ No newline at end of file
diff --git a/api/filter_data/index.html b/api/filter_data/index.html
new file mode 100644
index 000000000..2ba490468
--- /dev/null
+++ b/api/filter_data/index.html
@@ -0,0 +1,1818 @@
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+ filter_data - osm-fieldwork
+
+filter_data.py
+
+FilterData
+
+Bases: object
+
+Returns:
+    (FilterData): An instance of this object
+
+Source code in osm_fieldwork/filter_data.py
+
+def __init__(
+    self,
+    filespec: str = None,
+    config: QueryConfig = None,
+):
+    """Args:
+        filespec (str): The optional data file to read.
+
+    Returns:
+        (FilterData): An instance of this object
+    """
+    self.tags = dict()
+    self.qc = config
+    if filespec and config:
+        self.parse(filespec, config)
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+ parse
+
+
+
+
parse ( filespec , config )
+
+
+
+
+
Read in the XLSForm and extract the data we want.
+
+
+
+
Parameters:
+
+
+
+ Name
+ Type
+ Description
+ Default
+
+
+
+
+ filespec
+
+ str
+
+
+
+
The filespec to the XLSForm file
+
+
+
+ required
+
+
+
+
+
+
+
+
Returns:
+
+
+
+Name Type
+ Description
+
+
+
+
+title
+ str
+
+
+
+
The title from the XLSForm Setting sheet
+
+
+
+
+extract
+ str
+
+
+
+
The data extract filename from the XLSForm Survey sheet
+
+
+
+
+
+
+
+ Source code in osm_fieldwork/filter_data.py
+ def parse (
+ self ,
+ filespec : str ,
+ config : QueryConfig ,
+):
+ """Read in the XLSForm and extract the data we want.
+
+ Args:
+ filespec (str): The filespec to the XLSForm file
+
+ Returns:
+ title (str): The title from the XLSForm settings sheet
+ extract (str): The data extract filename from the XLSForm Survey sheet
+ """
+ if config :
+ self . qc = config
+ excel_object = pd . ExcelFile ( filespec )
+ entries = excel_object . parse ( sheet_name = [ 0 , 1 , 2 ], index_col = 0 , usecols = [ 0 , 1 , 2 ])
+ entries = pd . read_excel ( filespec , sheet_name = [ 0 , 1 , 2 ])
+ title = entries [ 2 ][ "form_title" ] . to_list ()[ 0 ]
+ extract = ""
+ for entry in entries [ 0 ][ "type" ]:
+ if str ( entry ) == "nan" :
+ continue
+ if entry [: 20 ] == "select_one_from_file" :
+ extract = entry [ 21 :]
+ log . info ( f 'Got data extract filename: " { extract } ", title: " { title } "' )
+ else :
+ extract = "none"
+ total = len ( entries [ 1 ][ "list_name" ])
+ index = 1
+ while index < total :
+ key = entries [ 1 ][ "list_name" ][ index ]
+ if key == "model" or str ( key ) == "nan" :
+ index += 1
+ continue
+ value = entries [ 1 ][ "name" ][ index ]
+ if value == "<text>" or str ( value ) == "null" :
+ index += 1
+ continue
+ if key not in self . tags :
+ self . tags [ key ] = list ()
+ self . tags [ key ] . append ( value )
+ index += 1
+
+ # The yaml config file for the query has a list of columns
+ # to keep in addition to this default set. These wind up
+ # in the SELECT
+ keep = (
+ "name" ,
+ "name:en" ,
+ "id" ,
+ "operator" ,
+ "addr:street" ,
+ "addr:housenumber" ,
+ "osm_id" ,
+ "title" ,
+ "tags" ,
+ "label" ,
+ "landuse" ,
+ "opening_hours" ,
+ "tourism" ,
+ )
+ self . keep = list ( keep )
+ if "keep" in config . config :
+ self . keep . extend ( config . config [ "keep" ])
+
+ return title , extract
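
The `select_one_from_file` handling above can be sketched standalone; the survey rows below are invented examples, not taken from the library:

```python
# Sketch of how parse() pulls the data-extract filename out of the
# "type" column of an XLSForm survey sheet (sample rows are hypothetical).
def extract_filename(type_entries):
    extract = "none"
    for entry in type_entries:
        text = str(entry)
        # Rows like "select_one_from_file camp.geojson" name the file;
        # the keyword is 20 characters, the filename follows a space.
        if text[:20] == "select_one_from_file":
            extract = text[21:]
    return extract

rows = ["text", "select_one_from_file camp_features.geojson", "integer"]
print(extract_filename(rows))
```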
+
+
+
+
+
+
+
+
+
+
+
+
+
+ cleanData
+
+
+
+
+
+
+
+
Filter out any data not in the data_model.
+
+
+
+
Parameters:
+
+
+
+ Name
+ Type
+ Description
+ Default
+
+
+
+
+ data
+
+ bytes
+
+
+
+
The input data or filespec to the input data file
+
+
+
+ required
+
+
+
+
+
+
+
+
Returns:
+
+
+
+ Type
+ Description
+
+
+
+
+
+ FeatureCollection
+
+
+
+
+
+
+
+
+
+ Source code in osm_fieldwork/filter_data.py
+ def cleanData (
+ self ,
+ data ,
+):
+ """Filter out any data not in the data_model.
+
+ Args:
+ data (bytes): The input data or filespec to the input data file
+
+ Returns:
+ (FeatureCollection): The modified data
+
+ """
+ log . debug ( "Cleaning data..." )
+ if type ( data ) == str :
+ outfile = open ( f "new- { data } " , "x" )
+ infile = open ( data , "r" )
+ indata = geojson . load ( infile )
+ elif type ( data ) == bytes :
+ indata = eval ( data . decode ())
+ else :
+ indata = data
+ # these just create noise in the log file
+ ignore = (
+ "timestamp" ,
+ "version" ,
+ "changeset" ,
+ )
+ keep = ( "osm_id" , "id" , "version" )
+ collection = list ()
+ for feature in indata [ "features" ]:
+ # log.debug(f"FIXME0: {feature}")
+ properties = dict ()
+ for key , value in feature [ "properties" ] . items ():
+ # log.debug(f"{key} = {value}")
+ # FIXME: this is a hack!
+ if True :
+ if key == "tags" :
+ for k , v in value . items ():
+ if k [: 4 ] == "name" :
+ properties [ "title" ] = value [ k ]
+ properties [ "label" ] = value [ k ]
+ else :
+ properties [ k ] = v
+ else :
+ if key == "osm_id" :
+ properties [ "id" ] = value
+ properties [ "title" ] = value
+ properties [ "label" ] = value
+ else :
+ properties [ key ] = value
+ if key [: 4 ] == "name" :
+ properties [ "title" ] = value
+ properties [ "label" ] = value
+ else :
+ log . debug ( f "FIXME2: { key } = { value } " )
+ if key in keep :
+ properties [ key ] = value
+ continue
+ if key in self . tags :
+ if key == "name" or key == "name:en" :
+ properties [ "title" ] = self . tags [ key ]
+ properties [ "label" ] = self . tags [ key ]
+ if value in self . tags [ key ]:
+ properties [ key ] = value
+ else :
+ if value != "yes" :
+ log . warning ( f "Value { value } not in the data model!" )
+ continue
+ else :
+ if key in ignore :
+ continue
+ log . warning ( f "Tag { key } not in the data model!" )
+ continue
+ if "title" not in properties :
+ properties [ "label" ] = properties [ "id" ]
+ properties [ "title" ] = properties [ "id" ]
+ newfeature = Feature ( geometry = feature [ "geometry" ], properties = properties )
+ collection . append ( newfeature )
+ if type ( data ) == str :
+ geojson . dump ( FeatureCollection ( collection ), outfile )
+ return FeatureCollection ( collection )
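
The core of the filtering above is a membership test against the data model. A minimal standalone sketch, using an invented data model and feature rather than a real XLSForm:

```python
# Keep only properties whose key is in the always-keep set, or whose
# key/value pair appears in the data model (both invented here).
tags = {"amenity": ["restaurant", "cafe"], "tourism": ["hotel"]}
keep = ("osm_id", "id", "version")

def filter_properties(properties):
    cleaned = {}
    for key, value in properties.items():
        if key in keep:
            cleaned[key] = value
        elif key in tags and value in tags[key]:
            cleaned[key] = value
        # anything else (e.g. changeset metadata) is dropped
    return cleaned

props = {"osm_id": 1234, "amenity": "cafe", "changeset": 99}
print(filter_properties(props))
```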
+
+
+
+
+
+
+
+
+
+
+
+
+
\ No newline at end of file
diff --git a/api/make_data_extract/index.html b/api/make_data_extract/index.html
new file mode 100644
index 000000000..e17962002
--- /dev/null
+++ b/api/make_data_extract/index.html
@@ -0,0 +1,1798 @@
+ make_data_extract - osm-fieldwork
+
Get the categories and associated XLSFiles from the config file.
+
+
+
+
Returns:
+
+
+
+ Type
+ Description
+
+
+
+
+
+ list
+
+
+
+
A list of the XLSForms included in osm-fieldwork
+
+
+
+
+
+
+
+ Source code in osm_fieldwork/make_data_extract.py
+ def getChoices ():
+ """Get the categories and associated XLSFiles from the config file.
+
+ Returns:
+ (list): A list of the XLSForms included in osm-fieldwork
+ """
+ data = dict ()
+ if os . path . exists ( f " { data_models_path } /category.yaml" ):
+ file = open ( f " { data_models_path } /category.yaml" , "r" ) . read ()
+ contents = yaml . load ( file , Loader = yaml . Loader )
+ for entry in contents :
+ [[ k , v ]] = entry . items ()
+ data [ k ] = v [ 0 ]
+ return data
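
The category.yaml parsing boils down to unpacking single-key dicts. The `contents` list below stands in for the `yaml.load()` output; the category names are invented examples:

```python
# Each YAML entry is a one-key mapping of category -> [xlsfile, ...];
# the [[k, v]] pattern unpacks that single pair directly.
contents = [{"buildings": ["buildings.xls"]}, {"camping": ["camping.xls"]}]

data = {}
for entry in contents:
    [[k, v]] = entry.items()  # exactly one key per entry
    data[k] = v[0]            # keep the first (primary) XLSForm
print(data)
```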
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+ Bases: object
+
+
+
Class to handle SQL queries for the categories.
+
+
+
+
+
Parameters:
+
+
+
+ Name
+ Type
+ Description
+ Default
+
+
+
+
+ dburi
+
+ str
+
+
+
+
The URI string for the database connection
+
+
+
+ required
+
+
+
+ config
+
+ str
+
+
+
+
The filespec for the query config file
+
+
+
+ required
+
+
+
+ xlsfile
+
+ str
+
+
+
+
The filespec for the XLSForm file
+
+
+
+ required
+
+
+
+
+
+
+
+
Returns:
+
+
+
+ Type
+ Description
+
+
+
+
+
+ MakeExtract
+
+
+
+
An instance of this object
+
+
+
+
+
+
+
+ Source code in osm_fieldwork/make_data_extract.py
+ def __init__ (
+ self ,
+ dburi : str ,
+ config : str ,
+ xlsfile : str ,
+):
+ """Initialize the postgres handler.
+
+ Args:
+ dburi (str): The URI string for the database connection
+ config (str): The filespec for the query config file
+ xlsfile (str): The filespec for the XLSForm file
+
+ Returns:
+ (MakeExtract): An instance of this object
+ """
+ self . db = PostgresClient ( dburi , f " { data_models_path } / { config } .yaml" )
+
+ # Read in the XLSFile
+ if "/" in xlsfile :
+ file = open ( xlsfile , "rb" )
+ else :
+ file = open ( f " { xlsforms_path } / { xlsfile } " , "rb" )
+ self . xls = BytesIO ( file . read ())
+ self . config = QueryConfig ( config )
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
getFeatures ( boundary , polygon )
+
+
+
+
+
Extract features from Postgres.
+
+
+
+
Parameters:
+
+
+
+ Name
+ Type
+ Description
+ Default
+
+
+
+
+ boundary
+
+ str
+
+
+
+
The filespec for the project AOI in GeoJson format
+
+
+
+ required
+
+
+
+ filespec
+
+ str
+
+
+
+
The optional output file for the query
+
+
+
+ required
+
+
+
+ polygon
+
+ bool
+
+
+
+
Whether to return the full geometry or just centroids
+
+
+
+ required
+
+
+
+
+
+
+
+
Returns:
+
+
+
+ Type
+ Description
+
+
+
+
+
+ FeatureCollection
+
+
+
+
The features returned from the query
+
+
+
+
+
+
+
+ Source code in osm_fieldwork/make_data_extract.py
+ def getFeatures (
+ self ,
+ boundary : FeatureCollection ,
+ polygon : bool ,
+):
+ """Extract features from Postgres.
+
+ Args:
+ boundary (str): The filespec for the project AOI in GeoJson format
+ filespec (str): The optional output file for the query
+ polygon (bool): Whether to return the full geometry or just centroids
+
+ Returns:
+ (FeatureCollection): The features returned from the query
+ """
+ log . info ( "Extracting features from Postgres..." )
+
+ if "features" in boundary :
+ poly = boundary [ "features" ][ 0 ][ "geometry" ]
+ else :
+ poly = boundary [ "geometry" ]
+ shape ( poly )
+
+ collection = self . db . execQuery ( boundary , None , False )
+ if not collection :
+ return None
+
+ return collection
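
The boundary-unwrapping step above accepts either a FeatureCollection or a bare Feature. Sketched with plain dicts (the GeoJSON content is invented):

```python
# A FeatureCollection wraps its geometry one level deeper than a Feature;
# both cases resolve to the same geometry dict.
def unwrap_boundary(boundary):
    if "features" in boundary:
        return boundary["features"][0]["geometry"]
    return boundary["geometry"]

fc = {"type": "FeatureCollection",
      "features": [{"geometry": {"type": "Point", "coordinates": [0.0, 1.0]}}]}
feat = {"type": "Feature",
        "geometry": {"type": "Point", "coordinates": [0.0, 1.0]}}
print(unwrap_boundary(fc) == unwrap_boundary(feat))
```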
+
+
+
+
+
+
+
+
+
+
+
+
+
+
cleanFeatures ( collection )
+
+
+
+
+
Filter out any data not in the data_model.
+
+
+
+
Parameters:
+
+
+
+ Name
+ Type
+ Description
+ Default
+
+
+
+
+ collection
+
+ FeatureCollection
+
+
+
+
The input collection of features
+
+
+
+ required
+
+
+
+
+
+
+
+
Returns:
+
+
+
+ Type
+ Description
+
+
+
+
+
+ FeatureCollection
+
+
+
+
+
+
+
+
+
+ Source code in osm_fieldwork/make_data_extract.py
+ def cleanFeatures (
+ self ,
+ collection : FeatureCollection ,
+):
+ """Filter out any data not in the data_model.
+
+ Args:
+ collection (FeatureCollection): The input collection of features
+
+ Returns:
+ (FeatureCollection): The modified data
+
+ """
+ log . debug ( "Cleaning features" )
+ cleaned = FilterData ()
+ cleaned . parse ( self . xls , self . config )
+ new = cleaned . cleanData ( collection )
+ # jsonfile = open(filespec, "w")
+ # dump(new, jsonfile)
+ return new
+
+
+
+
+
+
+
+
+
+
+
+
+
\ No newline at end of file
diff --git a/api/odk2osm/index.html b/api/odk2osm/index.html
new file mode 100644
index 000000000..51f713679
--- /dev/null
+++ b/api/odk2osm/index.html
@@ -0,0 +1,1358 @@
+ odk2osm.py - osm-fieldwork
+
+odk2osm.py
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+ main
+
+
+
+
+
+
+
+
This is a program that reads in the ODK Instance file, which is in XML,
+and converts it to an OSM XML file so it can be viewed in an editor.
+
+
+ Source code in osm_fieldwork/odk2osm.py
+ def main ():
+ """This is a program that reads in the ODK Instance file, which is in XML,
+ and converts it to an OSM XML file so it can be viewed in an editor.
+ """
+ parser = argparse . ArgumentParser ( description = "Convert ODK XML instance file to OSM XML format" )
+ parser . add_argument ( "-v" , "--verbose" , nargs = "?" , const = "0" , help = "verbose output" )
+ parser . add_argument ( "-y" , "--yaml" , help = "Alternate YAML file" )
+ parser . add_argument ( "-x" , "--xlsfile" , help = "Source XLSFile" )
+ parser . add_argument ( "-i" , "--infile" , required = True , help = "The input file" )
+ # parser.add_argument("-o","--outfile", default='tmp.csv', help='The output file for JOSM')
+ args = parser . parse_args ()
+
+ # if verbose, dump to the terminal
+ if args . verbose is not None :
+ logging . basicConfig (
+ level = logging . DEBUG ,
+ format = ( " %(threadName)10s - %(name)s - %(levelname)s - %(message)s " ),
+ datefmt = "%y-%m-%d %H:%M:%S" ,
+ stream = sys . stdout ,
+ )
+
+ toplevel = Path ( args . infile )
+ odk = ODKParsers ( args . yaml )
+ odk . parseXLS ( args . xlsfile )
+ out = OutSupport ()
+ xmlfiles = list ()
+ data = list ()
+ # It's a wildcard, used for XML instance files
+ if args . infile . find ( "*" ) >= 0 :
+ log . debug ( f "Parsing multiple ODK XML files { args . infile } " )
+ toplevel = Path ( args . infile [: - 1 ])
+ for dirs in glob . glob ( args . infile ):
+ xml = os . listdir ( dirs )
+ full = os . path . join ( dirs , xml [ 0 ])
+ xmlfiles . append ( full )
+ for infile in xmlfiles :
+ tmp = odk . XMLparser ( infile )
+ entry = odk . createEntry ( tmp [ 0 ])
+ data . append ( entry )
+ elif toplevel . suffix == ".xml" :
+ # It's an instance file from ODK Collect
+ log . debug ( f "Parsing ODK XML files { args . infile } " )
+ # There is always only one XML file per infile
+ full = os . path . join ( toplevel , os . path . basename ( toplevel ))
+ xmlfiles . append ( full + ".xml" )
+ tmp = odk . XMLparser ( args . infile )
+ # odki = ODKInstance(filespec=args.infile, yaml=args.yaml)
+ entry = odk . createEntry ( tmp )
+ data . append ( entry )
+ elif toplevel . suffix == ".csv" :
+ log . debug ( f "Parsing csv files { args . infile } " )
+ for entry in odk . CSVparser ( args . infile ):
+ data . append ( odk . createEntry ( entry ))
+ elif toplevel . suffix == ".json" :
+ log . debug ( f "Parsing json files { args . infile } " )
+ for entry in odk . JSONparser ( args . infile ):
+ data . append ( odk . createEntry ( entry ))
+
+ # Write the data
+ out . WriteData ( toplevel . stem , data )
+
+
+
+
+
+
+
+
+
+
+
+
+
\ No newline at end of file
diff --git a/api/osmfile/index.html b/api/osmfile/index.html
new file mode 100644
index 000000000..49f587f01
--- /dev/null
+++ b/api/osmfile/index.html
@@ -0,0 +1,2984 @@
+ osmfile - osm-fieldwork
+
+osmfile.py
+
+
+
+
+
+
+
+
+
+ Bases: object
+
+
+
OSM File output.
+
+
+
+
+
Parameters:
+
+
+
+ Name
+ Type
+ Description
+ Default
+
+
+
+
+ filespec
+
+ str
+
+
+
+
The input or output file
+
+
+
+ None
+
+
+
+ options
+
+ dict
+
+
+
+
+
+ None
+
+
+
+ outdir
+
+ str
+
+
+
+
The output directory for the file
+
+
+
+ '/tmp/'
+
+
+
+
+
+
+
+
Returns:
+
+
+
+ Type
+ Description
+
+
+
+
+
+ OsmFile
+
+
+
+
An instance of this object
+
+
+
+
+
+
+
+ Source code in osm_fieldwork/osmfile.py
+ def __init__ (
+ self ,
+ filespec : str = None ,
+ options : dict = None ,
+ outdir : str = "/tmp/" ,
+):
+ """This class reads and writes OSM XML formatted files.
+
+ Args:
+ filespec (str): The input or output file
+ options (dict): Command line options
+ outdir (str): The output directory for the file
+
+ Returns:
+ (OsmFile): An instance of this object
+ """
+ if options is None :
+ options = dict ()
+ self . options = options
+ # Read the config file to get our OSM credentials, if we have any
+ # self.config = config.config(self.options)
+ self . version = 3
+ self . visible = "true"
+ self . osmid = - 1
+ # Open the OSM output file
+ self . file = None
+ if filespec is not None :
+ self . file = open ( filespec , "w" )
+ # self.file = open(filespec + ".osm", 'w')
+ logging . info ( "Opened output file: " + filespec )
+ self . header ()
+ # logging.error("Couldn't open %s for writing!" % filespec)
+
+ # This is the file that contains all the filtering data
+ # self.ctable = convfile(self.options.get('convfile'))
+ # self.options['convfile'] = None
+ # These are for importing the CO addresses
+ self . full = None
+ self . addr = None
+ # decrement the ID
+ self . start = - 1
+ # path = xlsforms_path.replace("xlsforms", "")
+ self . convert = Convert ()
+ self . data = list ()
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+ isclosed
+
+
+
+
+
+
+
+
Is the OSM XML file open or closed?
+
+
+
+
Returns:
+
+
+
+ Type
+ Description
+
+
+
+
+
+ bool
+
+
+
+
The OSM XML file status
+
+
+
+
+
+
+
+ Source code in osm_fieldwork/osmfile.py
+ def isclosed ( self ):
+ """Is the OSM XML file open or closed?
+
+ Returns:
+ (bool): The OSM XML file status
+ """
+ return self . file . closed
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
Write the header of the OSM XML file.
+
+
+ Source code in osm_fieldwork/osmfile.py
+ def header ( self ):
+ """Write the header of the OSM XML file."""
+ if self . file is not None :
+ self . file . write ( "<?xml version='1.0' encoding='UTF-8'?> \n " )
+ # self.file.write('<osm version="0.6" generator="osm-fieldowrk 0.3" timestamp="2017-03-13T21:43:02Z">\n')
+ self . file . write ( '<osm version="0.6" generator="osm-fieldwork 0.3"> \n ' )
+ self . file . flush ()
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
Write the footer of the OSM XML file.
+
+
+ Source code in osm_fieldwork/osmfile.py
+ def footer ( self ):
+ """Write the footer of the OSM XML file."""
+ # logging.debug("FIXME: %r" % self.file)
+ if self . file is not None :
+ self . file . write ( "</osm> \n " )
+ self . file . flush ()
+ self . file . close ()
+ self . file = None
+
+
+
+
+
+
+
+
+
+
+
+
+
+ write
+
+
+
+
+
+
+
+
Write the data to the OSM XML file.
+
+
+ Source code in osm_fieldwork/osmfile.py
+ def write (
+ self ,
+ data = None ,
+):
+ """Write the data to the OSM XML file."""
+ if type ( data ) == list :
+ if data is not None :
+ for line in data :
+ self . file . write ( " %s \n " % line )
+ else :
+ self . file . write ( " %s \n " % data )
+
+
+
+
+
+
+
+
+
+
+
+
+
+ createWay
+
+
+
+
createWay ( way , modified = False )
+
+
+
+
+
This creates a string that is the OSM representation of a way.
+
+
+
+
Parameters:
+
+
+
+ Name
+ Type
+ Description
+ Default
+
+
+
+
+ way
+
+ dict
+
+
+
+
The input way data structure
+
+
+
+ required
+
+
+
+ modified
+
+ bool
+
+
+
+
Is this a modified feature?
+
+
+
+ False
+
+
+
+
+
+
+
+
Returns:
+
+
+
+ Type
+ Description
+
+
+
+
+
+ str
+
+
+
+
+
+
+
+
+
+ Source code in osm_fieldwork/osmfile.py
+ def createWay (
+ self ,
+ way : dict ,
+ modified : bool = False ,
+):
+ """This creates a string that is the OSM representation of a way.
+
+ Args:
+ way (dict): The input way data structure
+ modified (bool): Is this a modified feature?
+
+ Returns:
+ (str): The OSM XML entry
+ """
+ attrs = dict ()
+ osm = ""
+
+ # Add default attributes
+ if modified :
+ attrs [ "action" ] = "modify"
+ if "osm_way_id" in way [ "attrs" ]:
+ attrs [ "id" ] = int ( way [ "attrs" ][ "osm_way_id" ])
+ elif "osm_id" in way [ "attrs" ]:
+ attrs [ "id" ] = int ( way [ "attrs" ][ "osm_id" ])
+ elif "id" in way [ "attrs" ]:
+ attrs [ "id" ] = int ( way [ "attrs" ][ "id" ])
+ else :
+ attrs [ "id" ] = self . start
+ self . start -= 1
+ if "version" not in way [ "attrs" ]:
+ attrs [ "version" ] = 1
+ else :
+ attrs [ "version" ] = way [ "attrs" ][ "version" ]
+ attrs [ "timestamp" ] = datetime . now () . strftime ( "%Y-%m-%dT%TZ" )
+ # If the resulting file is publicly accessible without authentication, the GDPR applies
+ # and the identifying fields should not be included
+ if "uid" in way [ "attrs" ]:
+ attrs [ "uid" ] = way [ "attrs" ][ "uid" ]
+ if "user" in way [ "attrs" ]:
+ attrs [ "user" ] = way [ "attrs" ][ "user" ]
+
+ # Make all the nodes first. The data in the track has 4 fields. The first two
+ # are the lat/lon, then the altitude, and finally the GPS accuracy.
+ # newrefs = list()
+ node = dict ()
+ node [ "attrs" ] = dict ()
+ # The geometry is an EWKT string, so there is no need to get fancy with
+ # geometries, just manipulate the string, as OSM XML it's only strings
+ # anyway.
+ # geom = way['geom'][19:][:-2]
+ # print(geom)
+ # points = geom.split(",")
+ # print(points)
+
+ # epdb.st()
+ # loop = 0
+ # while loop < len(way['refs']):
+ # #print(f"{points[loop]} {way['refs'][loop]}")
+ # node['timestamp'] = attrs['timestamp']
+ # if 'user' in attrs and attrs['user'] is not None:
+ # node['attrs']['user'] = attrs['user']
+ # if 'uid' in attrs and attrs['uid'] is not None:
+ # node['attrs']['uid'] = attrs['uid']
+ # node['version'] = 0
+ # lat,lon = points[loop].split(' ')
+ # node['attrs']['lat'] = lat
+ # node['attrs']['lon'] = lon
+ # node['attrs']['id'] = way['refs'][loop]
+ # osm += self.createNode(node) + '\n'
+ # loop += 1
+
+ # Process attributes
+ line = ""
+ for ref , value in attrs . items ():
+ line += " %s = %r " % ( ref , str ( value ))
+ osm += " <way " + line + ">"
+
+ if "refs" in way :
+ for ref in way [ "refs" ]:
+ osm += ' \n <nd ref=" %s "/>' % ref
+ if "tags" in way :
+ for key , value in way [ "tags" ] . items ():
+ if value is None :
+ continue
+ if key == "track" :
+ continue
+ if key not in attrs :
+ newkey = escape ( key )
+ newval = escape ( str ( value ))
+ osm += f " \n <tag k=' { newkey } ' v=' { newval } '/>"
+ if modified :
+ osm += ' \n <tag k="note" v="Do not upload this without validation!"/>'
+ osm += " \n "
+ osm += " </way> \n "
+
+ return osm
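
The `way` dict that createWay() consumes has three parts, shown here with invented values: `attrs` carries OSM metadata, `refs` the node ids, and `tags` the key/value pairs:

```python
# Negative ids mark features not yet uploaded to OSM; a closed way
# references its first node id again at the end.
way = {
    "attrs": {"osm_id": "1234", "version": "2"},
    "refs": [-1, -2, -3, -1],
    "tags": {"building": "yes"},
}
new_ids = [r for r in way["refs"] if r < 0]
print(sorted(set(new_ids)))
```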
+
+
+
+
+
+
+
+
+
+
+
+
+
+ featureToNode
+
+
+
+
+
+
+
+
Convert a GeoJson feature into the data structures used here.
+
+
+
+
Parameters:
+
+
+
+ Name
+ Type
+ Description
+ Default
+
+
+
+
+ feature
+
+ dict
+
+
+
+
The GeoJson feature to convert
+
+
+
+ required
+
+
+
+
+
+
+
+
Returns:
+
+
+
+ Type
+ Description
+
+
+
+
+
+ dict
+
+
+
+
The data structure used by this file
+
+
+
+
+
+
+
+ Source code in osm_fieldwork/osmfile.py
+ def featureToNode (
+ self ,
+ feature : dict ,
+):
+ """Convert a GeoJson feature into the data structures used here.
+
+ Args:
+ feature (dict): The GeoJson feature to convert
+
+ Returns:
+ (dict): The data structure used by this file
+ """
+ osm = dict ()
+ ignore = ( "label" , "title" )
+ tags = dict ()
+ attrs = dict ()
+ for tag , value in feature [ "properties" ] . items ():
+ if tag == "id" :
+ attrs [ "osm_id" ] = value
+ elif tag not in ignore :
+ tags [ tag ] = value
+ coords = feature [ "geometry" ][ "coordinates" ]
+ attrs [ "lat" ] = coords [ 1 ]
+ attrs [ "lon" ] = coords [ 0 ]
+ osm [ "attrs" ] = attrs
+ osm [ "tags" ] = tags
+ return osm
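
The GeoJSON-to-internal-node conversion above can be sketched standalone; the sample feature is made up for illustration:

```python
# "id" becomes the osm_id attribute, display-only keys are skipped,
# and GeoJSON's [lon, lat] order is flipped into lat/lon attributes.
def feature_to_node(feature):
    ignore = ("label", "title")
    tags, attrs = {}, {}
    for tag, value in feature["properties"].items():
        if tag == "id":
            attrs["osm_id"] = value
        elif tag not in ignore:
            tags[tag] = value
    lon, lat = feature["geometry"]["coordinates"]
    attrs["lat"], attrs["lon"] = lat, lon
    return {"attrs": attrs, "tags": tags}

feature = {"properties": {"id": 42, "name": "Spring", "label": "x"},
           "geometry": {"type": "Point", "coordinates": [-105.3, 40.1]}}
print(feature_to_node(feature))
```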
+
+
+
+
+
+
+
+
+
+
+
+
+
+ createNode
+
+
+
+
createNode ( node , modified = False )
+
+
+
+
+
This creates a string that is the OSM representation of a node.
+
+
+
+
Parameters:
+
+
+
+ Name
+ Type
+ Description
+ Default
+
+
+
+
+ node
+
+ dict
+
+
+
+
The input node data structure
+
+
+
+ required
+
+
+
+ modified
+
+ bool
+
+
+
+
Is this a modified feature?
+
+
+
+ False
+
+
+
+
+
+
+
+
Returns:
+
+
+
+ Type
+ Description
+
+
+
+
+
+ str
+
+
+
+
+
+
+
+
+
+ Source code in osm_fieldwork/osmfile.py
+ def createNode (
+ self ,
+ node : dict ,
+ modified : bool = False ,
+):
+ """This creates a string that is the OSM representation of a node.
+
+ Args:
+ node (dict): The input node data structure
+ modified (bool): Is this a modified feature?
+
+ Returns:
+ (str): The OSM XML entry
+ """
+ attrs = dict ()
+ # Add default attributes
+ if modified :
+ attrs [ "action" ] = "modify"
+
+ if "id" in node [ "attrs" ]:
+ attrs [ "id" ] = int ( node [ "attrs" ][ "id" ])
+ else :
+ attrs [ "id" ] = self . start
+ self . start -= 1
+ if "version" not in node [ "attrs" ]:
+ attrs [ "version" ] = "1"
+ else :
+ attrs [ "version" ] = int ( node [ "attrs" ][ "version" ]) + 1
+ attrs [ "lat" ] = node [ "attrs" ][ "lat" ]
+ attrs [ "lon" ] = node [ "attrs" ][ "lon" ]
+ attrs [ "timestamp" ] = datetime . now () . strftime ( "%Y-%m-%dT%TZ" )
+ # If the resulting file is publicly accessible without authentication, the GDPR applies
+ # and the identifying fields should not be included
+ if "uid" in node [ "attrs" ]:
+ attrs [ "uid" ] = node [ "attrs" ][ "uid" ]
+ if "user" in node [ "attrs" ]:
+ attrs [ "user" ] = node [ "attrs" ][ "user" ]
+
+ # Process attributes
+ line = ""
+ osm = ""
+ for ref , value in attrs . items ():
+ line += " %s = %r " % ( ref , str ( value ))
+ osm += " <node " + line
+
+ if "tags" in node :
+ osm += ">"
+ for key , value in node [ "tags" ] . items ():
+ if not value :
+ continue
+ if key not in attrs :
+ newkey = escape ( key )
+ newval = escape ( str ( value ))
+ osm += f " \n <tag k=' { newkey } ' v=' { newval } '/>"
+ osm += " \n </node> \n "
+ else :
+ osm += "/>"
+
+ return osm
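
The shape of the XML a createNode() call emits can be sketched with a minimal standalone builder (id, coordinates, and tags below are invented; the timestamp and GDPR handling are omitted for brevity):

```python
from xml.sax.saxutils import escape

# Build a <node> element string: attributes first, then nested <tag>
# children when any tags are present, matching the structure above.
def node_xml(osmid, lat, lon, tags):
    attrs = " ".join("%s=%r" % (k, str(v)) for k, v in
                     [("id", osmid), ("version", 1), ("lat", lat), ("lon", lon)])
    body = "".join("\n  <tag k='%s' v='%s'/>" % (escape(k), escape(str(v)))
                   for k, v in tags.items())
    if body:
        return "<node %s>%s\n</node>" % (attrs, body)
    return "<node %s/>" % attrs

print(node_xml(-1, 40.0, -105.2, {"amenity": "cafe"}))
```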
+
+
+
+
+
+
+
+
+
+
+
+
+
+ createTag
+
+
+
+
createTag ( field , value )
+
+
+
+
+
Create a data structure for an OSM feature tag.
+
+
+
+
Parameters:
+
+
+
+ Name
+ Type
+ Description
+ Default
+
+
+
+
+ field
+
+ str
+
+
+
+
+
+ required
+
+
+
+ value
+
+ str
+
+
+
+
The value for the tag
+
+
+
+ required
+
+
+
+
+
+
+
+
Returns:
+
+
+
+ Type
+ Description
+
+
+
+
+
+ dict
+
+
+
+
The newly created tag pair
+
+
+
+
+
+
+
+ Source code in osm_fieldwork/osmfile.py
+ def createTag (
+ self ,
+ field : str ,
+ value : str ,
+):
+ """Create a data structure for an OSM feature tag.
+
+ Args:
+ field (str): The tag name
+ value (str): The value for the tag
+
+ Returns:
+ (dict): The newly created tag pair
+ """
+ newval = str ( value )
+ newval = newval . replace ( "&" , "and" )
+ newval = newval . replace ( '"' , "" )
+ tag = dict ()
+ # logging.debug("OSM:makeTag(field=%r, value=%r)" % (field, newval))
+
+ newtag = field
+ change = newval . split ( "=" )
+ if len ( change ) > 1 :
+ newtag = change [ 0 ]
+ newval = change [ 1 ]
+
+ tag [ newtag ] = newval
+ return tag
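
createTag() sanitizes the value and lets an embedded `key=value` string override the field name. A standalone sketch of that logic (sample inputs invented):

```python
# Strip characters unsafe for OSM XML, then allow a value like
# "cuisine=pizza" to supply both the tag key and the tag value.
def make_tag(field, value):
    newval = str(value).replace("&", "and").replace('"', "")
    newtag = field
    change = newval.split("=")
    if len(change) > 1:
        newtag, newval = change[0], change[1]
    return {newtag: newval}

print(make_tag("amenity", "cafe"))         # plain key/value
print(make_tag("other", "cuisine=pizza"))  # value overrides the key
```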
+
+
+
+
+
+
+
+
+
+
+
+
+
+ loadFile
+
+
+
+
+
+
+
+
Read an OSM XML file generated by osm_fieldwork.
+
+
+
+
Parameters:
+
+
+
+ Name
+ Type
+ Description
+ Default
+
+
+
+
+ osmfile
+
+ str
+
+
+
+
The OSM XML file to load
+
+
+
+ required
+
+
+
+
+
+
+
+
Returns:
+
+
+
+ Type
+ Description
+
+
+
+
+
+ list
+
+
+
+
The entries in the OSM XML file
+
+
+
+
+
+
+
+ Source code in osm_fieldwork/osmfile.py
+ def loadFile (
+ self ,
+ osmfile : str ,
+):
+ """Read an OSM XML file generated by osm_fieldwork.
+
+ Args:
+ osmfile (str): The OSM XML file to load
+
+ Returns:
+ (list): The entries in the OSM XML file
+ """
+ size = os . path . getsize ( osmfile )
+ with open ( osmfile , "r" ) as file :
+ xml = file . read ( size )
+ doc = xmltodict . parse ( xml )
+ if "osm" not in doc :
+ logging . warning ( "No data in this instance" )
+ return False
+ data = doc [ "osm" ]
+ if "node" not in data :
+ logging . warning ( "No nodes in this instance" )
+ return False
+
+ for node in data [ "node" ]:
+ attrs = {
+ "id" : int ( node [ "@id" ]),
+ "lat" : node [ "@lat" ][: 10 ],
+ "lon" : node [ "@lon" ][: 10 ],
+ }
+ if "@timestamp" in node :
+ attrs [ "timestamp" ] = node [ "@timestamp" ]
+
+ tags = dict ()
+ if "tag" in node :
+ for tag in node [ "tag" ]:
+ if type ( tag ) == dict :
+ tags [ tag [ "@k" ]] = tag [ "@v" ] . strip ()
+ # continue
+ else :
+ tags [ node [ "tag" ][ "@k" ]] = node [ "tag" ][ "@v" ] . strip ()
+ # continue
+ node = { "attrs" : attrs , "tags" : tags }
+ self . data . append ( node )
+
+ for way in data [ "way" ]:
+ attrs = {
+ "id" : int ( way [ "@id" ]),
+ }
+ refs = list ()
+ if len ( way [ "nd" ]) > 0 :
+ for ref in way [ "nd" ]:
+ refs . append ( int ( ref [ "@ref" ]))
+
+ if "@timestamp" in node :
+ attrs [ "timestamp" ] = node [ "@timestamp" ]
+
+ tags = dict ()
+ if "tag" in way :
+ for tag in way [ "tag" ]:
+ if type ( tag ) == dict :
+ tags [ tag [ "@k" ]] = tag [ "@v" ] . strip ()
+ # continue
+ else :
+ if len ( node [ "tags" ]) > 0 :
+ tags [ node [ "tags" ][ "@k" ]] = node [ "tags" ][ "@v" ] . strip ()
+ # continue
+ way = { "attrs" : attrs , "refs" : refs , "tags" : tags }
+ self . data . append ( way )
+
+ return self . data
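
loadFile() relies on the third-party xmltodict package; the node-parsing half can be sketched equivalently with the standard library's ElementTree (the sample XML below is invented):

```python
import xml.etree.ElementTree as ET

xml = """<osm version="0.6">
  <node id="-1" lat="40.1" lon="-105.3">
    <tag k="amenity" v="cafe"/>
  </node>
</osm>"""

# Collect each node's id/lat/lon attributes (coordinates truncated to
# ten characters, as above) plus its tag key/value pairs.
data = []
root = ET.fromstring(xml)
for node in root.findall("node"):
    attrs = {"id": int(node.get("id")),
             "lat": node.get("lat")[:10],
             "lon": node.get("lon")[:10]}
    tags = {t.get("k"): t.get("v").strip() for t in node.findall("tag")}
    data.append({"attrs": attrs, "tags": tags})
print(data)
```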
+
+
+
+
+
+
+
+
+
+
+
+
+
+ dump
+
+
+
+
+
+
+
+
Dump internal data structures, for debugging purposes only.
+
+
+ Source code in osm_fieldwork/osmfile.py
+ def dump ( self ):
+ """Dump internal data structures, for debugging purposes only."""
+ for item in self . data :
+ for k , v in item [ "attrs" ] . items ():
+ print ( f " { k } = { v } " )
+ for k , v in item [ "tags" ] . items ():
+ print ( f " \t { k } = { v } " )
+
+
+
+
+
+
+
+
+
+
+
+
+
+ getFeature
+
+
+
+
+
+
+
+
Get the data for a feature from the loaded OSM data file.
+
+
+
+
Parameters:
+
+
+
+ Name
+ Type
+ Description
+ Default
+
+
+
+
+ id
+
+ int
+
+
+
+
The ID of the feature to retrieve
+
+
+
+ required
+
+
+
+
+
+
+
+
Returns:
+
+
+
+ Type
+ Description
+
+
+
+
+
+ dict
+
+
+
+
The feature for this ID or None
+
+
+
+
+
+
+
+ Source code in osm_fieldwork/osmfile.py
+ def getFeature (
+ self ,
+ id : int ,
+):
+ """Get the data for a feature from the loaded OSM data file.
+
+ Args:
+ id (int): The ID of the feature to retrieve
+
+ Returns:
+ (dict): The feature for this ID or None
+ """
+ return self . data [ id ]
+
+
+
+
+
+
+
+
+
+
+
+
+
+ getFields
+
+
+
+
+
+
+
+
Extract all the tags used in this file.
+
+
+ Source code in osm_fieldwork/osmfile.py
+ def getFields ( self ):
+ """Extract all the tags used in this file."""
+ fields = list ()
+ for _id , item in self . data . items ():
+ keys = list ( item [ "tags" ] . keys ())
+ for key in keys :
+ if key not in fields :
+ fields . append ( key )
+
+
+
+
+
+
+
+
+
+
+
+
+
\ No newline at end of file
diff --git a/api/parsers/index.html b/api/parsers/index.html
new file mode 100644
index 000000000..d43344bda
--- /dev/null
+++ b/api/parsers/index.html
@@ -0,0 +1,2031 @@
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+ OdkParsers - osm-fieldwork
+
+OdkParsers
+
+
+
+
+
+
+
+
+
+ Bases: Convert
+
+
+
A class to parse the CSV files from ODK Central.
+
+
+ Source code in osm_fieldwork/parsers.py
+def __init__(
+    self,
+    yaml: str = None,
+):
+    self.fields = dict()
+    self.nodesets = dict()
+    self.data = list()
+    self.osm = None
+    self.json = None
+    self.features = list()
+    xlsforms_path.replace("xlsforms", "")
+    if yaml:
+        pass
+    else:
+        pass
+    self.config = super().__init__(yaml)
+    self.saved = dict()
+    self.defaults = dict()
+    self.entries = dict()
+    self.types = dict()
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+ CSVparser
+
+
+
+
CSVparser ( filespec , data = None )
+
+
+
+
+
Parse the CSV file from ODK Central and convert it to a data structure.
+
+
+
+
Parameters:
+
+
+
+ Name
+ Type
+ Description
+ Default
+
+
+
+
+ filespec
+
+ str
+
+
+
+
+
+ required
+
+
+
+ data
+
+ str
+
+
+
+
Or the data to parse.
+
+
+
+ None
+
+
+
+
+
+
+
+
Returns:
+
+
+
+ Type
+ Description
+
+
+
+
+
+ list
+
+
+
+
The list of features with tags
+
+
+
+
+
+
+
+ Source code in osm_fieldwork/parsers.py
+def CSVparser(
+    self,
+    filespec: str,
+    data: str = None,
+) -> list:
+    """Parse the CSV file from ODK Central and convert it to a data structure.
+
+    Args:
+        filespec (str): The file to parse.
+        data (str): Or the data to parse.
+
+    Returns:
+        (list): The list of features with tags
+    """
+    all_tags = list()
+    if not data:
+        f = open(filespec, newline="")
+        reader = csv.DictReader(f, delimiter=",")
+    else:
+        reader = csv.DictReader(data, delimiter=",")
+    for row in reader:
+        tags = dict()
+        # log.info(f"ROW: {row}")
+        for keyword, value in row.items():
+            if keyword is None or value is None:
+                continue
+            if len(value) == 0:
+                continue
+            base = basename(keyword).lower()
+            # There's many extraneous fields in the input file which we don't need.
+            if base is None or base in self.ignore or value is None:
+                continue
+            else:
+                # log.info(f"ITEM: {keyword} = {value}")
+                if base in self.types:
+                    if self.types[base] == "select_multiple":
+                        vals = self.convertMultiple(value)
+                        if len(vals) > 0:
+                            tags.update(vals)
+                        continue
+                # When using geopoint warmup, once the display changes to the map
+                # location, there is not always a value if the accuracy is way
+                # off. In this case use the warmup value, which is where we are
+                # hopefully standing anyway.
+                if base == "latitude" and len(value) == 0:
+                    if "warmup-Latitude" in row:
+                        value = row["warmup-Latitude"]
+                if base == "longitude" and len(value) == 0:
+                    value = row["warmup-Longitude"]
+                items = self.convertEntry(base, value)
+                # log.info(f"ROW: {base} {value}")
+                if len(items) > 0:
+                    if base in self.saved:
+                        if str(value) == "nan" or len(value) == 0:
+                            # log.debug(f"FIXME: {base} {value}")
+                            val = self.saved[base]
+                            if val and len(value) == 0:
+                                log.warning(f'Using last saved value for "{base}"! Now "{val}"')
+                                value = val
+                        else:
+                            self.saved[base] = value
+                            log.debug(f'Updating last saved value for "{base}" with "{value}"')
+                    # Handle nested dict in list
+                    if isinstance(items, list):
+                        items = items[0]
+                    for k, v in items.items():
+                        tags[k] = v
+                else:
+                    tags[base] = value
+        # log.debug(f"\tFIXME1: {tags}")
+        all_tags.append(tags)
+    return all_tags
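As a rough illustration of what CSVparser produces, here is a standalone sketch (not the library code itself) that reads an in-memory CSV the way the parser does with csv.DictReader and keeps only non-empty columns as lowercase tag names. The column names and values are hypothetical.

```python
import csv
import io

# Hypothetical CSV resembling an ODK Central submission export.
raw = "name,amenity,warmup-Latitude,warmup-Longitude\nCafe,cafe,0.1,0.2\n"

features = []
for row in csv.DictReader(io.StringIO(raw), delimiter=","):
    # Mirror the parser's filtering: skip empty values, lowercase the keys.
    tags = {key.lower(): value for key, value in row.items() if value}
    features.append(tags)
```

Each submission row becomes one dict of tags, matching the list-of-dicts return value documented above.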
+
+
+
+
+
+
+
+
+
+
+
+
+
+ JSONparser
+
+
+
+
JSONparser ( filespec = None , data = None )
+
+
+
+
+
Parse the JSON file from ODK Central and convert it to a data structure.
+The input is either a filespec to open, or the data itself.
+
+
+
+
Parameters:
+
+
+
+ Name
+ Type
+ Description
+ Default
+
+
+
+
+ filespec
+
+ str
+
+
+
+
The JSON or GeoJson input file to convert
+
+
+
+ None
+
+
+
+ data
+
+ str
+
+
+
+
+
+ None
+
+
+
+
+
+
+
+
Returns:
+
+
+
+ Type
+ Description
+
+
+
+
+
+ list
+
+
+
+
A list of all the features in the input file
+
+
+
+
+
+
+
+ Source code in osm_fieldwork/parsers.py
+def JSONparser(
+    self,
+    filespec: str = None,
+    data: str = None,
+) -> list:
+    """Parse the JSON file from ODK Central and convert it to a data structure.
+    The input is either a filespec to open, or the data itself.
+
+    Args:
+        filespec (str): The JSON or GeoJson input file to convert
+        data (str): The data to convert
+
+    Returns:
+        (list): A list of all the features in the input file
+    """
+    log.debug(f"Parsing JSON file {filespec}")
+    total = list()
+    if not data:
+        file = open(filespec, "r")
+        infile = Path(filespec)
+        if infile.suffix == ".geojson":
+            reader = geojson.load(file)
+        elif infile.suffix == ".json":
+            reader = json.load(file)
+        else:
+            log.error("Need to specify a JSON or GeoJson file!")
+            return total
+    elif isinstance(data, str):
+        reader = geojson.loads(data)
+    elif isinstance(data, list):
+        reader = data
+
+    # JSON files from Central use value as the keyword, whereas
+    # GeoJSON uses features for the same thing.
+    if "value" in reader:
+        data = reader["value"]
+    elif "features" in reader:
+        data = reader["features"]
+    else:
+        data = reader
+    for row in data:
+        # log.debug(f"ROW: {row}\n")
+        tags = dict()
+        if "properties" in row:
+            row["properties"]  # A GeoJson formatted file
+        else:
+            pass  # A JOSM file from ODK Central
+
+        # flatten all the groups into a single data structure
+        flattened = flatdict.FlatDict(row)
+        # log.debug(f"FLAT: {flattened}\n")
+        for k, v in flattened.items():
+            last = k.rfind(":") + 1
+            key = k[last:]
+            # a JSON file from ODK Central always uses coordinates as
+            # the keyword
+            if key is None or key in self.ignore or v is None:
+                continue
+            # log.debug(f"Processing tag {key} = {v}")
+            if key == "coordinates":
+                if isinstance(v, list):
+                    tags["lat"] = v[1]
+                    tags["lon"] = v[0]
+                    # poi = Point(float(lon), float(lat))
+                    # tags["geometry"] = poi
+                continue
+
+            if key in self.types:
+                if self.types[key] == "select_multiple":
+                    # log.debug(f"Found key '{self.types[key]}'")
+                    if v is None:
+                        continue
+                    vals = self.convertMultiple(v)
+                    if len(vals) > 0:
+                        tags.update(vals)
+                    continue
+            items = self.convertEntry(key, v)
+            if items is None or len(items) == 0:
+                continue
+
+            if type(items) == str:
+                log.debug(f"string Item {items}")
+            elif type(items) == list:
+                # log.debug(f"list Item {items}")
+                tags.update(items[0])
+            elif type(items) == dict:
+                # log.debug(f"dict Item {items}")
+                tags.update(items)
+        # log.debug(f"TAGS: {tags}")
+        if len(tags) > 0:
+            total.append(tags)
+
+    # log.debug(f"Finished parsing JSON file {filespec}")
+    return total
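The "value" versus "features" dispatch described above can be sketched standalone (the submission content here is hypothetical, and the real parser additionally flattens nested groups with flatdict):

```python
import json

# Hypothetical submission: ODK Central wraps rows in "value",
# GeoJSON wraps them in "features".
raw = '{"value": [{"name": "Cafe", "coordinates": [0.2, 0.1]}]}'

reader = json.loads(raw)
# Same dispatch JSONparser performs on the top-level keyword.
rows = reader.get("value") or reader.get("features") or reader

tags = {}
for row in rows:
    # GeoJSON-style coordinates are [lon, lat].
    lon, lat = row["coordinates"]
    tags = {"name": row["name"], "lat": lat, "lon": lon}
```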
+
+
+
+
+
+
+
+
+
+
+
+
+
+ XMLparser
+
+
+
+
XMLparser ( filespec , data = None )
+
+
+
+
+
Import an ODK XML Instance file into a data structure. The input is
+either a filespec to the Instance file copied off your phone, or
+the XML that has been read in elsewhere.
+
+
+
+
Parameters:
+
+
+
+ Name
+ Type
+ Description
+ Default
+
+
+
+
+ filespec
+
+ str
+
+
+
+
The filespec to the ODK XML Instance file
+
+
+
+ required
+
+
+
+ data
+
+ str
+
+
+
+
+
+ None
+
+
+
+
+
+
+
+
Returns:
+
+
+
+ Type
+ Description
+
+
+
+
+
+ list
+
+
+
+
All the entries in the OSM XML Instance file
+
+
+
+
+
+
+
+ Source code in osm_fieldwork/parsers.py
+def XMLparser(
+    self,
+    filespec: str,
+    data: str = None,
+) -> list:
+    """Import an ODK XML Instance file into a data structure. The input is
+    either a filespec to the Instance file copied off your phone, or
+    the XML that has been read in elsewhere.
+
+    Args:
+        filespec (str): The filespec to the ODK XML Instance file
+        data (str): The XML data
+
+    Returns:
+        (list): All the entries in the OSM XML Instance file
+    """
+    row = dict()
+    if filespec:
+        logging.info("Processing instance file: %s" % filespec)
+        file = open(filespec, "rb")
+        # Instances are small, read the whole file
+        xml = file.read(os.path.getsize(filespec))
+    elif data:
+        xml = data
+    doc = xmltodict.parse(xml)
+
+    json.dumps(doc)
+    tags = dict()
+    data = doc["data"]
+    flattened = flatdict.FlatDict(data)
+    # total = list()
+    # log.debug(f"FLAT: {flattened}")
+    pat = re.compile("[0-9.]* [0-9.-]* [0-9.]* [0-9.]*")
+    for key, value in flattened.items():
+        if key[0] == "@" or value is None:
+            continue
+        # Get the last element delimited by a dash
+        # for CSV & JSON, or a colon for ODK XML.
+        base = basename(key)
+        log.debug(f"FLAT: {base} = {value}")
+        if base in self.ignore:
+            continue
+        if re.search(pat, value):
+            gps = value.split(" ")
+            row["lat"] = gps[0]
+            row["lon"] = gps[1]
+            continue
+
+        if base in self.types:
+            if self.types[base] == "select_multiple":
+                # log.debug(f"Found key '{self.types[base]}'")
+                vals = self.convertMultiple(value)
+                if len(vals) > 0:
+                    tags.update(vals)
+                continue
+        else:
+            item = self.convertEntry(base, value)
+            if item is None or len(item) == 0:
+                continue
+            if len(tags) == 0:
+                tags = item[0]
+            else:
+                if type(item) == list:
+                    # log.debug(f"list Item {item}")
+                    tags.update(item[0])
+                elif type(item) == dict:
+                    # log.debug(f"dict Item {item}")
+                    tags.update(item)
+    row.update(tags)
+    return [row]
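The GPS detection step above can be shown standalone: the parser spots a "lat lon altitude accuracy" string with a regex and splits it on spaces. This sketch uses stdlib xml.etree instead of xmltodict/flatdict, and the Instance content is hypothetical.

```python
import re
from xml.etree import ElementTree

# A trimmed Instance document; the GPS field is "lat lon altitude accuracy".
xml = "<data><name>Cafe</name><gps>0.1 0.2 1590.0 4.0</gps></data>"

# Same pattern XMLparser uses to spot a GPS coordinate string.
pat = re.compile("[0-9.]* [0-9.-]* [0-9.]* [0-9.]*")

row = {}
for elem in ElementTree.fromstring(xml):
    if elem.text and pat.search(elem.text):
        gps = elem.text.split(" ")
        row["lat"], row["lon"] = gps[0], gps[1]
    else:
        row[elem.tag] = elem.text
```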
+
+
+
+
+
+
+
+
+
+
+
+
+
options:
+show_source: false
+heading_level: 3
+
+
+
+
+
+
\ No newline at end of file
diff --git a/api/sqlite/index.html b/api/sqlite/index.html
new file mode 100644
index 000000000..1403313d0
--- /dev/null
+++ b/api/sqlite/index.html
@@ -0,0 +1,2984 @@
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+ sqlite - osm-fieldwork
+
+osmfile.py
+
+
+
+
+
+
+
+
+
+ Bases: object
+
+
+
OSM File output.
+
+
+
+
+
Parameters:
+
+
+
+ Name
+ Type
+ Description
+ Default
+
+
+
+
+ filespec
+
+ str
+
+
+
+
The input or output file
+
+
+
+ None
+
+
+
+ options
+
+ dict
+
+
+
+
+
+ None
+
+
+
+ outdir
+
+ str
+
+
+
+
The output directory for the file
+
+
+
+ '/tmp/'
+
+
+
+
+
+
+
+
Returns:
+
+
+
+ Type
+ Description
+
+
+
+
+
+ OsmFile
+
+
+
+
An instance of this object
+
+
+
+
+
+
+
+ Source code in osm_fieldwork/osmfile.py
+def __init__(
+    self,
+    filespec: str = None,
+    options: dict = None,
+    outdir: str = "/tmp/",
+):
+    """This class reads and writes the OSM XML formatted files.
+
+    Args:
+        filespec (str): The input or output file
+        options (dict): Command line options
+        outdir (str): The output directory for the file
+
+    Returns:
+        (OsmFile): An instance of this object
+    """
+    if options is None:
+        options = dict()
+    self.options = options
+    # Read the config file to get our OSM credentials, if we have any
+    # self.config = config.config(self.options)
+    self.version = 3
+    self.visible = "true"
+    self.osmid = -1
+    # Open the OSM output file
+    self.file = None
+    if filespec is not None:
+        self.file = open(filespec, "w")
+        # self.file = open(filespec + ".osm", 'w')
+        logging.info("Opened output file: " + filespec)
+        self.header()
+    # logging.error("Couldn't open %s for writing!" % filespec)
+
+    # This is the file that contains all the filtering data
+    # self.ctable = convfile(self.options.get('convfile'))
+    # self.options['convfile'] = None
+    # These are for importing the CO addresses
+    self.full = None
+    self.addr = None
+    # decrement the ID
+    self.start = -1
+    # path = xlsforms_path.replace("xlsforms", "")
+    self.convert = Convert()
+    self.data = list()
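The constructor opens the output file and immediately writes the header; footer() later closes the document. A minimal sketch of that lifecycle using plain file I/O (not the class itself), with a temporary path standing in for a real output file:

```python
import os
import tempfile

# Write the same header / entry / footer sequence OsmFile manages internally.
path = os.path.join(tempfile.mkdtemp(), "out.osm")
with open(path, "w") as f:
    f.write("<?xml version='1.0' encoding='UTF-8'?>\n")
    f.write('<osm version="0.6" generator="osm-fieldwork 0.3">\n')
    f.write('  <node id="-1" lat="0.1" lon="0.2"/>\n')
    f.write("</osm>\n")
```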
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+ isclosed
+
+
+
+
+
+
+
+
Is the OSM XML file open or closed?
+
+
+
+
Returns:
+
+
+
+ Type
+ Description
+
+
+
+
+
+ bool
+
+
+
+
The OSM XML file status
+
+
+
+
+
+
+
+ Source code in osm_fieldwork/osmfile.py
+def isclosed(self):
+    """Is the OSM XML file open or closed?
+
+    Returns:
+        (bool): The OSM XML file status
+    """
+    return self.file.closed
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
Write the header of the OSM XML file.
+
+
+ Source code in osm_fieldwork/osmfile.py
+def header(self):
+    """Write the header of the OSM XML file."""
+    if self.file is not None:
+        self.file.write("<?xml version='1.0' encoding='UTF-8'?>\n")
+        # self.file.write('<osm version="0.6" generator="osm-fieldwork 0.3" timestamp="2017-03-13T21:43:02Z">\n')
+        self.file.write('<osm version="0.6" generator="osm-fieldwork 0.3">\n')
+        self.file.flush()
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
Write the footer of the OSM XML file.
+
+
+ Source code in osm_fieldwork/osmfile.py
+def footer(self):
+    """Write the footer of the OSM XML file."""
+    # logging.debug("FIXME: %r" % self.file)
+    if self.file is not None:
+        self.file.write("</osm>\n")
+        self.file.flush()
+    if self.file is False:
+        self.file.close()
+    self.file = None
+
+
+
+
+
+
+
+
+
+
+
+
+
+ write
+
+
+
+
+
+
+
+
Write the data to the OSM XML file.
+
+
+ Source code in osm_fieldwork/osmfile.py
+def write(
+    self,
+    data=None,
+):
+    """Write the data to the OSM XML file."""
+    if type(data) == list:
+        if data is not None:
+            for line in data:
+                self.file.write("%s\n" % line)
+    else:
+        self.file.write("%s\n" % data)
+
+
+
+
+
+
+
+
+
+
+
+
+
+ createWay
+
+
+
+
createWay ( way , modified = False )
+
+
+
+
+
This creates a string that is the OSM representation of a way.
+
+
+
+
Parameters:
+
+
+
+ Name
+ Type
+ Description
+ Default
+
+
+
+
+ way
+
+ dict
+
+
+
+
The input way data structure
+
+
+
+ required
+
+
+
+ modified
+
+ bool
+
+
+
+
Is this a modified feature ?
+
+
+
+ False
+
+
+
+
+
+
+
+
Returns:
+
+
+
+ Type
+ Description
+
+
+
+
+
+ str
+
+
+
+
+
+
+
+
+
+ Source code in osm_fieldwork/osmfile.py
+def createWay(
+    self,
+    way: dict,
+    modified: bool = False,
+):
+    """This creates a string that is the OSM representation of a way.
+
+    Args:
+        way (dict): The input way data structure
+        modified (bool): Is this a modified feature?
+
+    Returns:
+        (str): The OSM XML entry
+    """
+    attrs = dict()
+    osm = ""
+
+    # Add default attributes
+    if modified:
+        attrs["action"] = "modify"
+    if "osm_way_id" in way["attrs"]:
+        attrs["id"] = int(way["attrs"]["osm_way_id"])
+    elif "osm_id" in way["attrs"]:
+        attrs["id"] = int(way["attrs"]["osm_id"])
+    elif "id" in way["attrs"]:
+        attrs["id"] = int(way["attrs"]["id"])
+    else:
+        attrs["id"] = self.start
+        self.start -= 1
+    if "version" not in way["attrs"]:
+        attrs["version"] = 1
+    else:
+        attrs["version"] = way["attrs"]["version"]
+    attrs["timestamp"] = datetime.now().strftime("%Y-%m-%dT%TZ")
+    # If the resulting file is publicly accessible without authentication, the GDPR applies
+    # and the identifying fields should not be included
+    if "uid" in way["attrs"]:
+        attrs["uid"] = way["attrs"]["uid"]
+    if "user" in way["attrs"]:
+        attrs["user"] = way["attrs"]["user"]
+
+    # Make all the nodes first. The data in the track has 4 fields. The first two
+    # are the lat/lon, then the altitude, and finally the GPS accuracy.
+    # newrefs = list()
+    node = dict()
+    node["attrs"] = dict()
+    # The geometry is an EWKT string, so there is no need to get fancy with
+    # geometries, just manipulate the string, as OSM XML it's only strings
+    # anyway.
+    # geom = way['geom'][19:][:-2]
+    # print(geom)
+    # points = geom.split(",")
+    # print(points)
+
+    # epdb.st()
+    # loop = 0
+    # while loop < len(way['refs']):
+    #     #print(f"{points[loop]} {way['refs'][loop]}")
+    #     node['timestamp'] = attrs['timestamp']
+    #     if 'user' in attrs and attrs['user'] is not None:
+    #         node['attrs']['user'] = attrs['user']
+    #     if 'uid' in attrs and attrs['uid'] is not None:
+    #         node['attrs']['uid'] = attrs['uid']
+    #     node['version'] = 0
+    #     lat,lon = points[loop].split(' ')
+    #     node['attrs']['lat'] = lat
+    #     node['attrs']['lon'] = lon
+    #     node['attrs']['id'] = way['refs'][loop]
+    #     osm += self.createNode(node) + '\n'
+    #     loop += 1
+
+    # Process attributes
+    line = ""
+    for ref, value in attrs.items():
+        line += "%s=%r " % (ref, str(value))
+    osm += "  <way " + line + ">"
+
+    if "refs" in way:
+        for ref in way["refs"]:
+            osm += '\n    <nd ref="%s"/>' % ref
+    if "tags" in way:
+        for key, value in way["tags"].items():
+            if value is None:
+                continue
+            if key == "track":
+                continue
+            if key not in attrs:
+                newkey = escape(key)
+                newval = escape(str(value))
+                osm += f"\n    <tag k='{newkey}' v='{newval}'/>"
+    if modified:
+        osm += '\n    <tag k="note" v="Do not upload this without validation!"/>'
+    osm += "\n"
+    osm += "  </way>\n"
+
+    return osm
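A compact standalone sketch of the way serialisation described above (not the library method itself; attribute handling, timestamps, and GDPR fields are omitted). Negative IDs mark newly created features, as in createWay:

```python
from xml.sax.saxutils import escape

def sketch_way(way: dict, new_id: int = -1) -> str:
    # Refs become <nd/> children, tags become <tag/> children.
    osm = '  <way id="%s">' % way["attrs"].get("id", new_id)
    for ref in way.get("refs", []):
        osm += '\n    <nd ref="%s"/>' % ref
    for key, value in way.get("tags", {}).items():
        osm += f"\n    <tag k='{escape(key)}' v='{escape(str(value))}'/>"
    return osm + "\n  </way>\n"

xml = sketch_way({"attrs": {}, "refs": [2, 3], "tags": {"building": "yes"}})
```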
+
+
+
+
+
+
+
+
+
+
+
+
+
+ featureToNode
+
+
+
+
+
+
+
+
Convert a GeoJson feature into the data structures used here.
+
+
+
+
Parameters:
+
+
+
+ Name
+ Type
+ Description
+ Default
+
+
+
+
+ feature
+
+ dict
+
+
+
+
The GeoJson feature to convert
+
+
+
+ required
+
+
+
+
+
+
+
+
Returns:
+
+
+
+ Type
+ Description
+
+
+
+
+
+ dict
+
+
+
+
The data structure used by this file
+
+
+
+
+
+
+
+ Source code in osm_fieldwork/osmfile.py
+def featureToNode(
+    self,
+    feature: dict,
+):
+    """Convert a GeoJson feature into the data structures used here.
+
+    Args:
+        feature (dict): The GeoJson feature to convert
+
+    Returns:
+        (dict): The data structure used by this file
+    """
+    osm = dict()
+    ignore = ("label", "title")
+    tags = dict()
+    attrs = dict()
+    for tag, value in feature["properties"].items():
+        if tag == "id":
+            attrs["osm_id"] = value
+        elif tag not in ignore:
+            tags[tag] = value
+    coords = feature["geometry"]["coordinates"]
+    attrs["lat"] = coords[1]
+    attrs["lon"] = coords[0]
+    osm["attrs"] = attrs
+    osm["tags"] = tags
+    return osm
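The conversion is simple enough to restate as a standalone function (the input feature here is hypothetical): "id" becomes osm_id, label/title are dropped, and GeoJSON [lon, lat] coordinates map to lat/lon attributes.

```python
def feature_to_node(feature: dict) -> dict:
    # Mirrors the featureToNode logic shown above.
    ignore = ("label", "title")
    tags, attrs = {}, {}
    for tag, value in feature["properties"].items():
        if tag == "id":
            attrs["osm_id"] = value
        elif tag not in ignore:
            tags[tag] = value
    lon, lat = feature["geometry"]["coordinates"]
    attrs["lat"], attrs["lon"] = lat, lon
    return {"attrs": attrs, "tags": tags}

node = feature_to_node(
    {"properties": {"id": 42, "name": "Cafe", "label": "x"},
     "geometry": {"coordinates": [0.2, 0.1]}}
)
```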
+
+
+
+
+
+
+
+
+
+
+
+
+
+ createNode
+
+
+
+
createNode ( node , modified = False )
+
+
+
+
+
This creates a string that is the OSM representation of a node.
+
+
+
+
Parameters:
+
+
+
+ Name
+ Type
+ Description
+ Default
+
+
+
+
+ node
+
+ dict
+
+
+
+
The input node data structure
+
+
+
+ required
+
+
+
+ modified
+
+ bool
+
+
+
+
Is this a modified feature ?
+
+
+
+ False
+
+
+
+
+
+
+
+
Returns:
+
+
+
+ Type
+ Description
+
+
+
+
+
+ str
+
+
+
+
+
+
+
+
+
+ Source code in osm_fieldwork/osmfile.py
+def createNode(
+    self,
+    node: dict,
+    modified: bool = False,
+):
+    """This creates a string that is the OSM representation of a node.
+
+    Args:
+        node (dict): The input node data structure
+        modified (bool): Is this a modified feature?
+
+    Returns:
+        (str): The OSM XML entry
+    """
+    attrs = dict()
+    # Add default attributes
+    if modified:
+        attrs["action"] = "modify"
+
+    if "id" in node["attrs"]:
+        attrs["id"] = int(node["attrs"]["id"])
+    else:
+        attrs["id"] = self.start
+        self.start -= 1
+    if "version" not in node["attrs"]:
+        attrs["version"] = "1"
+    else:
+        attrs["version"] = int(node["attrs"]["version"]) + 1
+    attrs["lat"] = node["attrs"]["lat"]
+    attrs["lon"] = node["attrs"]["lon"]
+    attrs["timestamp"] = datetime.now().strftime("%Y-%m-%dT%TZ")
+    # If the resulting file is publicly accessible without authentication, the GDPR applies
+    # and the identifying fields should not be included
+    if "uid" in node["attrs"]:
+        attrs["uid"] = node["attrs"]["uid"]
+    if "user" in node["attrs"]:
+        attrs["user"] = node["attrs"]["user"]
+
+    # Process attributes
+    line = ""
+    osm = ""
+    for ref, value in attrs.items():
+        line += "%s=%r " % (ref, str(value))
+    osm += "  <node " + line
+
+    if "tags" in node:
+        osm += ">"
+        for key, value in node["tags"].items():
+            if not value:
+                continue
+            if key not in attrs:
+                newkey = escape(key)
+                newval = escape(str(value))
+                osm += f"\n    <tag k='{newkey}' v='{newval}'/>"
+        osm += "\n  </node>\n"
+    else:
+        osm += "/>"
+
+    return osm
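The shape of the output can be sketched standalone (a simplification, not the library method: versioning, timestamps, and GDPR fields are omitted). Tags become child <tag/> elements; a node without tags is self-closing:

```python
from xml.sax.saxutils import escape

def sketch_node(node: dict, new_id: int = -1) -> str:
    # Attributes first; default to a negative ID for new features.
    attrs = dict(node["attrs"])
    attrs.setdefault("id", new_id)
    osm = "  <node" + "".join(' %s="%s"' % (k, v) for k, v in attrs.items())
    if node.get("tags"):
        osm += ">"
        for key, value in node["tags"].items():
            osm += f"\n    <tag k='{escape(key)}' v='{escape(str(value))}'/>"
        osm += "\n  </node>\n"
    else:
        osm += "/>"
    return osm

xml = sketch_node({"attrs": {"lat": "0.1", "lon": "0.2"}, "tags": {"name": "Cafe"}})
```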
+
+
+
+
+
+
+
+
+
+
+
+
+
+ createTag
+
+
+
+
createTag ( field , value )
+
+
+
+
+
Create a data structure for an OSM feature tag.
+
+
+
+
Parameters:
+
+
+
+ Name
+ Type
+ Description
+ Default
+
+
+
+
+ field
+
+ str
+
+
+
+
+
+ required
+
+
+
+ value
+
+ str
+
+
+
+
The value for the tag
+
+
+
+ required
+
+
+
+
+
+
+
+
Returns:
+
+
+
+ Type
+ Description
+
+
+
+
+
+ dict
+
+
+
+
The newly created tag pair
+
+
+
+
+
+
+
+ Source code in osm_fieldwork/osmfile.py
+def createTag(
+    self,
+    field: str,
+    value: str,
+):
+    """Create a data structure for an OSM feature tag.
+
+    Args:
+        field (str): The tag name
+        value (str): The value for the tag
+
+    Returns:
+        (dict): The newly created tag pair
+    """
+    newval = str(value)
+    newval = newval.replace("&", "and")
+    newval = newval.replace('"', "")
+    tag = dict()
+    # logging.debug("OSM:makeTag(field=%r, value=%r)" % (field, newval))
+
+    newtag = field
+    change = newval.split("=")
+    if len(change) > 1:
+        newtag = change[0]
+        newval = change[1]
+
+    tag[newtag] = newval
+    return tag
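The sanitising behaviour above is easy to demonstrate with a standalone copy of the same logic: '&' becomes 'and', double quotes are dropped, and a value like "amenity=cafe" overrides the field name.

```python
def create_tag(field: str, value: str) -> dict:
    # Standalone restatement of the createTag logic shown above.
    newval = str(value).replace("&", "and").replace('"', "")
    newtag = field
    change = newval.split("=")
    if len(change) > 1:
        newtag, newval = change[0], change[1]
    return {newtag: newval}
```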
+
+
+
+
+
+
+
+
+
+
+
+
+
+ loadFile
+
+
+
+
+
+
+
+
Read an OSM XML file generated by osm_fieldwork.
+
+
+
+
Parameters:
+
+
+
+ Name
+ Type
+ Description
+ Default
+
+
+
+
+ osmfile
+
+ str
+
+
+
+
The OSM XML file to load
+
+
+
+ required
+
+
+
+
+
+
+
+
Returns:
+
+
+
+ Type
+ Description
+
+
+
+
+
+ list
+
+
+
+
The entries in the OSM XML file
+
+
+
+
+
+
+
+ Source code in osm_fieldwork/osmfile.py
+def loadFile(
+    self,
+    osmfile: str,
+):
+    """Read an OSM XML file generated by osm_fieldwork.
+
+    Args:
+        osmfile (str): The OSM XML file to load
+
+    Returns:
+        (list): The entries in the OSM XML file
+    """
+    size = os.path.getsize(osmfile)
+    with open(osmfile, "r") as file:
+        xml = file.read(size)
+    doc = xmltodict.parse(xml)
+    if "osm" not in doc:
+        logging.warning("No data in this instance")
+        return False
+    data = doc["osm"]
+    if "node" not in data:
+        logging.warning("No nodes in this instance")
+        return False
+
+    for node in data["node"]:
+        attrs = {
+            "id": int(node["@id"]),
+            "lat": node["@lat"][:10],
+            "lon": node["@lon"][:10],
+        }
+        if "@timestamp" in node:
+            attrs["timestamp"] = node["@timestamp"]
+
+        tags = dict()
+        if "tag" in node:
+            for tag in node["tag"]:
+                if type(tag) == dict:
+                    tags[tag["@k"]] = tag["@v"].strip()
+                else:
+                    tags[node["tag"]["@k"]] = node["tag"]["@v"].strip()
+        node = {"attrs": attrs, "tags": tags}
+        self.data.append(node)
+
+    for way in data["way"]:
+        attrs = {
+            "id": int(way["@id"]),
+        }
+        refs = list()
+        if len(way["nd"]) > 0:
+            for ref in way["nd"]:
+                refs.append(int(ref["@ref"]))
+
+        if "@timestamp" in node:
+            attrs["timestamp"] = node["@timestamp"]
+
+        tags = dict()
+        if "tag" in way:
+            for tag in way["tag"]:
+                if type(tag) == dict:
+                    tags[tag["@k"]] = tag["@v"].strip()
+                else:
+                    if len(node["tags"]) > 0:
+                        tags[node["tags"]["@k"]] = node["tags"]["@v"].strip()
+        way = {"attrs": attrs, "refs": refs, "tags": tags}
+        self.data.append(way)
+
+    return self.data
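The node-loading half can be sketched with stdlib xml.etree instead of xmltodict (the input snippet is hypothetical but matches the shape this library writes): each <node> becomes a {"attrs": ..., "tags": ...} dict.

```python
from xml.etree import ElementTree

# A tiny file of the kind osm-fieldwork writes.
xml = (
    '<osm version="0.6">'
    '<node id="-1" lat="0.1" lon="0.2"><tag k="name" v="Cafe"/></node>'
    "</osm>"
)

data = []
for node in ElementTree.fromstring(xml).findall("node"):
    attrs = {"id": int(node.get("id")), "lat": node.get("lat"), "lon": node.get("lon")}
    tags = {t.get("k"): t.get("v").strip() for t in node.findall("tag")}
    data.append({"attrs": attrs, "tags": tags})
```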
+
+
+
+
+
+
+
+
+
+
+
+
+
+ dump
+
+
+
+
+
+
+
+
Dump internal data structures, for debugging purposes only.
+
+
+ Source code in osm_fieldwork/osmfile.py
+def dump(self):
+    """Dump internal data structures, for debugging purposes only."""
+    for _id, item in self.data.items():
+        for k, v in item["attrs"].items():
+            print(f"{k} = {v}")
+        for k, v in item["tags"].items():
+            print(f"\t{k} = {v}")
+
+
+
+
+
+
+
+
+
+
+
+
+
+ getFeature
+
+
+
+
+
+
+
+
Get the data for a feature from the loaded OSM data file.
+
+
+
+
Parameters:
+
+
+
+ Name
+ Type
+ Description
+ Default
+
+
+
+
+ id
+
+ int
+
+
+
+
The ID to retrieve the feature of
+
+
+
+ required
+
+
+
+
+
+
+
+
Returns:
+
+
+
+ Type
+ Description
+
+
+
+
+
+ dict
+
+
+
+
The feature for this ID or None
+
+
+
+
+
+
+
+ Source code in osm_fieldwork/osmfile.py
+def getFeature(
+    self,
+    id: int,
+):
+    """Get the data for a feature from the loaded OSM data file.
+
+    Args:
+        id (int): The ID to retrieve the feature of
+
+    Returns:
+        (dict): The feature for this ID or None
+    """
+    return self.data[id]
+
+
+
+
+
+
+
+
+
+
+
+
+
+ getFields
+
+
+
+
+
+
+
+
Extract all the tags used in this file.
+
+
+ Source code in osm_fieldwork/osmfile.py
+def getFields(self):
+    """Extract all the tags used in this file."""
+    fields = list()
+    for _id, item in self.data.items():
+        keys = list(item["tags"].keys())
+        for key in keys:
+            if key not in fields:
+                fields.append(key)
+
+
+
+
+
+
+
+
+
+
+
+
+
options:
+show_source: false
+heading_level: 3
+
+
+
+
+
+
\ No newline at end of file
diff --git a/api/update_xlsform/index.html b/api/update_xlsform/index.html
new file mode 100644
index 000000000..640ced9a5
--- /dev/null
+++ b/api/update_xlsform/index.html
@@ -0,0 +1,1515 @@
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+ update_xlsform - osm-fieldwork
+
Append mandatory fields to the XLSForm for use in FMTM.
+
+
+
+
Parameters:
+
+
+
+ Name
+ Type
+ Description
+ Default
+
+
+
+
+ custom_form(BytesIO)
+
+
+
+
+
the XLSForm data uploaded, wrapped in BytesIO.
+
+
+
+ required
+
+
+
+ form_category(str)
+
+
+
+
+
the form category name (in form_title and description).
+
+
+
+ required
+
+
+
+ additional_entities(list[str])
+
+
+
+
+
add extra select_one_from_file fields to
+reference an additional Entity list (set of geometries).
+The values should be plural, so that 's' will be stripped in the
+field name.
+
+
+
+ required
+
+
+
+ existing_id(str)
+
+
+
+
+
an existing UUID to use for the form_id, else random uuid4.
+
+
+
+ required
+
+
+
+
+
+
+
+
Returns:
+
+
+
+Name Type
+ Description
+
+
+
+
+tuple
+ (str, BytesIO )
+
+
+
+
the xFormId + the updated XLSForm wrapped in BytesIO.
+
+
+
+
+
+
+
+ Source code in osm_fieldwork/update_xlsform.py
+
+async def append_mandatory_fields(
+    custom_form: BytesIO,
+    form_category: str,
+    additional_entities: list[str] = None,
+    existing_id: str = None,
+) -> tuple[str, BytesIO]:
+    """Append mandatory fields to the XLSForm for use in FMTM.
+
+    Args:
+        custom_form (BytesIO): the XLSForm data uploaded, wrapped in BytesIO.
+        form_category (str): the form category name (in form_title and description).
+        additional_entities (list[str]): add extra select_one_from_file fields to
+            reference an additional Entity list (set of geometries).
+            The values should be plural, so that the 's' will be stripped in the
+            field name.
+        existing_id (str): an existing UUID to use for the form_id, else a random uuid4.
+
+    Returns:
+        tuple(str, BytesIO): the xFormId + the updated XLSForm wrapped in BytesIO.
+    """
+    log.info("Appending field mapping questions to XLSForm")
+    custom_sheets = pd.read_excel(custom_form, sheet_name=None, engine="calamine")
+    mandatory_sheets = pd.read_excel(f"{xlsforms_path}/common/mandatory_fields.xls", sheet_name=None, engine="calamine")
+    digitisation_sheets = pd.read_excel(f"{xlsforms_path}/common/digitisation_fields.xls", sheet_name=None, engine="calamine")
+
+    # Merge 'survey' and 'choices' sheets
+    if "survey" not in custom_sheets:
+        msg = "Survey sheet is required in XLSForm!"
+        log.error(msg)
+        raise ValueError(msg)
+    log.debug("Merging survey sheet XLSForm data")
+    custom_sheets["survey"] = merge_dataframes(
+        mandatory_sheets.get("survey"), custom_sheets.get("survey"), digitisation_sheets.get("survey")
+    )
+
+    # Hardcode the form_category value for the start instructions
+    if form_category.endswith("s"):
+        # Plural to singular
+        form_category = form_category[:-1]
+    form_category_row = custom_sheets["survey"].loc[custom_sheets["survey"]["name"] == "form_category"]
+    if not form_category_row.empty:
+        custom_sheets["survey"].loc[custom_sheets["survey"]["name"] == "form_category", "calculation"] = f"once('{form_category}')"
+
+    if "choices" not in custom_sheets:
+        msg = "Choices sheet is required in XLSForm!"
+        log.error(msg)
+        raise ValueError(msg)
+    log.debug("Merging choices sheet XLSForm data")
+    custom_sheets["choices"] = merge_dataframes(
+        mandatory_sheets.get("choices"), custom_sheets.get("choices"), digitisation_sheets.get("choices")
+    )
+
+    # Append or overwrite 'entities' and 'settings' sheets
+    log.debug("Overwriting entities and settings XLSForm sheets")
+    custom_sheets.update({key: mandatory_sheets[key] for key in ["entities", "settings"] if key in mandatory_sheets})
+    if "entities" not in custom_sheets:
+        msg = "Entities sheet is required in XLSForm!"
+        log.error(msg)
+        raise ValueError(msg)
+    if "settings" not in custom_sheets:
+        msg = "Settings sheet is required in XLSForm!"
+        log.error(msg)
+        raise ValueError(msg)
+
+    # Set the 'version' column to the current timestamp (if 'version' column exists in 'settings')
+    xform_id = existing_id if existing_id else uuid4()
+    current_datetime = datetime.now().strftime("%Y-%m-%d %H:%M:%S")
+    log.debug(f"Setting xFormId = {xform_id} | form title = {form_category} | version = {current_datetime}")
+    custom_sheets["settings"]["version"] = current_datetime
+    custom_sheets["settings"]["form_id"] = xform_id
+    custom_sheets["settings"]["form_title"] = form_category
+
+    # Append select_one_from_file for additional entities
+    if additional_entities:
+        log.debug("Adding additional entity list reference to XLSForm")
+        for entity_name in additional_entities:
+            custom_sheets["survey"] = append_select_one_from_file_row(custom_sheets["survey"], entity_name)
+
+    # Return spreadsheet wrapped as BytesIO memory object
+    output = BytesIO()
+    with pd.ExcelWriter(output, engine="openpyxl") as writer:
+        for sheet_name, df in custom_sheets.items():
+            df.to_excel(writer, sheet_name=sheet_name, index=False)
+
+    output.seek(0)
+    return (xform_id, output)
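The settings-sheet stamping above (singularising the category, then writing `version`, `form_id`, and `form_title`) can be sketched with plain dicts. `stamp_settings` below is an illustrative stand-in of my own, not part of osm-fieldwork, using a dict in place of the pandas settings DataFrame:

```python
from datetime import datetime
from uuid import uuid4

def stamp_settings(settings: dict, form_category: str, existing_id: str = None) -> dict:
    """Illustrative stand-in (not part of osm-fieldwork) for the
    settings-sheet stamping done by append_mandatory_fields."""
    # Plural to singular, as in the function above
    if form_category.endswith("s"):
        form_category = form_category[:-1]
    settings["version"] = datetime.now().strftime("%Y-%m-%d %H:%M:%S")
    settings["form_id"] = str(existing_id) if existing_id else str(uuid4())
    settings["form_title"] = form_category
    return settings

print(stamp_settings({}, "buildings")["form_title"])  # building
```

Passing `existing_id` keeps the form_id stable across form updates, mirroring how the real function reuses an existing UUID.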
+ Last update:
+ October 18, 2024
\ No newline at end of file
diff --git a/api/yamlfile/index.html b/api/yamlfile/index.html
new file mode 100644
index 000000000..266e085b4
--- /dev/null
+++ b/api/yamlfile/index.html
@@ -0,0 +1,1957 @@
+ yamlfile - osm-fieldwork
+yamlfile.py
+ Bases: object
+
+
+
Config file in YAML format.
+
+
+
+
+
+Parameters:
+    data (str): The filespec of the YAML file to read. Required.
+
+Returns:
+    (YamlFile): An instance of this object.
+
+ Source code in osm_fieldwork/yamlfile.py
+
+def __init__(
+    self,
+    data: str,
+):
+    """This parses a YAML file into a dictionary for easy access.
+
+    Args:
+        data (str): The filespec of the YAML file to read
+
+    Returns:
+        (YamlFile): An instance of this object
+    """
+    self.filespec = data
+    self.file = open(data, "rb").read()
+    self.yaml = yaml.load(self.file, Loader=yaml.Loader)
+ privateData
+
+
+
+
+
+
+
+
See if a keyword is in the private data category.
+
+
+
+
+Parameters:
+    keyword (str): The keyword to search for. Required.
+
+Returns:
+    (bool): True if the keyword is in the private data section.
+
+ Source code in osm_fieldwork/yamlfile.py
+
+def privateData(
+    self,
+    keyword: str,
+):
+    """See if a keyword is in the private data category.
+
+    Args:
+        keyword (str): The keyword to search for
+
+    Returns:
+        (bool): True if the keyword is in the private data section
+    """
+    for value in self.yaml["private"]:
+        if keyword.lower() in value:
+            return True
+    return False
+ ignoreData
+
+
+
+
+
+
+
+
See if a keyword is in the ignore data category.
+
+
+
+
+Parameters:
+    keyword (str): The keyword to search for. Required.
+
+Returns:
+    (bool): True if the keyword is in the ignore data section.
+
+ Source code in osm_fieldwork/yamlfile.py
+
+def ignoreData(
+    self,
+    keyword: str,
+):
+    """See if a keyword is in the ignore data category.
+
+    Args:
+        keyword (str): The keyword to search for
+
+    Returns:
+        (bool): True if the keyword is in the ignore data section
+    """
+    for value in self.yaml["ignore"]:
+        if keyword.lower() in value:
+            return True
+    return False
+ convertData
+
+
+
+
+
+
+
+
See if a keyword is in the convert data category.
+
+
+
+
+Parameters:
+    keyword (str): The keyword to search for. Required.
+
+Returns:
+    (bool): True if the keyword is in the convert data section.
+
+ Source code in osm_fieldwork/yamlfile.py
+
+def convertData(
+    self,
+    keyword: str,
+):
+    """See if a keyword is in the convert data category.
+
+    Args:
+        keyword (str): The keyword to search for
+
+    Returns:
+        (bool): True if the keyword is in the convert data section
+    """
+    for value in self.yaml["convert"]:
+        if keyword.lower() in value:
+            return True
+    return False
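The three lookups above (`privateData`, `ignoreData`, `convertData`) share one pattern: substring-match a lowercased keyword against every entry of one top-level YAML list. A generic sketch of that pattern, with a plain dict standing in for the parsed `self.yaml` (and a `.get` fallback that the real methods, which index the key directly, do not have):

```python
def keyword_in_category(config: dict, category: str, keyword: str) -> bool:
    """Generic form of privateData/ignoreData/convertData: substring-match a
    lowercased keyword against every entry of one top-level YAML list.
    `config` stands in for the parsed self.yaml dictionary."""
    for value in config.get(category, []):
        if keyword.lower() in value:
            return True
    return False

config = {"private": ["income", "phone number"], "ignore": ["fixme"]}
print(keyword_in_category(config, "private", "Phone"))  # True
```

Note the match is a substring test, so "Phone" matches the "phone number" entry; an exact-match policy would need `==` instead of `in`.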
+ dump
+
+
+
+
+
+
+
+
Dump internal data structures, for debugging purposes only.
+
+
+ Source code in osm_fieldwork/yamlfile.py
+
+def dump(self):
+    """Dump internal data structures, for debugging purposes only."""
+    if self.filespec:
+        print("YAML file: %s" % self.filespec)
+    for key, values in self.yaml.items():
+        print(f"Key is: {key}")
+        for v in values:
+            if isinstance(v, dict):
+                for k1, v1 in v.items():
+                    if isinstance(v1, list):
+                        for item in v1:
+                            for i, j in item.items():
+                                print(f"\t{i} = {j}")
+                    else:
+                        print(f"\t{k1} = {v1}")
+                print("------------------")
+            else:
+                print(f"\t{v}")
+ write
+
+
+
+
+
+
+
+
Add to the YAML file.
+
+
+
+
+Parameters:
+    table (list): The database table(s) to select from. Required.
+    where (list): The tag keys used in the select and where clauses. Required.
+
+Returns:
+    (list): The modified YAML data, one line per list entry.
+
+ Source code in osm_fieldwork/yamlfile.py
+
+def write(
+    self,
+    table: list,
+    where: list,
+):
+    """Add to the YAML file.
+
+    Args:
+        table (list): The database table(s) to select from
+        where (list): The tag keys used in the select and where clauses
+
+    Returns:
+        (list): The modified YAML data
+    """
+    tab = "  "
+    yaml = ["select:", f'{tab}"osm_id": id', f"{tab}tags:"]
+    for item in where:
+        yaml.append(f"{tab}{tab}- {item}")
+    yaml.append("from:")
+    for item in table:
+        yaml.append(f"{tab}- {item}")
+    yaml.append("where:")
+    yaml.append(f"{tab}tags:")
+    notnull = f"{tab}{tab}- " + "{"
+    for item in where:
+        notnull += f"{item}: NOT NULL, "
+    notnull = notnull[:-2]
+    notnull += "}"
+    yaml.append(notnull)
+    return yaml
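The method body references a `where` list of tag keys alongside the `table` argument, so a standalone sketch of the same YAML-building logic has to take both explicitly. `build_yaml` below is an illustrative re-implementation, not the library method itself, and the two-space indent is an assumption:

```python
def build_yaml(table: list, where: list) -> list:
    """Standalone sketch of YamlFile.write(): builds the select/from/where
    YAML lines from a list of tables and a list of tag keys."""
    tab = "  "
    lines = ["select:", f'{tab}"osm_id": id', f"{tab}tags:"]
    for item in where:
        lines.append(f"{tab}{tab}- {item}")
    lines.append("from:")
    for item in table:
        lines.append(f"{tab}- {item}")
    lines.append("where:")
    lines.append(f"{tab}tags:")
    # Build a single '- {key: NOT NULL, ...}' line from the where keys
    notnull = f"{tab}{tab}- " + "{" + ", ".join(f"{item}: NOT NULL" for item in where) + "}"
    lines.append(notnull)
    return lines

print("\n".join(build_yaml(["nodes_view"], ["building"])))
```

Joining the returned list with newlines yields a small query config with `select:`, `from:`, and `where:` sections, the last constraining each tag key to `NOT NULL`.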
\ No newline at end of file
diff --git a/assets/_mkdocstrings.css b/assets/_mkdocstrings.css
new file mode 100644
index 000000000..049a254b9
--- /dev/null
+++ b/assets/_mkdocstrings.css
@@ -0,0 +1,64 @@
+
+/* Avoid breaking parameter names, etc. in table cells. */
+.doc-contents td code {
+ word-break: normal !important;
+}
+
+/* No line break before first paragraph of descriptions. */
+.doc-md-description,
+.doc-md-description>p:first-child {
+ display: inline;
+}
+
+/* Max width for docstring sections tables. */
+.doc .md-typeset__table,
+.doc .md-typeset__table table {
+ display: table !important;
+ width: 100%;
+}
+
+.doc .md-typeset__table tr {
+ display: table-row;
+}
+
+/* Defaults in Spacy table style. */
+.doc-param-default {
+ float: right;
+}
+
+/* Keep headings consistent. */
+h1.doc-heading,
+h2.doc-heading,
+h3.doc-heading,
+h4.doc-heading,
+h5.doc-heading,
+h6.doc-heading {
+ font-weight: 400;
+ line-height: 1.5;
+ color: inherit;
+ text-transform: none;
+}
+
+h1.doc-heading {
+ font-size: 1.6rem;
+}
+
+h2.doc-heading {
+ font-size: 1.2rem;
+}
+
+h3.doc-heading {
+ font-size: 1.15rem;
+}
+
+h4.doc-heading {
+ font-size: 1.10rem;
+}
+
+h5.doc-heading {
+ font-size: 1.05rem;
+}
+
+h6.doc-heading {
+ font-size: 1rem;
+}
\ No newline at end of file
diff --git a/assets/images/favicon.png b/assets/images/favicon.png
new file mode 100644
index 000000000..1cf13b9f9
Binary files /dev/null and b/assets/images/favicon.png differ
diff --git a/assets/javascripts/bundle.aecac24b.min.js b/assets/javascripts/bundle.aecac24b.min.js
new file mode 100644
index 000000000..464603d80
--- /dev/null
+++ b/assets/javascripts/bundle.aecac24b.min.js
x;return t.subscribe(r=>e.postMessage(r)),t}function an(e,t=new Worker(e)){let r=la(t),o=ma(t),n=new x;n.subscribe(o);let i=o.pipe(Z(),re(!0));return n.pipe(Z(),qe(r.pipe(Y(i))),le())}var fa=W("#__config"),vt=JSON.parse(fa.textContent);vt.base=`${new URL(vt.base,pe())}`;function me(){return vt}function te(e){return vt.features.includes(e)}function be(e,t){return typeof t!="undefined"?vt.translations[e].replace("#",t.toString()):vt.translations[e]}function Ee(e,t=document){return W(`[data-md-component=${e}]`,t)}function oe(e,t=document){return q(`[data-md-component=${e}]`,t)}function ua(e){let t=W(".md-typeset > :first-child",e);return h(t,"click",{once:!0}).pipe(m(()=>W(".md-typeset",e)),m(r=>({hash:__md_hash(r.innerHTML)})))}function sn(e){if(!te("announce.dismiss")||!e.childElementCount)return L;if(!e.hidden){let t=W(".md-typeset",e);__md_hash(t.innerHTML)===__md_get("__announce")&&(e.hidden=!0)}return H(()=>{let t=new x;return t.subscribe(({hash:r})=>{e.hidden=!0,__md_set("__announce",r)}),ua(e).pipe(w(r=>t.next(r)),A(()=>t.complete()),m(r=>R({ref:e},r)))})}function da(e,{target$:t}){return t.pipe(m(r=>({hidden:r!==e})))}function cn(e,t){let r=new x;return r.subscribe(({hidden:o})=>{e.hidden=o}),da(e,t).pipe(w(o=>r.next(o)),A(()=>r.complete()),m(o=>R({ref:e},o)))}function ha(e,t){let r=H(()=>B([No(e),dt(t)])).pipe(m(([{x:o,y:n},i])=>{let{width:s,height:a}=he(e);return{x:o-i.x+s/2,y:n-i.y+a/2}}));return Zt(e).pipe(E(o=>r.pipe(m(n=>({active:o,offset:n})),xe(+!o||1/0))))}function pn(e,t,{target$:r}){let[o,n]=Array.from(e.children);return H(()=>{let i=new x,s=i.pipe(Z(),re(!0));return 
i.subscribe({next({offset:a}){e.style.setProperty("--md-tooltip-x",`${a.x}px`),e.style.setProperty("--md-tooltip-y",`${a.y}px`)},complete(){e.style.removeProperty("--md-tooltip-x"),e.style.removeProperty("--md-tooltip-y")}}),rr(e).pipe(Y(s)).subscribe(a=>{e.toggleAttribute("data-md-visible",a)}),_(i.pipe(M(({active:a})=>a)),i.pipe(ke(250),M(({active:a})=>!a))).subscribe({next({active:a}){a?e.prepend(o):o.remove()},complete(){e.prepend(o)}}),i.pipe(Ce(16,Oe)).subscribe(({active:a})=>{o.classList.toggle("md-tooltip--active",a)}),i.pipe(Pr(125,Oe),M(()=>!!e.offsetParent),m(()=>e.offsetParent.getBoundingClientRect()),m(({x:a})=>a)).subscribe({next(a){a?e.style.setProperty("--md-tooltip-0",`${-a}px`):e.style.removeProperty("--md-tooltip-0")},complete(){e.style.removeProperty("--md-tooltip-0")}}),h(n,"click").pipe(Y(s),M(a=>!(a.metaKey||a.ctrlKey))).subscribe(a=>{a.stopPropagation(),a.preventDefault()}),h(n,"mousedown").pipe(Y(s),ne(i)).subscribe(([a,{active:c}])=>{var p;if(a.button!==0||a.metaKey||a.ctrlKey)a.preventDefault();else if(c){a.preventDefault();let l=e.parentElement.closest(".md-annotation");l instanceof HTMLElement?l.focus():(p=Re())==null||p.blur()}}),r.pipe(Y(s),M(a=>a===o),ze(125)).subscribe(()=>e.focus()),ha(e,t).pipe(w(a=>i.next(a)),A(()=>i.complete()),m(a=>R({ref:e},a)))})}function Wr(e){return T("div",{class:"md-tooltip",id:e},T("div",{class:"md-tooltip__inner md-typeset"}))}function ln(e,t){if(t=t?`${t}_annotation_${e}`:void 0,t){let r=t?`#${t}`:void 0;return T("aside",{class:"md-annotation",tabIndex:0},Wr(t),T("a",{href:r,class:"md-annotation__index",tabIndex:-1},T("span",{"data-md-annotation-id":e})))}else return T("aside",{class:"md-annotation",tabIndex:0},Wr(t),T("span",{class:"md-annotation__index",tabIndex:-1},T("span",{"data-md-annotation-id":e})))}function mn(e){return T("button",{class:"md-clipboard md-icon",title:be("clipboard.copy"),"data-clipboard-target":`#${e} > code`})}function Ur(e,t){let 
r=t&2,o=t&1,n=Object.keys(e.terms).filter(c=>!e.terms[c]).reduce((c,p)=>[...c,T("del",null,p)," "],[]).slice(0,-1),i=me(),s=new URL(e.location,i.base);te("search.highlight")&&s.searchParams.set("h",Object.entries(e.terms).filter(([,c])=>c).reduce((c,[p])=>`${c} ${p}`.trim(),""));let{tags:a}=me();return T("a",{href:`${s}`,class:"md-search-result__link",tabIndex:-1},T("article",{class:"md-search-result__article md-typeset","data-md-score":e.score.toFixed(2)},r>0&&T("div",{class:"md-search-result__icon md-icon"}),r>0&&T("h1",null,e.title),r<=0&&T("h2",null,e.title),o>0&&e.text.length>0&&e.text,e.tags&&e.tags.map(c=>{let p=a?c in a?`md-tag-icon md-tag--${a[c]}`:"md-tag-icon":"";return T("span",{class:`md-tag ${p}`},c)}),o>0&&n.length>0&&T("p",{class:"md-search-result__terms"},be("search.result.term.missing"),": ",...n)))}function fn(e){let t=e[0].score,r=[...e],o=me(),n=r.findIndex(l=>!`${new URL(l.location,o.base)}`.includes("#")),[i]=r.splice(n,1),s=r.findIndex(l=>l.scoreUr(l,1)),...c.length?[T("details",{class:"md-search-result__more"},T("summary",{tabIndex:-1},T("div",null,c.length>0&&c.length===1?be("search.result.more.one"):be("search.result.more.other",c.length))),...c.map(l=>Ur(l,1)))]:[]];return T("li",{class:"md-search-result__item"},p)}function un(e){return T("ul",{class:"md-source__facts"},Object.entries(e).map(([t,r])=>T("li",{class:`md-source__fact md-source__fact--${t}`},typeof r=="number"?tr(r):r)))}function Nr(e){let t=`tabbed-control tabbed-control--${e}`;return T("div",{class:t,hidden:!0},T("button",{class:"tabbed-button",tabIndex:-1,"aria-hidden":"true"}))}function dn(e){return T("div",{class:"md-typeset__scrollwrap"},T("div",{class:"md-typeset__table"},e))}function ba(e){let t=me(),r=new URL(`../${e.version}/`,t.base);return T("li",{class:"md-version__item"},T("a",{href:`${r}`,class:"md-version__link"},e.title))}function hn(e,t){return 
T("div",{class:"md-version"},T("button",{class:"md-version__current","aria-label":be("select.version")},t.title),T("ul",{class:"md-version__list"},e.map(ba)))}function va(e){return e.tagName==="CODE"?q(".c, .c1, .cm",e):[e]}function ga(e){let t=[];for(let r of va(e)){let o=[],n=document.createNodeIterator(r,NodeFilter.SHOW_TEXT);for(let i=n.nextNode();i;i=n.nextNode())o.push(i);for(let i of o){let s;for(;s=/(\(\d+\))(!)?/.exec(i.textContent);){let[,a,c]=s;if(typeof c=="undefined"){let p=i.splitText(s.index);i=p.splitText(a.length),t.push(p)}else{i.textContent=a,t.push(i);break}}}}return t}function bn(e,t){t.append(...Array.from(e.childNodes))}function sr(e,t,{target$:r,print$:o}){let n=t.closest("[id]"),i=n==null?void 0:n.id,s=new Map;for(let a of ga(t)){let[,c]=a.textContent.match(/\((\d+)\)/);ce(`:scope > li:nth-child(${c})`,e)&&(s.set(c,ln(c,i)),a.replaceWith(s.get(c)))}return s.size===0?L:H(()=>{let a=new x,c=a.pipe(Z(),re(!0)),p=[];for(let[l,f]of s)p.push([W(".md-typeset",f),W(`:scope > li:nth-child(${l})`,e)]);return o.pipe(Y(c)).subscribe(l=>{e.hidden=!l,e.classList.toggle("md-annotation-list",l);for(let[f,u]of p)l?bn(f,u):bn(u,f)}),_(...[...s].map(([,l])=>pn(l,t,{target$:r}))).pipe(A(()=>a.complete()),le())})}function vn(e){if(e.nextElementSibling){let t=e.nextElementSibling;if(t.tagName==="OL")return t;if(t.tagName==="P"&&!t.children.length)return vn(t)}}function gn(e,t){return H(()=>{let r=vn(e);return typeof r!="undefined"?sr(r,e,t):L})}var yn=Ht(Vr());var xa=0;function En(e){if(e.nextElementSibling){let t=e.nextElementSibling;if(t.tagName==="OL")return t;if(t.tagName==="P"&&!t.children.length)return En(t)}}function xn(e){return ye(e).pipe(m(({width:t})=>({scrollable:bt(e).width>t})),ee("scrollable"))}function wn(e,t){let{matches:r}=matchMedia("(hover)"),o=H(()=>{let n=new 
x;if(n.subscribe(({scrollable:s})=>{s&&r?e.setAttribute("tabindex","0"):e.removeAttribute("tabindex")}),yn.default.isSupported()&&(e.closest(".copy")||te("content.code.copy")&&!e.closest(".no-copy"))){let s=e.closest("pre");s.id=`__code_${xa++}`,s.insertBefore(mn(s.id),e)}let i=e.closest(".highlight");if(i instanceof HTMLElement){let s=En(i);if(typeof s!="undefined"&&(i.classList.contains("annotate")||te("content.code.annotate"))){let a=sr(s,e,t);return xn(e).pipe(w(c=>n.next(c)),A(()=>n.complete()),m(c=>R({ref:e},c)),qe(ye(i).pipe(m(({width:c,height:p})=>c&&p),X(),E(c=>c?a:L))))}}return xn(e).pipe(w(s=>n.next(s)),A(()=>n.complete()),m(s=>R({ref:e},s)))});return te("content.lazy")?rr(e).pipe(M(n=>n),xe(1),E(()=>o)):o}function ya(e,{target$:t,print$:r}){let o=!0;return _(t.pipe(m(n=>n.closest("details:not([open])")),M(n=>e===n),m(()=>({action:"open",reveal:!0}))),r.pipe(M(n=>n||!o),w(()=>o=e.open),m(n=>({action:n?"open":"close"}))))}function Sn(e,t){return H(()=>{let r=new x;return r.subscribe(({action:o,reveal:n})=>{e.toggleAttribute("open",o==="open"),n&&e.scrollIntoView()}),ya(e,t).pipe(w(o=>r.next(o)),A(()=>r.complete()),m(o=>R({ref:e},o)))})}var Tn=".node circle,.node ellipse,.node path,.node polygon,.node rect{fill:var(--md-mermaid-node-bg-color);stroke:var(--md-mermaid-node-fg-color)}marker{fill:var(--md-mermaid-edge-color)!important}.edgeLabel .label rect{fill:#0000}.label{color:var(--md-mermaid-label-fg-color);font-family:var(--md-mermaid-font-family)}.label foreignObject{line-height:normal;overflow:visible}.label div .edgeLabel{color:var(--md-mermaid-label-fg-color)}.edgeLabel,.edgeLabel rect,.label div .edgeLabel{background-color:var(--md-mermaid-label-bg-color)}.edgeLabel,.edgeLabel rect{fill:var(--md-mermaid-label-bg-color);color:var(--md-mermaid-edge-color)}.edgePath .path,.flowchart-link{stroke:var(--md-mermaid-edge-color);stroke-width:.05rem}.edgePath .arrowheadPath{fill:var(--md-mermaid-edge-color);stroke:none}.cluster 
rect{fill:var(--md-default-fg-color--lightest);stroke:var(--md-default-fg-color--lighter)}.cluster span{color:var(--md-mermaid-label-fg-color);font-family:var(--md-mermaid-font-family)}g #flowchart-circleEnd,g #flowchart-circleStart,g #flowchart-crossEnd,g #flowchart-crossStart,g #flowchart-pointEnd,g #flowchart-pointStart{stroke:none}g.classGroup line,g.classGroup rect{fill:var(--md-mermaid-node-bg-color);stroke:var(--md-mermaid-node-fg-color)}g.classGroup text{fill:var(--md-mermaid-label-fg-color);font-family:var(--md-mermaid-font-family)}.classLabel .box{fill:var(--md-mermaid-label-bg-color);background-color:var(--md-mermaid-label-bg-color);opacity:1}.classLabel .label{fill:var(--md-mermaid-label-fg-color);font-family:var(--md-mermaid-font-family)}.node .divider{stroke:var(--md-mermaid-node-fg-color)}.relation{stroke:var(--md-mermaid-edge-color)}.cardinality{fill:var(--md-mermaid-label-fg-color);font-family:var(--md-mermaid-font-family)}.cardinality text{fill:inherit!important}defs #classDiagram-compositionEnd,defs #classDiagram-compositionStart,defs #classDiagram-dependencyEnd,defs #classDiagram-dependencyStart,defs #classDiagram-extensionEnd,defs #classDiagram-extensionStart{fill:var(--md-mermaid-edge-color)!important;stroke:var(--md-mermaid-edge-color)!important}defs #classDiagram-aggregationEnd,defs #classDiagram-aggregationStart{fill:var(--md-mermaid-label-bg-color)!important;stroke:var(--md-mermaid-edge-color)!important}g.stateGroup rect{fill:var(--md-mermaid-node-bg-color);stroke:var(--md-mermaid-node-fg-color)}g.stateGroup .state-title{fill:var(--md-mermaid-label-fg-color)!important;font-family:var(--md-mermaid-font-family)}g.stateGroup .composit{fill:var(--md-mermaid-label-bg-color)}.nodeLabel{color:var(--md-mermaid-label-fg-color);font-family:var(--md-mermaid-font-family)}.node circle.state-end,.node 
circle.state-start,.start-state{fill:var(--md-mermaid-edge-color);stroke:none}.end-state-inner,.end-state-outer{fill:var(--md-mermaid-edge-color)}.end-state-inner,.node circle.state-end{stroke:var(--md-mermaid-label-bg-color)}.transition{stroke:var(--md-mermaid-edge-color)}[id^=state-fork] rect,[id^=state-join] rect{fill:var(--md-mermaid-edge-color)!important;stroke:none!important}.statediagram-cluster.statediagram-cluster .inner{fill:var(--md-default-bg-color)}.statediagram-cluster rect{fill:var(--md-mermaid-node-bg-color);stroke:var(--md-mermaid-node-fg-color)}.statediagram-state rect.divider{fill:var(--md-default-fg-color--lightest);stroke:var(--md-default-fg-color--lighter)}defs #statediagram-barbEnd{stroke:var(--md-mermaid-edge-color)}.attributeBoxEven,.attributeBoxOdd{fill:var(--md-mermaid-node-bg-color);stroke:var(--md-mermaid-node-fg-color)}.entityBox{fill:var(--md-mermaid-label-bg-color);stroke:var(--md-mermaid-node-fg-color)}.entityLabel{fill:var(--md-mermaid-label-fg-color);font-family:var(--md-mermaid-font-family)}.relationshipLabelBox{fill:var(--md-mermaid-label-bg-color);fill-opacity:1;background-color:var(--md-mermaid-label-bg-color);opacity:1}.relationshipLabel{fill:var(--md-mermaid-label-fg-color)}.relationshipLine{stroke:var(--md-mermaid-edge-color)}defs #ONE_OR_MORE_END *,defs #ONE_OR_MORE_START *,defs #ONLY_ONE_END *,defs #ONLY_ONE_START *,defs #ZERO_OR_MORE_END *,defs #ZERO_OR_MORE_START *,defs #ZERO_OR_ONE_END *,defs #ZERO_OR_ONE_START *{stroke:var(--md-mermaid-edge-color)!important}defs #ZERO_OR_MORE_END circle,defs #ZERO_OR_MORE_START circle{fill:var(--md-mermaid-label-bg-color)}.actor{fill:var(--md-mermaid-sequence-actor-bg-color);stroke:var(--md-mermaid-sequence-actor-border-color)}text.actor>tspan{fill:var(--md-mermaid-sequence-actor-fg-color);font-family:var(--md-mermaid-font-family)}line{stroke:var(--md-mermaid-sequence-actor-line-color)}.actor-man circle,.actor-man 
line{fill:var(--md-mermaid-sequence-actorman-bg-color);stroke:var(--md-mermaid-sequence-actorman-line-color)}.messageLine0,.messageLine1{stroke:var(--md-mermaid-sequence-message-line-color)}.note{fill:var(--md-mermaid-sequence-note-bg-color);stroke:var(--md-mermaid-sequence-note-border-color)}.loopText,.loopText>tspan,.messageText,.noteText>tspan{stroke:none;font-family:var(--md-mermaid-font-family)!important}.messageText{fill:var(--md-mermaid-sequence-message-fg-color)}.loopText,.loopText>tspan{fill:var(--md-mermaid-sequence-loop-fg-color)}.noteText>tspan{fill:var(--md-mermaid-sequence-note-fg-color)}#arrowhead path{fill:var(--md-mermaid-sequence-message-line-color);stroke:none}.loopLine{fill:var(--md-mermaid-sequence-loop-bg-color);stroke:var(--md-mermaid-sequence-loop-border-color)}.labelBox{fill:var(--md-mermaid-sequence-label-bg-color);stroke:none}.labelText,.labelText>span{fill:var(--md-mermaid-sequence-label-fg-color);font-family:var(--md-mermaid-font-family)}.sequenceNumber{fill:var(--md-mermaid-sequence-number-fg-color)}rect.rect{fill:var(--md-mermaid-sequence-box-bg-color);stroke:none}rect.rect+text.text{fill:var(--md-mermaid-sequence-box-fg-color)}defs #sequencenumber{fill:var(--md-mermaid-sequence-number-bg-color)!important}";var zr,wa=0;function Sa(){return typeof mermaid=="undefined"||mermaid instanceof Element?ht("https://unpkg.com/mermaid@9.4.3/dist/mermaid.min.js"):j(void 0)}function On(e){return e.classList.remove("mermaid"),zr||(zr=Sa().pipe(w(()=>mermaid.initialize({startOnLoad:!1,themeCSS:Tn,sequence:{actorFontSize:"16px",messageFontSize:"16px",noteFontSize:"16px"}})),m(()=>{}),J(1))),zr.subscribe(()=>{e.classList.add("mermaid");let t=`__mermaid_${wa++}`,r=T("div",{class:"mermaid"}),o=e.textContent;mermaid.mermaidAPI.render(t,o,(n,i)=>{let s=r.attachShadow({mode:"closed"});s.innerHTML=n,e.replaceWith(r),i==null||i(s)})}),zr.pipe(m(()=>({ref:e})))}var Mn=T("table");function Ln(e){return e.replaceWith(Mn),Mn.replaceWith(dn(e)),j({ref:e})}function 
Ta(e){let t=q(":scope > input",e),r=t.find(o=>o.checked)||t[0];return _(...t.map(o=>h(o,"change").pipe(m(()=>W(`label[for="${o.id}"]`))))).pipe(V(W(`label[for="${r.id}"]`)),m(o=>({active:o})))}function _n(e,{viewport$:t}){let r=Nr("prev");e.append(r);let o=Nr("next");e.append(o);let n=W(".tabbed-labels",e);return H(()=>{let i=new x,s=i.pipe(Z(),re(!0));return B([i,ye(e)]).pipe(Ce(1,Oe),Y(s)).subscribe({next([{active:a},c]){let p=Je(a),{width:l}=he(a);e.style.setProperty("--md-indicator-x",`${p.x}px`),e.style.setProperty("--md-indicator-width",`${l}px`);let f=er(n);(p.xf.x+c.width)&&n.scrollTo({left:Math.max(0,p.x-16),behavior:"smooth"})},complete(){e.style.removeProperty("--md-indicator-x"),e.style.removeProperty("--md-indicator-width")}}),B([dt(n),ye(n)]).pipe(Y(s)).subscribe(([a,c])=>{let p=bt(n);r.hidden=a.x<16,o.hidden=a.x>p.width-c.width-16}),_(h(r,"click").pipe(m(()=>-1)),h(o,"click").pipe(m(()=>1))).pipe(Y(s)).subscribe(a=>{let{width:c}=he(n);n.scrollBy({left:c*a,behavior:"smooth"})}),te("content.tabs.link")&&i.pipe(je(1),ne(t)).subscribe(([{active:a},{offset:c}])=>{let p=a.innerText.trim();if(a.hasAttribute("data-md-switching"))a.removeAttribute("data-md-switching");else{let l=e.offsetTop-c.y;for(let u of q("[data-tabs]"))for(let d of q(":scope > input",u)){let v=W(`label[for="${d.id}"]`);if(v!==a&&v.innerText.trim()===p){v.setAttribute("data-md-switching",""),d.click();break}}window.scrollTo({top:e.offsetTop-l});let f=__md_get("__tabs")||[];__md_set("__tabs",[...new Set([p,...f])])}}),i.pipe(Y(s)).subscribe(()=>{for(let a of q("audio, video",e))a.pause()}),Ta(e).pipe(w(a=>i.next(a)),A(()=>i.complete()),m(a=>R({ref:e},a)))}).pipe(rt(ae))}function An(e,{viewport$:t,target$:r,print$:o}){return _(...q(".annotate:not(.highlight)",e).map(n=>gn(n,{target$:r,print$:o})),...q("pre:not(.mermaid) > 
code",e).map(n=>wn(n,{target$:r,print$:o})),...q("pre.mermaid",e).map(n=>On(n)),...q("table:not([class])",e).map(n=>Ln(n)),...q("details",e).map(n=>Sn(n,{target$:r,print$:o})),...q("[data-tabs]",e).map(n=>_n(n,{viewport$:t})))}function Oa(e,{alert$:t}){return t.pipe(E(r=>_(j(!0),j(!1).pipe(ze(2e3))).pipe(m(o=>({message:r,active:o})))))}function Cn(e,t){let r=W(".md-typeset",e);return H(()=>{let o=new x;return o.subscribe(({message:n,active:i})=>{e.classList.toggle("md-dialog--active",i),r.textContent=n}),Oa(e,t).pipe(w(n=>o.next(n)),A(()=>o.complete()),m(n=>R({ref:e},n)))})}function Ma({viewport$:e}){if(!te("header.autohide"))return j(!1);let t=e.pipe(m(({offset:{y:n}})=>n),Le(2,1),m(([n,i])=>[nMath.abs(i-n.y)>100),m(([,[n]])=>n),X()),o=We("search");return B([e,o]).pipe(m(([{offset:n},i])=>n.y>400&&!i),X(),E(n=>n?r:j(!1)),V(!1))}function kn(e,t){return H(()=>B([ye(e),Ma(t)])).pipe(m(([{height:r},o])=>({height:r,hidden:o})),X((r,o)=>r.height===o.height&&r.hidden===o.hidden),J(1))}function Hn(e,{header$:t,main$:r}){return H(()=>{let o=new x,n=o.pipe(Z(),re(!0));return o.pipe(ee("active"),Ge(t)).subscribe(([{active:i},{hidden:s}])=>{e.classList.toggle("md-header--shadow",i&&!s),e.hidden=s}),r.subscribe(o),t.pipe(Y(n),m(i=>R({ref:e},i)))})}function La(e,{viewport$:t,header$:r}){return ar(e,{viewport$:t,header$:r}).pipe(m(({offset:{y:o}})=>{let{height:n}=he(e);return{active:o>=n}}),ee("active"))}function $n(e,t){return H(()=>{let r=new x;r.subscribe({next({active:n}){e.classList.toggle("md-header__title--active",n)},complete(){e.classList.remove("md-header__title--active")}});let o=ce(".md-content h1");return typeof o=="undefined"?L:La(o,t).pipe(w(n=>r.next(n)),A(()=>r.complete()),m(n=>R({ref:e},n)))})}function Rn(e,{viewport$:t,header$:r}){let o=r.pipe(m(({height:i})=>i),X()),n=o.pipe(E(()=>ye(e).pipe(m(({height:i})=>({top:e.offsetTop,bottom:e.offsetTop+i})),ee("bottom"))));return 
B([o,n,t]).pipe(m(([i,{top:s,bottom:a},{offset:{y:c},size:{height:p}}])=>(p=Math.max(0,p-Math.max(0,s-c,i)-Math.max(0,p+c-a)),{offset:s-i,height:p,active:s-i<=c})),X((i,s)=>i.offset===s.offset&&i.height===s.height&&i.active===s.active))}function _a(e){let t=__md_get("__palette")||{index:e.findIndex(r=>matchMedia(r.getAttribute("data-md-color-media")).matches)};return j(...e).pipe(se(r=>h(r,"change").pipe(m(()=>r))),V(e[Math.max(0,t.index)]),m(r=>({index:e.indexOf(r),color:{scheme:r.getAttribute("data-md-color-scheme"),primary:r.getAttribute("data-md-color-primary"),accent:r.getAttribute("data-md-color-accent")}})),J(1))}function Pn(e){let t=T("meta",{name:"theme-color"});document.head.appendChild(t);let r=T("meta",{name:"color-scheme"});return document.head.appendChild(r),H(()=>{let o=new x;o.subscribe(i=>{document.body.setAttribute("data-md-color-switching","");for(let[s,a]of Object.entries(i.color))document.body.setAttribute(`data-md-color-${s}`,a);for(let s=0;s{let i=Ee("header"),s=window.getComputedStyle(i);return r.content=s.colorScheme,s.backgroundColor.match(/\d+/g).map(a=>(+a).toString(16).padStart(2,"0")).join("")})).subscribe(i=>t.content=`#${i}`),o.pipe(Se(ae)).subscribe(()=>{document.body.removeAttribute("data-md-color-switching")});let n=q("input",e);return _a(n).pipe(w(i=>o.next(i)),A(()=>o.complete()),m(i=>R({ref:e},i)))})}function In(e,{progress$:t}){return H(()=>{let r=new x;return r.subscribe(({value:o})=>{e.style.setProperty("--md-progress-value",`${o}`)}),t.pipe(w(o=>r.next({value:o})),A(()=>r.complete()),m(o=>({ref:e,value:o})))})}var qr=Ht(Vr());function Aa(e){e.setAttribute("data-md-copying","");let t=e.closest("[data-copy]"),r=t?t.getAttribute("data-copy"):e.innerText;return e.removeAttribute("data-md-copying"),r}function Fn({alert$:e}){qr.default.isSupported()&&new P(t=>{new qr.default("[data-clipboard-target], 
[data-clipboard-text]",{text:r=>r.getAttribute("data-clipboard-text")||Aa(W(r.getAttribute("data-clipboard-target")))}).on("success",r=>t.next(r))}).pipe(w(t=>{t.trigger.focus()}),m(()=>be("clipboard.copied"))).subscribe(e)}function Ca(e){if(e.length<2)return[""];let[t,r]=[...e].sort((n,i)=>n.length-i.length).map(n=>n.replace(/[^/]+$/,"")),o=0;if(t===r)o=t.length;else for(;t.charCodeAt(o)===r.charCodeAt(o);)o++;return e.map(n=>n.replace(t.slice(0,o),""))}function cr(e){let t=__md_get("__sitemap",sessionStorage,e);if(t)return j(t);{let r=me();return Zo(new URL("sitemap.xml",e||r.base)).pipe(m(o=>Ca(q("loc",o).map(n=>n.textContent))),de(()=>L),He([]),w(o=>__md_set("__sitemap",o,sessionStorage,e)))}}function jn(e){let t=W("[rel=canonical]",e);t.href=t.href.replace("//localhost:","//127.0.0.1");let r=new Map;for(let o of q(":scope > *",e)){let n=o.outerHTML;for(let i of["href","src"]){let s=o.getAttribute(i);if(s===null)continue;let a=new URL(s,t.href),c=o.cloneNode();c.setAttribute(i,`${a}`),n=c.outerHTML;break}r.set(n,o)}return r}function Wn({location$:e,viewport$:t,progress$:r}){let o=me();if(location.protocol==="file:")return L;let n=cr().pipe(m(l=>l.map(f=>`${new URL(f,o.base)}`))),i=h(document.body,"click").pipe(ne(n),E(([l,f])=>{if(!(l.target instanceof Element))return L;let u=l.target.closest("a");if(u===null)return L;if(u.target||l.metaKey||l.ctrlKey)return L;let d=new URL(u.href);return d.search=d.hash="",f.includes(`${d}`)?(l.preventDefault(),j(new URL(u.href))):L}),le());i.pipe(xe(1)).subscribe(()=>{let l=ce("link[rel=icon]");typeof l!="undefined"&&(l.href=l.href)}),h(window,"beforeunload").subscribe(()=>{history.scrollRestoration="auto"}),i.pipe(ne(t)).subscribe(([l,{offset:f}])=>{history.scrollRestoration="manual",history.replaceState(f,""),history.pushState(null,"",l)}),i.subscribe(e);let s=e.pipe(V(pe()),ee("pathname"),je(1),E(l=>ir(l,{progress$:r}).pipe(de(()=>(ot(l,!0),L))))),a=new DOMParser,c=s.pipe(E(l=>l.text()),E(l=>{let 
f=a.parseFromString(l,"text/html");for(let b of["[data-md-component=announce]","[data-md-component=container]","[data-md-component=header-topic]","[data-md-component=outdated]","[data-md-component=logo]","[data-md-component=skip]",...te("navigation.tabs.sticky")?["[data-md-component=tabs]"]:[]]){let z=ce(b),K=ce(b,f);typeof z!="undefined"&&typeof K!="undefined"&&z.replaceWith(K)}let u=jn(document.head),d=jn(f.head);for(let[b,z]of d)z.getAttribute("rel")==="stylesheet"||z.hasAttribute("src")||(u.has(b)?u.delete(b):document.head.appendChild(z));for(let b of u.values())b.getAttribute("rel")==="stylesheet"||b.hasAttribute("src")||b.remove();let v=Ee("container");return Fe(q("script",v)).pipe(E(b=>{let z=f.createElement("script");if(b.src){for(let K of b.getAttributeNames())z.setAttribute(K,b.getAttribute(K));return b.replaceWith(z),new P(K=>{z.onload=()=>K.complete()})}else return z.textContent=b.textContent,b.replaceWith(z),L}),Z(),re(f))}),le());return h(window,"popstate").pipe(m(pe)).subscribe(e),e.pipe(V(pe()),Le(2,1),M(([l,f])=>l.pathname===f.pathname&&l.hash!==f.hash),m(([,l])=>l)).subscribe(l=>{var f,u;history.state!==null||!l.hash?window.scrollTo(0,(u=(f=history.state)==null?void 0:f.y)!=null?u:0):(history.scrollRestoration="auto",nr(l.hash),history.scrollRestoration="manual")}),e.pipe(Cr(i),V(pe()),Le(2,1),M(([l,f])=>l.pathname===f.pathname&&l.hash===f.hash),m(([,l])=>l)).subscribe(l=>{history.scrollRestoration="auto",nr(l.hash),history.scrollRestoration="manual",history.back()}),c.pipe(ne(e)).subscribe(([,l])=>{var f,u;history.state!==null||!l.hash?window.scrollTo(0,(u=(f=history.state)==null?void 0:f.y)!=null?u:0):nr(l.hash)}),t.pipe(ee("offset"),ke(100)).subscribe(({offset:l})=>{history.replaceState(l,"")}),c}var Dn=Ht(Nn());function Vn(e){let t=e.separator.split("|").map(n=>n.replace(/(\(\?[!=<][^)]+\))/g,"").length===0?"\uFFFD":n).join("|"),r=new RegExp(t,"img"),o=(n,i,s)=>`${i}${s} `;return n=>{n=n.replace(/[\s*+\-:~^]+/g," ").trim();let i=new 
RegExp(`(^|${e.separator}|)(${n.replace(/[|\\{}()[\]^$+*?.-]/g,"\\$&").replace(r,"|")})`,"img");return s=>(0,Dn.default)(s).replace(i,o).replace(/<\/mark>(\s+)]*>/img,"$1")}}function Mt(e){return e.type===1}function pr(e){return e.type===3}function zn(e,t){let r=an(e);return _(j(location.protocol!=="file:"),We("search")).pipe($e(o=>o),E(()=>t)).subscribe(({config:o,docs:n})=>r.next({type:0,data:{config:o,docs:n,options:{suggest:te("search.suggest")}}})),r}function qn({document$:e}){let t=me(),r=Ue(new URL("../versions.json",t.base)).pipe(de(()=>L)),o=r.pipe(m(n=>{let[,i]=t.base.match(/([^/]+)\/?$/);return n.find(({version:s,aliases:a})=>s===i||a.includes(i))||n[0]}));r.pipe(m(n=>new Map(n.map(i=>[`${new URL(`../${i.version}/`,t.base)}`,i]))),E(n=>h(document.body,"click").pipe(M(i=>!i.metaKey&&!i.ctrlKey),ne(o),E(([i,s])=>{if(i.target instanceof Element){let a=i.target.closest("a");if(a&&!a.target&&n.has(a.href)){let c=a.href;return!i.target.closest(".md-version")&&n.get(c)===s?L:(i.preventDefault(),j(c))}}return L}),E(i=>{let{version:s}=n.get(i);return cr(new URL(i)).pipe(m(a=>{let p=pe().href.replace(t.base,"");return a.includes(p.split("#")[0])?new URL(`../${s}/${p}`,t.base):new URL(i)}))})))).subscribe(n=>ot(n,!0)),B([r,o]).subscribe(([n,i])=>{W(".md-header__topic").appendChild(hn(n,i))}),e.pipe(E(()=>o)).subscribe(n=>{var s;let i=__md_get("__outdated",sessionStorage);if(i===null){i=!0;let a=((s=t.version)==null?void 0:s.default)||"latest";Array.isArray(a)||(a=[a]);e:for(let c of a)for(let p of n.aliases)if(new RegExp(c,"i").test(p)){i=!1;break e}__md_set("__outdated",i,sessionStorage)}if(i)for(let a of oe("outdated"))a.hidden=!1})}function Pa(e,{worker$:t}){let{searchParams:r}=pe();r.has("q")&&(Ke("search",!0),e.value=r.get("q"),e.focus(),We("search").pipe($e(i=>!i)).subscribe(()=>{let i=pe();i.searchParams.delete("q"),history.replaceState({},"",`${i}`)}));let o=Zt(e),n=_(t.pipe($e(Mt)),h(e,"keyup"),o).pipe(m(()=>e.value),X());return 
B([n,o]).pipe(m(([i,s])=>({value:i,focus:s})),J(1))}function Kn(e,{worker$:t}){let r=new x,o=r.pipe(Z(),re(!0));B([t.pipe($e(Mt)),r],(i,s)=>s).pipe(ee("value")).subscribe(({value:i})=>t.next({type:2,data:i})),r.pipe(ee("focus")).subscribe(({focus:i})=>{i&&Ke("search",i)}),h(e.form,"reset").pipe(Y(o)).subscribe(()=>e.focus());let n=W("header [for=__search]");return h(n,"click").subscribe(()=>e.focus()),Pa(e,{worker$:t}).pipe(w(i=>r.next(i)),A(()=>r.complete()),m(i=>R({ref:e},i)),J(1))}function Qn(e,{worker$:t,query$:r}){let o=new x,n=Ko(e.parentElement).pipe(M(Boolean)),i=e.parentElement,s=W(":scope > :first-child",e),a=W(":scope > :last-child",e);We("search").subscribe(l=>a.setAttribute("role",l?"list":"presentation")),o.pipe(ne(r),$r(t.pipe($e(Mt)))).subscribe(([{items:l},{value:f}])=>{switch(l.length){case 0:s.textContent=f.length?be("search.result.none"):be("search.result.placeholder");break;case 1:s.textContent=be("search.result.one");break;default:let u=tr(l.length);s.textContent=be("search.result.other",u)}});let c=o.pipe(w(()=>a.innerHTML=""),E(({items:l})=>_(j(...l.slice(0,10)),j(...l.slice(10)).pipe(Le(4),Ir(n),E(([f])=>f)))),m(fn),le());return c.subscribe(l=>a.appendChild(l)),c.pipe(se(l=>{let f=ce("details",l);return typeof f=="undefined"?L:h(f,"toggle").pipe(Y(o),m(()=>f))})).subscribe(l=>{l.open===!1&&l.offsetTop<=i.scrollTop&&i.scrollTo({top:l.offsetTop})}),t.pipe(M(pr),m(({data:l})=>l)).pipe(w(l=>o.next(l)),A(()=>o.complete()),m(l=>R({ref:e},l)))}function Ia(e,{query$:t}){return t.pipe(m(({value:r})=>{let o=pe();return o.hash="",r=r.replace(/\s+/g,"+").replace(/&/g,"%26").replace(/=/g,"%3D"),o.search=`q=${r}`,{url:o}}))}function Yn(e,t){let r=new x,o=r.pipe(Z(),re(!0));return r.subscribe(({url:n})=>{e.setAttribute("data-clipboard-text",e.href),e.href=`${n}`}),h(e,"click").pipe(Y(o)).subscribe(n=>n.preventDefault()),Ia(e,t).pipe(w(n=>r.next(n)),A(()=>r.complete()),m(n=>R({ref:e},n)))}function Bn(e,{worker$:t,keyboard$:r}){let o=new 
x,n=Ee("search-query"),i=_(h(n,"keydown"),h(n,"focus")).pipe(Se(ae),m(()=>n.value),X());return o.pipe(Ge(i),m(([{suggest:a},c])=>{let p=c.split(/([\s-]+)/);if(a!=null&&a.length&&p[p.length-1]){let l=a[a.length-1];l.startsWith(p[p.length-1])&&(p[p.length-1]=l)}else p.length=0;return p})).subscribe(a=>e.innerHTML=a.join("").replace(/\s/g," ")),r.pipe(M(({mode:a})=>a==="search")).subscribe(a=>{switch(a.type){case"ArrowRight":e.innerText.length&&n.selectionStart===n.value.length&&(n.value=e.innerText);break}}),t.pipe(M(pr),m(({data:a})=>a)).pipe(w(a=>o.next(a)),A(()=>o.complete()),m(()=>({ref:e})))}function Gn(e,{index$:t,keyboard$:r}){let o=me();try{let n=zn(o.search,t),i=Ee("search-query",e),s=Ee("search-result",e);h(e,"click").pipe(M(({target:c})=>c instanceof Element&&!!c.closest("a"))).subscribe(()=>Ke("search",!1)),r.pipe(M(({mode:c})=>c==="search")).subscribe(c=>{let p=Re();switch(c.type){case"Enter":if(p===i){let l=new Map;for(let f of q(":first-child [href]",s)){let u=f.firstElementChild;l.set(f,parseFloat(u.getAttribute("data-md-score")))}if(l.size){let[[f]]=[...l].sort(([,u],[,d])=>d-u);f.click()}c.claim()}break;case"Escape":case"Tab":Ke("search",!1),i.blur();break;case"ArrowUp":case"ArrowDown":if(typeof p=="undefined")i.focus();else{let l=[i,...q(":not(details) > [href], summary, details[open] [href]",s)],f=Math.max(0,(Math.max(0,l.indexOf(p))+l.length+(c.type==="ArrowUp"?-1:1))%l.length);l[f].focus()}c.claim();break;default:i!==Re()&&i.focus()}}),r.pipe(M(({mode:c})=>c==="global")).subscribe(c=>{switch(c.type){case"f":case"s":case"/":i.focus(),i.select(),c.claim();break}});let a=Kn(i,{worker$:n});return _(a,Qn(s,{worker$:n,query$:a})).pipe(qe(...oe("search-share",e).map(c=>Yn(c,{query$:a})),...oe("search-suggest",e).map(c=>Bn(c,{worker$:n,keyboard$:r}))))}catch(n){return e.hidden=!0,Ve}}function Jn(e,{index$:t,location$:r}){return B([t,r.pipe(V(pe()),M(o=>!!o.searchParams.get("h")))]).pipe(m(([o,n])=>Vn(o.config)(n.searchParams.get("h"))),m(o=>{var s;let 
n=new Map,i=document.createNodeIterator(e,NodeFilter.SHOW_TEXT);for(let a=i.nextNode();a;a=i.nextNode())if((s=a.parentElement)!=null&&s.offsetHeight){let c=a.textContent,p=o(c);p.length>c.length&&n.set(a,p)}for(let[a,c]of n){let{childNodes:p}=T("span",null,c);a.replaceWith(...Array.from(p))}return{ref:e,nodes:n}}))}function Fa(e,{viewport$:t,main$:r}){let o=e.closest(".md-grid"),n=o.offsetTop-o.parentElement.offsetTop;return B([r,t]).pipe(m(([{offset:i,height:s},{offset:{y:a}}])=>(s=s+Math.min(n,Math.max(0,a-i))-n,{height:s,locked:a>=i+n})),X((i,s)=>i.height===s.height&&i.locked===s.locked))}function Kr(e,o){var n=o,{header$:t}=n,r=eo(n,["header$"]);let i=W(".md-sidebar__scrollwrap",e),{y:s}=Je(i);return H(()=>{let a=new x,c=a.pipe(Z(),re(!0)),p=a.pipe(Ce(0,Oe));return p.pipe(ne(t)).subscribe({next([{height:l},{height:f}]){i.style.height=`${l-2*s}px`,e.style.top=`${f}px`},complete(){i.style.height="",e.style.top=""}}),p.pipe($e()).subscribe(()=>{for(let l of q(".md-nav__link--active[href]",e)){if(!l.clientHeight)continue;let f=l.closest(".md-sidebar__scrollwrap");if(typeof f!="undefined"){let u=l.offsetTop-f.offsetTop,{height:d}=he(f);f.scrollTo({top:u-d/2})}}}),ge(q("label[tabindex]",e)).pipe(se(l=>h(l,"click").pipe(Se(ae),m(()=>l),Y(c)))).subscribe(l=>{let f=W(`[id="${l.htmlFor}"]`);W(`[aria-labelledby="${l.id}"]`).setAttribute("aria-expanded",`${f.checked}`)}),Fa(e,r).pipe(w(l=>a.next(l)),A(()=>a.complete()),m(l=>R({ref:e},l)))})}function Xn(e,t){if(typeof t!="undefined"){let r=`https://api.github.com/repos/${e}/${t}`;return St(Ue(`${r}/releases/latest`).pipe(de(()=>L),m(o=>({version:o.tag_name})),He({})),Ue(r).pipe(de(()=>L),m(o=>({stars:o.stargazers_count,forks:o.forks_count})),He({}))).pipe(m(([o,n])=>R(R({},o),n)))}else{let r=`https://api.github.com/users/${e}`;return Ue(r).pipe(m(o=>({repositories:o.public_repos})),He({}))}}function Zn(e,t){let r=`https://${e}/api/v4/projects/${encodeURIComponent(t)}`;return 
Ue(r).pipe(de(()=>L),m(({star_count:o,forks_count:n})=>({stars:o,forks:n})),He({}))}function ei(e){let t=e.match(/^.+github\.com\/([^/]+)\/?([^/]+)?/i);if(t){let[,r,o]=t;return Xn(r,o)}if(t=e.match(/^.+?([^/]*gitlab[^/]+)\/(.+?)\/?$/i),t){let[,r,o]=t;return Zn(r,o)}return L}var ja;function Wa(e){return ja||(ja=H(()=>{let t=__md_get("__source",sessionStorage);if(t)return j(t);if(oe("consent").length){let o=__md_get("__consent");if(!(o&&o.github))return L}return ei(e.href).pipe(w(o=>__md_set("__source",o,sessionStorage)))}).pipe(de(()=>L),M(t=>Object.keys(t).length>0),m(t=>({facts:t})),J(1)))}function ti(e){let t=W(":scope > :last-child",e);return H(()=>{let r=new x;return r.subscribe(({facts:o})=>{t.appendChild(un(o)),t.classList.add("md-source__repository--active")}),Wa(e).pipe(w(o=>r.next(o)),A(()=>r.complete()),m(o=>R({ref:e},o)))})}function Ua(e,{viewport$:t,header$:r}){return ye(document.body).pipe(E(()=>ar(e,{header$:r,viewport$:t})),m(({offset:{y:o}})=>({hidden:o>=10})),ee("hidden"))}function ri(e,t){return H(()=>{let r=new x;return r.subscribe({next({hidden:o}){e.hidden=o},complete(){e.hidden=!1}}),(te("navigation.tabs.sticky")?j({hidden:!1}):Ua(e,t)).pipe(w(o=>r.next(o)),A(()=>r.complete()),m(o=>R({ref:e},o)))})}function Na(e,{viewport$:t,header$:r}){let o=new Map,n=q("[href^=\\#]",e);for(let a of n){let c=decodeURIComponent(a.hash.substring(1)),p=ce(`[id="${c}"]`);typeof p!="undefined"&&o.set(a,p)}let i=r.pipe(ee("height"),m(({height:a})=>{let c=Ee("main"),p=W(":scope > :first-child",c);return a+.8*(p.offsetTop-c.offsetTop)}),le());return ye(document.body).pipe(ee("height"),E(a=>H(()=>{let c=[];return j([...o].reduce((p,[l,f])=>{for(;c.length&&o.get(c[c.length-1]).tagName>=f.tagName;)c.pop();let u=f.offsetTop;for(;!u&&f.parentElement;)f=f.parentElement,u=f.offsetTop;let d=f.offsetParent;for(;d;d=d.offsetParent)u+=d.offsetTop;return p.set([...c=[...c,l]].reverse(),u)},new Map))}).pipe(m(c=>new 
Map([...c].sort(([,p],[,l])=>p-l))),Ge(i),E(([c,p])=>t.pipe(kr(([l,f],{offset:{y:u},size:d})=>{let v=u+d.height>=Math.floor(a.height);for(;f.length;){let[,b]=f[0];if(b-p=u&&!v)f=[l.pop(),...f];else break}return[l,f]},[[],[...c]]),X((l,f)=>l[0]===f[0]&&l[1]===f[1])))))).pipe(m(([a,c])=>({prev:a.map(([p])=>p),next:c.map(([p])=>p)})),V({prev:[],next:[]}),Le(2,1),m(([a,c])=>a.prev.length{let i=new x,s=i.pipe(Z(),re(!0));if(i.subscribe(({prev:a,next:c})=>{for(let[p]of c)p.classList.remove("md-nav__link--passed"),p.classList.remove("md-nav__link--active");for(let[p,[l]]of a.entries())l.classList.add("md-nav__link--passed"),l.classList.toggle("md-nav__link--active",p===a.length-1)}),te("toc.follow")){let a=_(t.pipe(ke(1),m(()=>{})),t.pipe(ke(250),m(()=>"smooth")));i.pipe(M(({prev:c})=>c.length>0),Ge(o.pipe(Se(ae))),ne(a)).subscribe(([[{prev:c}],p])=>{let[l]=c[c.length-1];if(l.offsetHeight){let f=zo(l);if(typeof f!="undefined"){let u=l.offsetTop-f.offsetTop,{height:d}=he(f);f.scrollTo({top:u-d/2,behavior:p})}}})}return te("navigation.tracking")&&t.pipe(Y(s),ee("offset"),ke(250),je(1),Y(n.pipe(je(1))),Tt({delay:250}),ne(i)).subscribe(([,{prev:a}])=>{let c=pe(),p=a[a.length-1];if(p&&p.length){let[l]=p,{hash:f}=new URL(l.href);c.hash!==f&&(c.hash=f,history.replaceState({},"",`${c}`))}else c.hash="",history.replaceState({},"",`${c}`)}),Na(e,{viewport$:t,header$:r}).pipe(w(a=>i.next(a)),A(()=>i.complete()),m(a=>R({ref:e},a)))})}function Da(e,{viewport$:t,main$:r,target$:o}){let n=t.pipe(m(({offset:{y:s}})=>s),Le(2,1),m(([s,a])=>s>a&&a>0),X()),i=r.pipe(m(({active:s})=>s));return B([i,n]).pipe(m(([s,a])=>!(s&&a)),X(),Y(o.pipe(je(1))),re(!0),Tt({delay:250}),m(s=>({hidden:s})))}function ni(e,{viewport$:t,header$:r,main$:o,target$:n}){let i=new x,s=i.pipe(Z(),re(!0));return 
i.subscribe({next({hidden:a}){e.hidden=a,a?(e.setAttribute("tabindex","-1"),e.blur()):e.removeAttribute("tabindex")},complete(){e.style.top="",e.hidden=!0,e.removeAttribute("tabindex")}}),r.pipe(Y(s),ee("height")).subscribe(({height:a})=>{e.style.top=`${a+16}px`}),h(e,"click").subscribe(a=>{a.preventDefault(),window.scrollTo({top:0})}),Da(e,{viewport$:t,main$:o,target$:n}).pipe(w(a=>i.next(a)),A(()=>i.complete()),m(a=>R({ref:e},a)))}function ii({document$:e,tablet$:t}){e.pipe(E(()=>q(".md-toggle--indeterminate")),w(r=>{r.indeterminate=!0,r.checked=!1}),se(r=>h(r,"change").pipe(Rr(()=>r.classList.contains("md-toggle--indeterminate")),m(()=>r))),ne(t)).subscribe(([r,o])=>{r.classList.remove("md-toggle--indeterminate"),o&&(r.checked=!1)})}function Va(){return/(iPad|iPhone|iPod)/.test(navigator.userAgent)}function ai({document$:e}){e.pipe(E(()=>q("[data-md-scrollfix]")),w(t=>t.removeAttribute("data-md-scrollfix")),M(Va),se(t=>h(t,"touchstart").pipe(m(()=>t)))).subscribe(t=>{let r=t.scrollTop;r===0?t.scrollTop=1:r+t.offsetHeight===t.scrollHeight&&(t.scrollTop=r-1)})}function si({viewport$:e,tablet$:t}){B([We("search"),t]).pipe(m(([r,o])=>r&&!o),E(r=>j(r).pipe(ze(r?400:100))),ne(e)).subscribe(([r,{offset:{y:o}}])=>{if(r)document.body.setAttribute("data-md-scrolllock",""),document.body.style.top=`-${o}px`;else{let n=-1*parseInt(document.body.style.top,10);document.body.removeAttribute("data-md-scrolllock"),document.body.style.top="",n&&window.scrollTo(0,n)}})}Object.entries||(Object.entries=function(e){let t=[];for(let r of Object.keys(e))t.push([r,e[r]]);return t});Object.values||(Object.values=function(e){let t=[];for(let r of Object.keys(e))t.push(e[r]);return t});typeof Element!="undefined"&&(Element.prototype.scrollTo||(Element.prototype.scrollTo=function(e,t){typeof e=="object"?(this.scrollLeft=e.left,this.scrollTop=e.top):(this.scrollLeft=e,this.scrollTop=t)}),Element.prototype.replaceWith||(Element.prototype.replaceWith=function(...e){let 
t=this.parentNode;if(t){e.length===0&&t.removeChild(this);for(let r=e.length-1;r>=0;r--){let o=e[r];typeof o=="string"?o=document.createTextNode(o):o.parentNode&&o.parentNode.removeChild(o),r?t.insertBefore(this.previousSibling,o):t.replaceChild(o,this)}}}));function za(){return location.protocol==="file:"?ht(`${new URL("search/search_index.js",Qr.base)}`).pipe(m(()=>__index),J(1)):Ue(new URL("search/search_index.json",Qr.base))}document.documentElement.classList.remove("no-js");document.documentElement.classList.add("js");var nt=Uo(),_t=Bo(),gt=Jo(_t),Yr=Yo(),Te=nn(),lr=Fr("(min-width: 960px)"),pi=Fr("(min-width: 1220px)"),li=Xo(),Qr=me(),mi=document.forms.namedItem("search")?za():Ve,Br=new x;Fn({alert$:Br});var Gr=new x;te("navigation.instant")&&Wn({location$:_t,viewport$:Te,progress$:Gr}).subscribe(nt);var ci;((ci=Qr.version)==null?void 0:ci.provider)==="mike"&&qn({document$:nt});_(_t,gt).pipe(ze(125)).subscribe(()=>{Ke("drawer",!1),Ke("search",!1)});Yr.pipe(M(({mode:e})=>e==="global")).subscribe(e=>{switch(e.type){case"p":case",":let t=ce("link[rel=prev]");typeof t!="undefined"&&ot(t);break;case"n":case".":let r=ce("link[rel=next]");typeof r!="undefined"&&ot(r);break;case"Enter":let o=Re();o instanceof HTMLLabelElement&&o.click()}});ii({document$:nt,tablet$:lr});ai({document$:nt});si({viewport$:Te,tablet$:lr});var 
Xe=kn(Ee("header"),{viewport$:Te}),Lt=nt.pipe(m(()=>Ee("main")),E(e=>Rn(e,{viewport$:Te,header$:Xe})),J(1)),qa=_(...oe("consent").map(e=>cn(e,{target$:gt})),...oe("dialog").map(e=>Cn(e,{alert$:Br})),...oe("header").map(e=>Hn(e,{viewport$:Te,header$:Xe,main$:Lt})),...oe("palette").map(e=>Pn(e)),...oe("progress").map(e=>In(e,{progress$:Gr})),...oe("search").map(e=>Gn(e,{index$:mi,keyboard$:Yr})),...oe("source").map(e=>ti(e))),Ka=H(()=>_(...oe("announce").map(e=>sn(e)),...oe("content").map(e=>An(e,{viewport$:Te,target$:gt,print$:li})),...oe("content").map(e=>te("search.highlight")?Jn(e,{index$:mi,location$:_t}):L),...oe("header-title").map(e=>$n(e,{viewport$:Te,header$:Xe})),...oe("sidebar").map(e=>e.getAttribute("data-md-type")==="navigation"?jr(pi,()=>Kr(e,{viewport$:Te,header$:Xe,main$:Lt})):jr(lr,()=>Kr(e,{viewport$:Te,header$:Xe,main$:Lt}))),...oe("tabs").map(e=>ri(e,{viewport$:Te,header$:Xe})),...oe("toc").map(e=>oi(e,{viewport$:Te,header$:Xe,main$:Lt,target$:gt})),...oe("top").map(e=>ni(e,{viewport$:Te,header$:Xe,main$:Lt,target$:gt})))),fi=nt.pipe(E(()=>Ka),qe(qa),J(1));fi.subscribe();window.document$=nt;window.location$=_t;window.target$=gt;window.keyboard$=Yr;window.viewport$=Te;window.tablet$=lr;window.screen$=pi;window.print$=li;window.alert$=Br;window.progress$=Gr;window.component$=fi;})();
+//# sourceMappingURL=bundle.aecac24b.min.js.map
+
diff --git a/assets/javascripts/bundle.aecac24b.min.js.map b/assets/javascripts/bundle.aecac24b.min.js.map
new file mode 100644
index 000000000..b1534de53
--- /dev/null
+++ b/assets/javascripts/bundle.aecac24b.min.js.map
@@ -0,0 +1,7 @@
+{
+ "version": 3,
+ "sources": ["node_modules/focus-visible/dist/focus-visible.js", "node_modules/clipboard/dist/clipboard.js", "node_modules/escape-html/index.js", "src/templates/assets/javascripts/bundle.ts", "node_modules/rxjs/node_modules/tslib/tslib.es6.js", "node_modules/rxjs/src/internal/util/isFunction.ts", "node_modules/rxjs/src/internal/util/createErrorClass.ts", "node_modules/rxjs/src/internal/util/UnsubscriptionError.ts", "node_modules/rxjs/src/internal/util/arrRemove.ts", "node_modules/rxjs/src/internal/Subscription.ts", "node_modules/rxjs/src/internal/config.ts", "node_modules/rxjs/src/internal/scheduler/timeoutProvider.ts", "node_modules/rxjs/src/internal/util/reportUnhandledError.ts", "node_modules/rxjs/src/internal/util/noop.ts", "node_modules/rxjs/src/internal/NotificationFactories.ts", "node_modules/rxjs/src/internal/util/errorContext.ts", "node_modules/rxjs/src/internal/Subscriber.ts", "node_modules/rxjs/src/internal/symbol/observable.ts", "node_modules/rxjs/src/internal/util/identity.ts", "node_modules/rxjs/src/internal/util/pipe.ts", "node_modules/rxjs/src/internal/Observable.ts", "node_modules/rxjs/src/internal/util/lift.ts", "node_modules/rxjs/src/internal/operators/OperatorSubscriber.ts", "node_modules/rxjs/src/internal/scheduler/animationFrameProvider.ts", "node_modules/rxjs/src/internal/util/ObjectUnsubscribedError.ts", "node_modules/rxjs/src/internal/Subject.ts", "node_modules/rxjs/src/internal/scheduler/dateTimestampProvider.ts", "node_modules/rxjs/src/internal/ReplaySubject.ts", "node_modules/rxjs/src/internal/scheduler/Action.ts", "node_modules/rxjs/src/internal/scheduler/intervalProvider.ts", "node_modules/rxjs/src/internal/scheduler/AsyncAction.ts", "node_modules/rxjs/src/internal/Scheduler.ts", "node_modules/rxjs/src/internal/scheduler/AsyncScheduler.ts", "node_modules/rxjs/src/internal/scheduler/async.ts", "node_modules/rxjs/src/internal/scheduler/AnimationFrameAction.ts", "node_modules/rxjs/src/internal/scheduler/AnimationFrameScheduler.ts", 
"node_modules/rxjs/src/internal/scheduler/animationFrame.ts", "node_modules/rxjs/src/internal/observable/empty.ts", "node_modules/rxjs/src/internal/util/isScheduler.ts", "node_modules/rxjs/src/internal/util/args.ts", "node_modules/rxjs/src/internal/util/isArrayLike.ts", "node_modules/rxjs/src/internal/util/isPromise.ts", "node_modules/rxjs/src/internal/util/isInteropObservable.ts", "node_modules/rxjs/src/internal/util/isAsyncIterable.ts", "node_modules/rxjs/src/internal/util/throwUnobservableError.ts", "node_modules/rxjs/src/internal/symbol/iterator.ts", "node_modules/rxjs/src/internal/util/isIterable.ts", "node_modules/rxjs/src/internal/util/isReadableStreamLike.ts", "node_modules/rxjs/src/internal/observable/innerFrom.ts", "node_modules/rxjs/src/internal/util/executeSchedule.ts", "node_modules/rxjs/src/internal/operators/observeOn.ts", "node_modules/rxjs/src/internal/operators/subscribeOn.ts", "node_modules/rxjs/src/internal/scheduled/scheduleObservable.ts", "node_modules/rxjs/src/internal/scheduled/schedulePromise.ts", "node_modules/rxjs/src/internal/scheduled/scheduleArray.ts", "node_modules/rxjs/src/internal/scheduled/scheduleIterable.ts", "node_modules/rxjs/src/internal/scheduled/scheduleAsyncIterable.ts", "node_modules/rxjs/src/internal/scheduled/scheduleReadableStreamLike.ts", "node_modules/rxjs/src/internal/scheduled/scheduled.ts", "node_modules/rxjs/src/internal/observable/from.ts", "node_modules/rxjs/src/internal/observable/of.ts", "node_modules/rxjs/src/internal/observable/throwError.ts", "node_modules/rxjs/src/internal/util/EmptyError.ts", "node_modules/rxjs/src/internal/util/isDate.ts", "node_modules/rxjs/src/internal/operators/map.ts", "node_modules/rxjs/src/internal/util/mapOneOrManyArgs.ts", "node_modules/rxjs/src/internal/util/argsArgArrayOrObject.ts", "node_modules/rxjs/src/internal/util/createObject.ts", "node_modules/rxjs/src/internal/observable/combineLatest.ts", "node_modules/rxjs/src/internal/operators/mergeInternals.ts", 
"node_modules/rxjs/src/internal/operators/mergeMap.ts", "node_modules/rxjs/src/internal/operators/mergeAll.ts", "node_modules/rxjs/src/internal/operators/concatAll.ts", "node_modules/rxjs/src/internal/observable/concat.ts", "node_modules/rxjs/src/internal/observable/defer.ts", "node_modules/rxjs/src/internal/observable/fromEvent.ts", "node_modules/rxjs/src/internal/observable/fromEventPattern.ts", "node_modules/rxjs/src/internal/observable/timer.ts", "node_modules/rxjs/src/internal/observable/merge.ts", "node_modules/rxjs/src/internal/observable/never.ts", "node_modules/rxjs/src/internal/util/argsOrArgArray.ts", "node_modules/rxjs/src/internal/operators/filter.ts", "node_modules/rxjs/src/internal/observable/zip.ts", "node_modules/rxjs/src/internal/operators/audit.ts", "node_modules/rxjs/src/internal/operators/auditTime.ts", "node_modules/rxjs/src/internal/operators/bufferCount.ts", "node_modules/rxjs/src/internal/operators/catchError.ts", "node_modules/rxjs/src/internal/operators/scanInternals.ts", "node_modules/rxjs/src/internal/operators/combineLatest.ts", "node_modules/rxjs/src/internal/operators/combineLatestWith.ts", "node_modules/rxjs/src/internal/operators/debounceTime.ts", "node_modules/rxjs/src/internal/operators/defaultIfEmpty.ts", "node_modules/rxjs/src/internal/operators/take.ts", "node_modules/rxjs/src/internal/operators/ignoreElements.ts", "node_modules/rxjs/src/internal/operators/mapTo.ts", "node_modules/rxjs/src/internal/operators/delayWhen.ts", "node_modules/rxjs/src/internal/operators/delay.ts", "node_modules/rxjs/src/internal/operators/distinctUntilChanged.ts", "node_modules/rxjs/src/internal/operators/distinctUntilKeyChanged.ts", "node_modules/rxjs/src/internal/operators/throwIfEmpty.ts", "node_modules/rxjs/src/internal/operators/endWith.ts", "node_modules/rxjs/src/internal/operators/finalize.ts", "node_modules/rxjs/src/internal/operators/first.ts", "node_modules/rxjs/src/internal/operators/merge.ts", 
"node_modules/rxjs/src/internal/operators/mergeWith.ts", "node_modules/rxjs/src/internal/operators/repeat.ts", "node_modules/rxjs/src/internal/operators/sample.ts", "node_modules/rxjs/src/internal/operators/scan.ts", "node_modules/rxjs/src/internal/operators/share.ts", "node_modules/rxjs/src/internal/operators/shareReplay.ts", "node_modules/rxjs/src/internal/operators/skip.ts", "node_modules/rxjs/src/internal/operators/skipUntil.ts", "node_modules/rxjs/src/internal/operators/startWith.ts", "node_modules/rxjs/src/internal/operators/switchMap.ts", "node_modules/rxjs/src/internal/operators/takeUntil.ts", "node_modules/rxjs/src/internal/operators/takeWhile.ts", "node_modules/rxjs/src/internal/operators/tap.ts", "node_modules/rxjs/src/internal/operators/throttle.ts", "node_modules/rxjs/src/internal/operators/throttleTime.ts", "node_modules/rxjs/src/internal/operators/withLatestFrom.ts", "node_modules/rxjs/src/internal/operators/zip.ts", "node_modules/rxjs/src/internal/operators/zipWith.ts", "src/templates/assets/javascripts/browser/document/index.ts", "src/templates/assets/javascripts/browser/element/_/index.ts", "src/templates/assets/javascripts/browser/element/focus/index.ts", "src/templates/assets/javascripts/browser/element/offset/_/index.ts", "src/templates/assets/javascripts/browser/element/offset/content/index.ts", "src/templates/assets/javascripts/utilities/h/index.ts", "src/templates/assets/javascripts/utilities/round/index.ts", "src/templates/assets/javascripts/browser/script/index.ts", "src/templates/assets/javascripts/browser/element/size/_/index.ts", "src/templates/assets/javascripts/browser/element/size/content/index.ts", "src/templates/assets/javascripts/browser/element/visibility/index.ts", "src/templates/assets/javascripts/browser/toggle/index.ts", "src/templates/assets/javascripts/browser/keyboard/index.ts", "src/templates/assets/javascripts/browser/location/_/index.ts", "src/templates/assets/javascripts/browser/location/hash/index.ts", 
"src/templates/assets/javascripts/browser/media/index.ts", "src/templates/assets/javascripts/browser/request/index.ts", "src/templates/assets/javascripts/browser/viewport/offset/index.ts", "src/templates/assets/javascripts/browser/viewport/size/index.ts", "src/templates/assets/javascripts/browser/viewport/_/index.ts", "src/templates/assets/javascripts/browser/viewport/at/index.ts", "src/templates/assets/javascripts/browser/worker/index.ts", "src/templates/assets/javascripts/_/index.ts", "src/templates/assets/javascripts/components/_/index.ts", "src/templates/assets/javascripts/components/announce/index.ts", "src/templates/assets/javascripts/components/consent/index.ts", "src/templates/assets/javascripts/components/content/annotation/_/index.ts", "src/templates/assets/javascripts/templates/tooltip/index.tsx", "src/templates/assets/javascripts/templates/annotation/index.tsx", "src/templates/assets/javascripts/templates/clipboard/index.tsx", "src/templates/assets/javascripts/templates/search/index.tsx", "src/templates/assets/javascripts/templates/source/index.tsx", "src/templates/assets/javascripts/templates/tabbed/index.tsx", "src/templates/assets/javascripts/templates/table/index.tsx", "src/templates/assets/javascripts/templates/version/index.tsx", "src/templates/assets/javascripts/components/content/annotation/list/index.ts", "src/templates/assets/javascripts/components/content/annotation/block/index.ts", "src/templates/assets/javascripts/components/content/code/_/index.ts", "src/templates/assets/javascripts/components/content/details/index.ts", "src/templates/assets/javascripts/components/content/mermaid/index.css", "src/templates/assets/javascripts/components/content/mermaid/index.ts", "src/templates/assets/javascripts/components/content/table/index.ts", "src/templates/assets/javascripts/components/content/tabs/index.ts", "src/templates/assets/javascripts/components/content/_/index.ts", "src/templates/assets/javascripts/components/dialog/index.ts", 
"src/templates/assets/javascripts/components/header/_/index.ts", "src/templates/assets/javascripts/components/header/title/index.ts", "src/templates/assets/javascripts/components/main/index.ts", "src/templates/assets/javascripts/components/palette/index.ts", "src/templates/assets/javascripts/components/progress/index.ts", "src/templates/assets/javascripts/integrations/clipboard/index.ts", "src/templates/assets/javascripts/integrations/sitemap/index.ts", "src/templates/assets/javascripts/integrations/instant/index.ts", "src/templates/assets/javascripts/integrations/search/highlighter/index.ts", "src/templates/assets/javascripts/integrations/search/worker/message/index.ts", "src/templates/assets/javascripts/integrations/search/worker/_/index.ts", "src/templates/assets/javascripts/integrations/version/index.ts", "src/templates/assets/javascripts/components/search/query/index.ts", "src/templates/assets/javascripts/components/search/result/index.ts", "src/templates/assets/javascripts/components/search/share/index.ts", "src/templates/assets/javascripts/components/search/suggest/index.ts", "src/templates/assets/javascripts/components/search/_/index.ts", "src/templates/assets/javascripts/components/search/highlight/index.ts", "src/templates/assets/javascripts/components/sidebar/index.ts", "src/templates/assets/javascripts/components/source/facts/github/index.ts", "src/templates/assets/javascripts/components/source/facts/gitlab/index.ts", "src/templates/assets/javascripts/components/source/facts/_/index.ts", "src/templates/assets/javascripts/components/source/_/index.ts", "src/templates/assets/javascripts/components/tabs/index.ts", "src/templates/assets/javascripts/components/toc/index.ts", "src/templates/assets/javascripts/components/top/index.ts", "src/templates/assets/javascripts/patches/indeterminate/index.ts", "src/templates/assets/javascripts/patches/scrollfix/index.ts", "src/templates/assets/javascripts/patches/scrolllock/index.ts", 
"src/templates/assets/javascripts/polyfills/index.ts"],
+ "sourcesContent": ["(function (global, factory) {\n typeof exports === 'object' && typeof module !== 'undefined' ? factory() :\n typeof define === 'function' && define.amd ? define(factory) :\n (factory());\n}(this, (function () { 'use strict';\n\n /**\n * Applies the :focus-visible polyfill at the given scope.\n * A scope in this case is either the top-level Document or a Shadow Root.\n *\n * @param {(Document|ShadowRoot)} scope\n * @see https://github.com/WICG/focus-visible\n */\n function applyFocusVisiblePolyfill(scope) {\n var hadKeyboardEvent = true;\n var hadFocusVisibleRecently = false;\n var hadFocusVisibleRecentlyTimeout = null;\n\n var inputTypesAllowlist = {\n text: true,\n search: true,\n url: true,\n tel: true,\n email: true,\n password: true,\n number: true,\n date: true,\n month: true,\n week: true,\n time: true,\n datetime: true,\n 'datetime-local': true\n };\n\n /**\n * Helper function for legacy browsers and iframes which sometimes focus\n * elements like document, body, and non-interactive SVG.\n * @param {Element} el\n */\n function isValidFocusTarget(el) {\n if (\n el &&\n el !== document &&\n el.nodeName !== 'HTML' &&\n el.nodeName !== 'BODY' &&\n 'classList' in el &&\n 'contains' in el.classList\n ) {\n return true;\n }\n return false;\n }\n\n /**\n * Computes whether the given element should automatically trigger the\n * `focus-visible` class being added, i.e. 
whether it should always match\n * `:focus-visible` when focused.\n * @param {Element} el\n * @return {boolean}\n */\n function focusTriggersKeyboardModality(el) {\n var type = el.type;\n var tagName = el.tagName;\n\n if (tagName === 'INPUT' && inputTypesAllowlist[type] && !el.readOnly) {\n return true;\n }\n\n if (tagName === 'TEXTAREA' && !el.readOnly) {\n return true;\n }\n\n if (el.isContentEditable) {\n return true;\n }\n\n return false;\n }\n\n /**\n * Add the `focus-visible` class to the given element if it was not added by\n * the author.\n * @param {Element} el\n */\n function addFocusVisibleClass(el) {\n if (el.classList.contains('focus-visible')) {\n return;\n }\n el.classList.add('focus-visible');\n el.setAttribute('data-focus-visible-added', '');\n }\n\n /**\n * Remove the `focus-visible` class from the given element if it was not\n * originally added by the author.\n * @param {Element} el\n */\n function removeFocusVisibleClass(el) {\n if (!el.hasAttribute('data-focus-visible-added')) {\n return;\n }\n el.classList.remove('focus-visible');\n el.removeAttribute('data-focus-visible-added');\n }\n\n /**\n * If the most recent user interaction was via the keyboard;\n * and the key press did not include a meta, alt/option, or control key;\n * then the modality is keyboard. 
Otherwise, the modality is not keyboard.\n * Apply `focus-visible` to any current active element and keep track\n * of our keyboard modality state with `hadKeyboardEvent`.\n * @param {KeyboardEvent} e\n */\n function onKeyDown(e) {\n if (e.metaKey || e.altKey || e.ctrlKey) {\n return;\n }\n\n if (isValidFocusTarget(scope.activeElement)) {\n addFocusVisibleClass(scope.activeElement);\n }\n\n hadKeyboardEvent = true;\n }\n\n /**\n * If at any point a user clicks with a pointing device, ensure that we change\n * the modality away from keyboard.\n * This avoids the situation where a user presses a key on an already focused\n * element, and then clicks on a different element, focusing it with a\n * pointing device, while we still think we're in keyboard modality.\n * @param {Event} e\n */\n function onPointerDown(e) {\n hadKeyboardEvent = false;\n }\n\n /**\n * On `focus`, add the `focus-visible` class to the target if:\n * - the target received focus as a result of keyboard navigation, or\n * - the event target is an element that will likely require interaction\n * via the keyboard (e.g. 
a text box)\n * @param {Event} e\n */\n function onFocus(e) {\n // Prevent IE from focusing the document or HTML element.\n if (!isValidFocusTarget(e.target)) {\n return;\n }\n\n if (hadKeyboardEvent || focusTriggersKeyboardModality(e.target)) {\n addFocusVisibleClass(e.target);\n }\n }\n\n /**\n * On `blur`, remove the `focus-visible` class from the target.\n * @param {Event} e\n */\n function onBlur(e) {\n if (!isValidFocusTarget(e.target)) {\n return;\n }\n\n if (\n e.target.classList.contains('focus-visible') ||\n e.target.hasAttribute('data-focus-visible-added')\n ) {\n // To detect a tab/window switch, we look for a blur event followed\n // rapidly by a visibility change.\n // If we don't see a visibility change within 100ms, it's probably a\n // regular focus change.\n hadFocusVisibleRecently = true;\n window.clearTimeout(hadFocusVisibleRecentlyTimeout);\n hadFocusVisibleRecentlyTimeout = window.setTimeout(function() {\n hadFocusVisibleRecently = false;\n }, 100);\n removeFocusVisibleClass(e.target);\n }\n }\n\n /**\n * If the user changes tabs, keep track of whether or not the previously\n * focused element had .focus-visible.\n * @param {Event} e\n */\n function onVisibilityChange(e) {\n if (document.visibilityState === 'hidden') {\n // If the tab becomes active again, the browser will handle calling focus\n // on the element (Safari actually calls it twice).\n // If this tab change caused a blur on an element with focus-visible,\n // re-apply the class when the user switches back to the tab.\n if (hadFocusVisibleRecently) {\n hadKeyboardEvent = true;\n }\n addInitialPointerMoveListeners();\n }\n }\n\n /**\n * Add a group of listeners to detect usage of any pointing devices.\n * These listeners will be added when the polyfill first loads, and anytime\n * the window is blurred, so that they are active when the window regains\n * focus.\n */\n function addInitialPointerMoveListeners() {\n document.addEventListener('mousemove', onInitialPointerMove);\n 
document.addEventListener('mousedown', onInitialPointerMove);\n document.addEventListener('mouseup', onInitialPointerMove);\n document.addEventListener('pointermove', onInitialPointerMove);\n document.addEventListener('pointerdown', onInitialPointerMove);\n document.addEventListener('pointerup', onInitialPointerMove);\n document.addEventListener('touchmove', onInitialPointerMove);\n document.addEventListener('touchstart', onInitialPointerMove);\n document.addEventListener('touchend', onInitialPointerMove);\n }\n\n function removeInitialPointerMoveListeners() {\n document.removeEventListener('mousemove', onInitialPointerMove);\n document.removeEventListener('mousedown', onInitialPointerMove);\n document.removeEventListener('mouseup', onInitialPointerMove);\n document.removeEventListener('pointermove', onInitialPointerMove);\n document.removeEventListener('pointerdown', onInitialPointerMove);\n document.removeEventListener('pointerup', onInitialPointerMove);\n document.removeEventListener('touchmove', onInitialPointerMove);\n document.removeEventListener('touchstart', onInitialPointerMove);\n document.removeEventListener('touchend', onInitialPointerMove);\n }\n\n /**\n * When the polfyill first loads, assume the user is in keyboard modality.\n * If any event is received from a pointing device (e.g. mouse, pointer,\n * touch), turn off keyboard modality.\n * This accounts for situations where focus enters the page from the URL bar.\n * @param {Event} e\n */\n function onInitialPointerMove(e) {\n // Work around a Safari quirk that fires a mousemove on whenever the\n // window blurs, even if you're tabbing out of the page. \u00AF\\_(\u30C4)_/\u00AF\n if (e.target.nodeName && e.target.nodeName.toLowerCase() === 'html') {\n return;\n }\n\n hadKeyboardEvent = false;\n removeInitialPointerMoveListeners();\n }\n\n // For some kinds of state, we are interested in changes at the global scope\n // only. 
For example, global pointer input, global key presses and global\n // visibility change should affect the state at every scope:\n document.addEventListener('keydown', onKeyDown, true);\n document.addEventListener('mousedown', onPointerDown, true);\n document.addEventListener('pointerdown', onPointerDown, true);\n document.addEventListener('touchstart', onPointerDown, true);\n document.addEventListener('visibilitychange', onVisibilityChange, true);\n\n addInitialPointerMoveListeners();\n\n // For focus and blur, we specifically care about state changes in the local\n // scope. This is because focus / blur events that originate from within a\n // shadow root are not re-dispatched from the host element if it was already\n // the active element in its own scope:\n scope.addEventListener('focus', onFocus, true);\n scope.addEventListener('blur', onBlur, true);\n\n // We detect that a node is a ShadowRoot by ensuring that it is a\n // DocumentFragment and also has a host property. This check covers native\n // implementation and polyfill implementation transparently. If we only cared\n // about the native implementation, we could just check if the scope was\n // an instance of a ShadowRoot.\n if (scope.nodeType === Node.DOCUMENT_FRAGMENT_NODE && scope.host) {\n // Since a ShadowRoot is a special kind of DocumentFragment, it does not\n // have a root element to add a class to. 
So, we add this attribute to the\n // host element instead:\n scope.host.setAttribute('data-js-focus-visible', '');\n } else if (scope.nodeType === Node.DOCUMENT_NODE) {\n document.documentElement.classList.add('js-focus-visible');\n document.documentElement.setAttribute('data-js-focus-visible', '');\n }\n }\n\n // It is important to wrap all references to global window and document in\n // these checks to support server-side rendering use cases\n // @see https://github.com/WICG/focus-visible/issues/199\n if (typeof window !== 'undefined' && typeof document !== 'undefined') {\n // Make the polyfill helper globally available. This can be used as a signal\n // to interested libraries that wish to coordinate with the polyfill for e.g.,\n // applying the polyfill to a shadow root:\n window.applyFocusVisiblePolyfill = applyFocusVisiblePolyfill;\n\n // Notify interested libraries of the polyfill's presence, in case the\n // polyfill was loaded lazily:\n var event;\n\n try {\n event = new CustomEvent('focus-visible-polyfill-ready');\n } catch (error) {\n // IE11 does not support using CustomEvent as a constructor directly:\n event = document.createEvent('CustomEvent');\n event.initCustomEvent('focus-visible-polyfill-ready', false, false, {});\n }\n\n window.dispatchEvent(event);\n }\n\n if (typeof document !== 'undefined') {\n // Apply the polyfill to the global document, so that no JavaScript\n // coordination is required to use the polyfill in the top-level document:\n applyFocusVisiblePolyfill(document);\n }\n\n})));\n", "/*!\n * clipboard.js v2.0.11\n * https://clipboardjs.com/\n *\n * Licensed MIT \u00A9 Zeno Rocha\n */\n(function webpackUniversalModuleDefinition(root, factory) {\n\tif(typeof exports === 'object' && typeof module === 'object')\n\t\tmodule.exports = factory();\n\telse if(typeof define === 'function' && define.amd)\n\t\tdefine([], factory);\n\telse if(typeof exports === 'object')\n\t\texports[\"ClipboardJS\"] = 
factory();\n\telse\n\t\troot[\"ClipboardJS\"] = factory();\n})(this, function() {\nreturn /******/ (function() { // webpackBootstrap\n/******/ \tvar __webpack_modules__ = ({\n\n/***/ 686:\n/***/ (function(__unused_webpack_module, __webpack_exports__, __webpack_require__) {\n\n\"use strict\";\n\n// EXPORTS\n__webpack_require__.d(__webpack_exports__, {\n \"default\": function() { return /* binding */ clipboard; }\n});\n\n// EXTERNAL MODULE: ./node_modules/tiny-emitter/index.js\nvar tiny_emitter = __webpack_require__(279);\nvar tiny_emitter_default = /*#__PURE__*/__webpack_require__.n(tiny_emitter);\n// EXTERNAL MODULE: ./node_modules/good-listener/src/listen.js\nvar listen = __webpack_require__(370);\nvar listen_default = /*#__PURE__*/__webpack_require__.n(listen);\n// EXTERNAL MODULE: ./node_modules/select/src/select.js\nvar src_select = __webpack_require__(817);\nvar select_default = /*#__PURE__*/__webpack_require__.n(src_select);\n;// CONCATENATED MODULE: ./src/common/command.js\n/**\n * Executes a given operation type.\n * @param {String} type\n * @return {Boolean}\n */\nfunction command(type) {\n try {\n return document.execCommand(type);\n } catch (err) {\n return false;\n }\n}\n;// CONCATENATED MODULE: ./src/actions/cut.js\n\n\n/**\n * Cut action wrapper.\n * @param {String|HTMLElement} target\n * @return {String}\n */\n\nvar ClipboardActionCut = function ClipboardActionCut(target) {\n var selectedText = select_default()(target);\n command('cut');\n return selectedText;\n};\n\n/* harmony default export */ var actions_cut = (ClipboardActionCut);\n;// CONCATENATED MODULE: ./src/common/create-fake-element.js\n/**\n * Creates a fake textarea element with a value.\n * @param {String} value\n * @return {HTMLElement}\n */\nfunction createFakeElement(value) {\n var isRTL = document.documentElement.getAttribute('dir') === 'rtl';\n var fakeElement = document.createElement('textarea'); // Prevent zooming on iOS\n\n fakeElement.style.fontSize = '12pt'; // Reset box 
model\n\n fakeElement.style.border = '0';\n fakeElement.style.padding = '0';\n fakeElement.style.margin = '0'; // Move element out of screen horizontally\n\n fakeElement.style.position = 'absolute';\n fakeElement.style[isRTL ? 'right' : 'left'] = '-9999px'; // Move element to the same position vertically\n\n var yPosition = window.pageYOffset || document.documentElement.scrollTop;\n fakeElement.style.top = \"\".concat(yPosition, \"px\");\n fakeElement.setAttribute('readonly', '');\n fakeElement.value = value;\n return fakeElement;\n}\n;// CONCATENATED MODULE: ./src/actions/copy.js\n\n\n\n/**\n * Create fake copy action wrapper using a fake element.\n * @param {String} target\n * @param {Object} options\n * @return {String}\n */\n\nvar fakeCopyAction = function fakeCopyAction(value, options) {\n var fakeElement = createFakeElement(value);\n options.container.appendChild(fakeElement);\n var selectedText = select_default()(fakeElement);\n command('copy');\n fakeElement.remove();\n return selectedText;\n};\n/**\n * Copy action wrapper.\n * @param {String|HTMLElement} target\n * @param {Object} options\n * @return {String}\n */\n\n\nvar ClipboardActionCopy = function ClipboardActionCopy(target) {\n var options = arguments.length > 1 && arguments[1] !== undefined ? arguments[1] : {\n container: document.body\n };\n var selectedText = '';\n\n if (typeof target === 'string') {\n selectedText = fakeCopyAction(target, options);\n } else if (target instanceof HTMLInputElement && !['text', 'search', 'url', 'tel', 'password'].includes(target === null || target === void 0 ? void 0 : target.type)) {\n // If input type doesn't support `setSelectionRange`. Simulate it. 
https://developer.mozilla.org/en-US/docs/Web/API/HTMLInputElement/setSelectionRange\n selectedText = fakeCopyAction(target.value, options);\n } else {\n selectedText = select_default()(target);\n command('copy');\n }\n\n return selectedText;\n};\n\n/* harmony default export */ var actions_copy = (ClipboardActionCopy);\n;// CONCATENATED MODULE: ./src/actions/default.js\nfunction _typeof(obj) { \"@babel/helpers - typeof\"; if (typeof Symbol === \"function\" && typeof Symbol.iterator === \"symbol\") { _typeof = function _typeof(obj) { return typeof obj; }; } else { _typeof = function _typeof(obj) { return obj && typeof Symbol === \"function\" && obj.constructor === Symbol && obj !== Symbol.prototype ? \"symbol\" : typeof obj; }; } return _typeof(obj); }\n\n\n\n/**\n * Inner function which performs selection from either `text` or `target`\n * properties and then executes copy or cut operations.\n * @param {Object} options\n */\n\nvar ClipboardActionDefault = function ClipboardActionDefault() {\n var options = arguments.length > 0 && arguments[0] !== undefined ? arguments[0] : {};\n // Defines base properties passed from constructor.\n var _options$action = options.action,\n action = _options$action === void 0 ? 'copy' : _options$action,\n container = options.container,\n target = options.target,\n text = options.text; // Sets the `action` to be performed which can be either 'copy' or 'cut'.\n\n if (action !== 'copy' && action !== 'cut') {\n throw new Error('Invalid \"action\" value, use either \"copy\" or \"cut\"');\n } // Sets the `target` property using an element that will be have its content copied.\n\n\n if (target !== undefined) {\n if (target && _typeof(target) === 'object' && target.nodeType === 1) {\n if (action === 'copy' && target.hasAttribute('disabled')) {\n throw new Error('Invalid \"target\" attribute. 
Please use \"readonly\" instead of \"disabled\" attribute');\n }\n\n if (action === 'cut' && (target.hasAttribute('readonly') || target.hasAttribute('disabled'))) {\n throw new Error('Invalid \"target\" attribute. You can\\'t cut text from elements with \"readonly\" or \"disabled\" attributes');\n }\n } else {\n throw new Error('Invalid \"target\" value, use a valid Element');\n }\n } // Define selection strategy based on `text` property.\n\n\n if (text) {\n return actions_copy(text, {\n container: container\n });\n } // Defines which selection strategy based on `target` property.\n\n\n if (target) {\n return action === 'cut' ? actions_cut(target) : actions_copy(target, {\n container: container\n });\n }\n};\n\n/* harmony default export */ var actions_default = (ClipboardActionDefault);\n;// CONCATENATED MODULE: ./src/clipboard.js\nfunction clipboard_typeof(obj) { \"@babel/helpers - typeof\"; if (typeof Symbol === \"function\" && typeof Symbol.iterator === \"symbol\") { clipboard_typeof = function _typeof(obj) { return typeof obj; }; } else { clipboard_typeof = function _typeof(obj) { return obj && typeof Symbol === \"function\" && obj.constructor === Symbol && obj !== Symbol.prototype ? 
\"symbol\" : typeof obj; }; } return clipboard_typeof(obj); }\n\nfunction _classCallCheck(instance, Constructor) { if (!(instance instanceof Constructor)) { throw new TypeError(\"Cannot call a class as a function\"); } }\n\nfunction _defineProperties(target, props) { for (var i = 0; i < props.length; i++) { var descriptor = props[i]; descriptor.enumerable = descriptor.enumerable || false; descriptor.configurable = true; if (\"value\" in descriptor) descriptor.writable = true; Object.defineProperty(target, descriptor.key, descriptor); } }\n\nfunction _createClass(Constructor, protoProps, staticProps) { if (protoProps) _defineProperties(Constructor.prototype, protoProps); if (staticProps) _defineProperties(Constructor, staticProps); return Constructor; }\n\nfunction _inherits(subClass, superClass) { if (typeof superClass !== \"function\" && superClass !== null) { throw new TypeError(\"Super expression must either be null or a function\"); } subClass.prototype = Object.create(superClass && superClass.prototype, { constructor: { value: subClass, writable: true, configurable: true } }); if (superClass) _setPrototypeOf(subClass, superClass); }\n\nfunction _setPrototypeOf(o, p) { _setPrototypeOf = Object.setPrototypeOf || function _setPrototypeOf(o, p) { o.__proto__ = p; return o; }; return _setPrototypeOf(o, p); }\n\nfunction _createSuper(Derived) { var hasNativeReflectConstruct = _isNativeReflectConstruct(); return function _createSuperInternal() { var Super = _getPrototypeOf(Derived), result; if (hasNativeReflectConstruct) { var NewTarget = _getPrototypeOf(this).constructor; result = Reflect.construct(Super, arguments, NewTarget); } else { result = Super.apply(this, arguments); } return _possibleConstructorReturn(this, result); }; }\n\nfunction _possibleConstructorReturn(self, call) { if (call && (clipboard_typeof(call) === \"object\" || typeof call === \"function\")) { return call; } return _assertThisInitialized(self); }\n\nfunction _assertThisInitialized(self) { if 
(self === void 0) { throw new ReferenceError(\"this hasn't been initialised - super() hasn't been called\"); } return self; }\n\nfunction _isNativeReflectConstruct() { if (typeof Reflect === \"undefined\" || !Reflect.construct) return false; if (Reflect.construct.sham) return false; if (typeof Proxy === \"function\") return true; try { Date.prototype.toString.call(Reflect.construct(Date, [], function () {})); return true; } catch (e) { return false; } }\n\nfunction _getPrototypeOf(o) { _getPrototypeOf = Object.setPrototypeOf ? Object.getPrototypeOf : function _getPrototypeOf(o) { return o.__proto__ || Object.getPrototypeOf(o); }; return _getPrototypeOf(o); }\n\n\n\n\n\n\n/**\n * Helper function to retrieve attribute value.\n * @param {String} suffix\n * @param {Element} element\n */\n\nfunction getAttributeValue(suffix, element) {\n var attribute = \"data-clipboard-\".concat(suffix);\n\n if (!element.hasAttribute(attribute)) {\n return;\n }\n\n return element.getAttribute(attribute);\n}\n/**\n * Base class which takes one or more elements, adds event listeners to them,\n * and instantiates a new `ClipboardAction` on each click.\n */\n\n\nvar Clipboard = /*#__PURE__*/function (_Emitter) {\n _inherits(Clipboard, _Emitter);\n\n var _super = _createSuper(Clipboard);\n\n /**\n * @param {String|HTMLElement|HTMLCollection|NodeList} trigger\n * @param {Object} options\n */\n function Clipboard(trigger, options) {\n var _this;\n\n _classCallCheck(this, Clipboard);\n\n _this = _super.call(this);\n\n _this.resolveOptions(options);\n\n _this.listenClick(trigger);\n\n return _this;\n }\n /**\n * Defines if attributes would be resolved using internal setter functions\n * or custom functions that were passed in the constructor.\n * @param {Object} options\n */\n\n\n _createClass(Clipboard, [{\n key: \"resolveOptions\",\n value: function resolveOptions() {\n var options = arguments.length > 0 && arguments[0] !== undefined ? 
arguments[0] : {};\n this.action = typeof options.action === 'function' ? options.action : this.defaultAction;\n this.target = typeof options.target === 'function' ? options.target : this.defaultTarget;\n this.text = typeof options.text === 'function' ? options.text : this.defaultText;\n this.container = clipboard_typeof(options.container) === 'object' ? options.container : document.body;\n }\n /**\n * Adds a click event listener to the passed trigger.\n * @param {String|HTMLElement|HTMLCollection|NodeList} trigger\n */\n\n }, {\n key: \"listenClick\",\n value: function listenClick(trigger) {\n var _this2 = this;\n\n this.listener = listen_default()(trigger, 'click', function (e) {\n return _this2.onClick(e);\n });\n }\n /**\n * Defines a new `ClipboardAction` on each click event.\n * @param {Event} e\n */\n\n }, {\n key: \"onClick\",\n value: function onClick(e) {\n var trigger = e.delegateTarget || e.currentTarget;\n var action = this.action(trigger) || 'copy';\n var text = actions_default({\n action: action,\n container: this.container,\n target: this.target(trigger),\n text: this.text(trigger)\n }); // Fires an event based on the copy operation result.\n\n this.emit(text ? 
'success' : 'error', {\n action: action,\n text: text,\n trigger: trigger,\n clearSelection: function clearSelection() {\n if (trigger) {\n trigger.focus();\n }\n\n window.getSelection().removeAllRanges();\n }\n });\n }\n /**\n * Default `action` lookup function.\n * @param {Element} trigger\n */\n\n }, {\n key: \"defaultAction\",\n value: function defaultAction(trigger) {\n return getAttributeValue('action', trigger);\n }\n /**\n * Default `target` lookup function.\n * @param {Element} trigger\n */\n\n }, {\n key: \"defaultTarget\",\n value: function defaultTarget(trigger) {\n var selector = getAttributeValue('target', trigger);\n\n if (selector) {\n return document.querySelector(selector);\n }\n }\n /**\n * Allow fire programmatically a copy action\n * @param {String|HTMLElement} target\n * @param {Object} options\n * @returns Text copied.\n */\n\n }, {\n key: \"defaultText\",\n\n /**\n * Default `text` lookup function.\n * @param {Element} trigger\n */\n value: function defaultText(trigger) {\n return getAttributeValue('text', trigger);\n }\n /**\n * Destroy lifecycle.\n */\n\n }, {\n key: \"destroy\",\n value: function destroy() {\n this.listener.destroy();\n }\n }], [{\n key: \"copy\",\n value: function copy(target) {\n var options = arguments.length > 1 && arguments[1] !== undefined ? arguments[1] : {\n container: document.body\n };\n return actions_copy(target, options);\n }\n /**\n * Allow fire programmatically a cut action\n * @param {String|HTMLElement} target\n * @returns Text cutted.\n */\n\n }, {\n key: \"cut\",\n value: function cut(target) {\n return actions_cut(target);\n }\n /**\n * Returns the support of the given action, or all actions if no action is\n * given.\n * @param {String} [action]\n */\n\n }, {\n key: \"isSupported\",\n value: function isSupported() {\n var action = arguments.length > 0 && arguments[0] !== undefined ? arguments[0] : ['copy', 'cut'];\n var actions = typeof action === 'string' ? 
[action] : action;\n var support = !!document.queryCommandSupported;\n actions.forEach(function (action) {\n support = support && !!document.queryCommandSupported(action);\n });\n return support;\n }\n }]);\n\n return Clipboard;\n}((tiny_emitter_default()));\n\n/* harmony default export */ var clipboard = (Clipboard);\n\n/***/ }),\n\n/***/ 828:\n/***/ (function(module) {\n\nvar DOCUMENT_NODE_TYPE = 9;\n\n/**\n * A polyfill for Element.matches()\n */\nif (typeof Element !== 'undefined' && !Element.prototype.matches) {\n var proto = Element.prototype;\n\n proto.matches = proto.matchesSelector ||\n proto.mozMatchesSelector ||\n proto.msMatchesSelector ||\n proto.oMatchesSelector ||\n proto.webkitMatchesSelector;\n}\n\n/**\n * Finds the closest parent that matches a selector.\n *\n * @param {Element} element\n * @param {String} selector\n * @return {Function}\n */\nfunction closest (element, selector) {\n while (element && element.nodeType !== DOCUMENT_NODE_TYPE) {\n if (typeof element.matches === 'function' &&\n element.matches(selector)) {\n return element;\n }\n element = element.parentNode;\n }\n}\n\nmodule.exports = closest;\n\n\n/***/ }),\n\n/***/ 438:\n/***/ (function(module, __unused_webpack_exports, __webpack_require__) {\n\nvar closest = __webpack_require__(828);\n\n/**\n * Delegates event to a selector.\n *\n * @param {Element} element\n * @param {String} selector\n * @param {String} type\n * @param {Function} callback\n * @param {Boolean} useCapture\n * @return {Object}\n */\nfunction _delegate(element, selector, type, callback, useCapture) {\n var listenerFn = listener.apply(this, arguments);\n\n element.addEventListener(type, listenerFn, useCapture);\n\n return {\n destroy: function() {\n element.removeEventListener(type, listenerFn, useCapture);\n }\n }\n}\n\n/**\n * Delegates event to a selector.\n *\n * @param {Element|String|Array} [elements]\n * @param {String} selector\n * @param {String} type\n * @param {Function} callback\n * @param {Boolean} 
useCapture\n * @return {Object}\n */\nfunction delegate(elements, selector, type, callback, useCapture) {\n // Handle the regular Element usage\n if (typeof elements.addEventListener === 'function') {\n return _delegate.apply(null, arguments);\n }\n\n // Handle Element-less usage, it defaults to global delegation\n if (typeof type === 'function') {\n // Use `document` as the first parameter, then apply arguments\n // This is a short way to .unshift `arguments` without running into deoptimizations\n return _delegate.bind(null, document).apply(null, arguments);\n }\n\n // Handle Selector-based usage\n if (typeof elements === 'string') {\n elements = document.querySelectorAll(elements);\n }\n\n // Handle Array-like based usage\n return Array.prototype.map.call(elements, function (element) {\n return _delegate(element, selector, type, callback, useCapture);\n });\n}\n\n/**\n * Finds closest match and invokes callback.\n *\n * @param {Element} element\n * @param {String} selector\n * @param {String} type\n * @param {Function} callback\n * @return {Function}\n */\nfunction listener(element, selector, type, callback) {\n return function(e) {\n e.delegateTarget = closest(e.target, selector);\n\n if (e.delegateTarget) {\n callback.call(element, e);\n }\n }\n}\n\nmodule.exports = delegate;\n\n\n/***/ }),\n\n/***/ 879:\n/***/ (function(__unused_webpack_module, exports) {\n\n/**\n * Check if argument is a HTML element.\n *\n * @param {Object} value\n * @return {Boolean}\n */\nexports.node = function(value) {\n return value !== undefined\n && value instanceof HTMLElement\n && value.nodeType === 1;\n};\n\n/**\n * Check if argument is a list of HTML elements.\n *\n * @param {Object} value\n * @return {Boolean}\n */\nexports.nodeList = function(value) {\n var type = Object.prototype.toString.call(value);\n\n return value !== undefined\n && (type === '[object NodeList]' || type === '[object HTMLCollection]')\n && ('length' in value)\n && (value.length === 0 || 
exports.node(value[0]));\n};\n\n/**\n * Check if argument is a string.\n *\n * @param {Object} value\n * @return {Boolean}\n */\nexports.string = function(value) {\n return typeof value === 'string'\n || value instanceof String;\n};\n\n/**\n * Check if argument is a function.\n *\n * @param {Object} value\n * @return {Boolean}\n */\nexports.fn = function(value) {\n var type = Object.prototype.toString.call(value);\n\n return type === '[object Function]';\n};\n\n\n/***/ }),\n\n/***/ 370:\n/***/ (function(module, __unused_webpack_exports, __webpack_require__) {\n\nvar is = __webpack_require__(879);\nvar delegate = __webpack_require__(438);\n\n/**\n * Validates all params and calls the right\n * listener function based on its target type.\n *\n * @param {String|HTMLElement|HTMLCollection|NodeList} target\n * @param {String} type\n * @param {Function} callback\n * @return {Object}\n */\nfunction listen(target, type, callback) {\n if (!target && !type && !callback) {\n throw new Error('Missing required arguments');\n }\n\n if (!is.string(type)) {\n throw new TypeError('Second argument must be a String');\n }\n\n if (!is.fn(callback)) {\n throw new TypeError('Third argument must be a Function');\n }\n\n if (is.node(target)) {\n return listenNode(target, type, callback);\n }\n else if (is.nodeList(target)) {\n return listenNodeList(target, type, callback);\n }\n else if (is.string(target)) {\n return listenSelector(target, type, callback);\n }\n else {\n throw new TypeError('First argument must be a String, HTMLElement, HTMLCollection, or NodeList');\n }\n}\n\n/**\n * Adds an event listener to a HTML element\n * and returns a remove listener function.\n *\n * @param {HTMLElement} node\n * @param {String} type\n * @param {Function} callback\n * @return {Object}\n */\nfunction listenNode(node, type, callback) {\n node.addEventListener(type, callback);\n\n return {\n destroy: function() {\n node.removeEventListener(type, callback);\n }\n }\n}\n\n/**\n * Add an event listener 
to a list of HTML elements\n * and returns a remove listener function.\n *\n * @param {NodeList|HTMLCollection} nodeList\n * @param {String} type\n * @param {Function} callback\n * @return {Object}\n */\nfunction listenNodeList(nodeList, type, callback) {\n Array.prototype.forEach.call(nodeList, function(node) {\n node.addEventListener(type, callback);\n });\n\n return {\n destroy: function() {\n Array.prototype.forEach.call(nodeList, function(node) {\n node.removeEventListener(type, callback);\n });\n }\n }\n}\n\n/**\n * Add an event listener to a selector\n * and returns a remove listener function.\n *\n * @param {String} selector\n * @param {String} type\n * @param {Function} callback\n * @return {Object}\n */\nfunction listenSelector(selector, type, callback) {\n return delegate(document.body, selector, type, callback);\n}\n\nmodule.exports = listen;\n\n\n/***/ }),\n\n/***/ 817:\n/***/ (function(module) {\n\nfunction select(element) {\n var selectedText;\n\n if (element.nodeName === 'SELECT') {\n element.focus();\n\n selectedText = element.value;\n }\n else if (element.nodeName === 'INPUT' || element.nodeName === 'TEXTAREA') {\n var isReadOnly = element.hasAttribute('readonly');\n\n if (!isReadOnly) {\n element.setAttribute('readonly', '');\n }\n\n element.select();\n element.setSelectionRange(0, element.value.length);\n\n if (!isReadOnly) {\n element.removeAttribute('readonly');\n }\n\n selectedText = element.value;\n }\n else {\n if (element.hasAttribute('contenteditable')) {\n element.focus();\n }\n\n var selection = window.getSelection();\n var range = document.createRange();\n\n range.selectNodeContents(element);\n selection.removeAllRanges();\n selection.addRange(range);\n\n selectedText = selection.toString();\n }\n\n return selectedText;\n}\n\nmodule.exports = select;\n\n\n/***/ }),\n\n/***/ 279:\n/***/ (function(module) {\n\nfunction E () {\n // Keep this empty so it's easier to inherit from\n // (via https://github.com/lipsmack from 
https://github.com/scottcorgan/tiny-emitter/issues/3)\n}\n\nE.prototype = {\n on: function (name, callback, ctx) {\n var e = this.e || (this.e = {});\n\n (e[name] || (e[name] = [])).push({\n fn: callback,\n ctx: ctx\n });\n\n return this;\n },\n\n once: function (name, callback, ctx) {\n var self = this;\n function listener () {\n self.off(name, listener);\n callback.apply(ctx, arguments);\n };\n\n listener._ = callback\n return this.on(name, listener, ctx);\n },\n\n emit: function (name) {\n var data = [].slice.call(arguments, 1);\n var evtArr = ((this.e || (this.e = {}))[name] || []).slice();\n var i = 0;\n var len = evtArr.length;\n\n for (i; i < len; i++) {\n evtArr[i].fn.apply(evtArr[i].ctx, data);\n }\n\n return this;\n },\n\n off: function (name, callback) {\n var e = this.e || (this.e = {});\n var evts = e[name];\n var liveEvents = [];\n\n if (evts && callback) {\n for (var i = 0, len = evts.length; i < len; i++) {\n if (evts[i].fn !== callback && evts[i].fn._ !== callback)\n liveEvents.push(evts[i]);\n }\n }\n\n // Remove event from queue to prevent memory leak\n // Suggested by https://github.com/lazd\n // Ref: https://github.com/scottcorgan/tiny-emitter/commit/c6ebfaa9bc973b33d110a84a307742b7cf94c953#commitcomment-5024910\n\n (liveEvents.length)\n ? 
e[name] = liveEvents\n : delete e[name];\n\n return this;\n }\n};\n\nmodule.exports = E;\nmodule.exports.TinyEmitter = E;\n\n\n/***/ })\n\n/******/ \t});\n/************************************************************************/\n/******/ \t// The module cache\n/******/ \tvar __webpack_module_cache__ = {};\n/******/ \t\n/******/ \t// The require function\n/******/ \tfunction __webpack_require__(moduleId) {\n/******/ \t\t// Check if module is in cache\n/******/ \t\tif(__webpack_module_cache__[moduleId]) {\n/******/ \t\t\treturn __webpack_module_cache__[moduleId].exports;\n/******/ \t\t}\n/******/ \t\t// Create a new module (and put it into the cache)\n/******/ \t\tvar module = __webpack_module_cache__[moduleId] = {\n/******/ \t\t\t// no module.id needed\n/******/ \t\t\t// no module.loaded needed\n/******/ \t\t\texports: {}\n/******/ \t\t};\n/******/ \t\n/******/ \t\t// Execute the module function\n/******/ \t\t__webpack_modules__[moduleId](module, module.exports, __webpack_require__);\n/******/ \t\n/******/ \t\t// Return the exports of the module\n/******/ \t\treturn module.exports;\n/******/ \t}\n/******/ \t\n/************************************************************************/\n/******/ \t/* webpack/runtime/compat get default export */\n/******/ \t!function() {\n/******/ \t\t// getDefaultExport function for compatibility with non-harmony modules\n/******/ \t\t__webpack_require__.n = function(module) {\n/******/ \t\t\tvar getter = module && module.__esModule ?\n/******/ \t\t\t\tfunction() { return module['default']; } :\n/******/ \t\t\t\tfunction() { return module; };\n/******/ \t\t\t__webpack_require__.d(getter, { a: getter });\n/******/ \t\t\treturn getter;\n/******/ \t\t};\n/******/ \t}();\n/******/ \t\n/******/ \t/* webpack/runtime/define property getters */\n/******/ \t!function() {\n/******/ \t\t// define getter functions for harmony exports\n/******/ \t\t__webpack_require__.d = function(exports, definition) {\n/******/ \t\t\tfor(var key in definition) 
{\n/******/ \t\t\t\tif(__webpack_require__.o(definition, key) && !__webpack_require__.o(exports, key)) {\n/******/ \t\t\t\t\tObject.defineProperty(exports, key, { enumerable: true, get: definition[key] });\n/******/ \t\t\t\t}\n/******/ \t\t\t}\n/******/ \t\t};\n/******/ \t}();\n/******/ \t\n/******/ \t/* webpack/runtime/hasOwnProperty shorthand */\n/******/ \t!function() {\n/******/ \t\t__webpack_require__.o = function(obj, prop) { return Object.prototype.hasOwnProperty.call(obj, prop); }\n/******/ \t}();\n/******/ \t\n/************************************************************************/\n/******/ \t// module exports must be returned from runtime so entry inlining is disabled\n/******/ \t// startup\n/******/ \t// Load entry module and return exports\n/******/ \treturn __webpack_require__(686);\n/******/ })()\n.default;\n});", "/*!\n * escape-html\n * Copyright(c) 2012-2013 TJ Holowaychuk\n * Copyright(c) 2015 Andreas Lubbe\n * Copyright(c) 2015 Tiancheng \"Timothy\" Gu\n * MIT Licensed\n */\n\n'use strict';\n\n/**\n * Module variables.\n * @private\n */\n\nvar matchHtmlRegExp = /[\"'&<>]/;\n\n/**\n * Module exports.\n * @public\n */\n\nmodule.exports = escapeHtml;\n\n/**\n * Escape special characters in the given string of html.\n *\n * @param {string} string The string to escape for inserting into HTML\n * @return {string}\n * @public\n */\n\nfunction escapeHtml(string) {\n var str = '' + string;\n var match = matchHtmlRegExp.exec(str);\n\n if (!match) {\n return str;\n }\n\n var escape;\n var html = '';\n var index = 0;\n var lastIndex = 0;\n\n for (index = match.index; index < str.length; index++) {\n switch (str.charCodeAt(index)) {\n case 34: // \"\n escape = '"';\n break;\n case 38: // &\n escape = '&';\n break;\n case 39: // '\n escape = ''';\n break;\n case 60: // <\n escape = '<';\n break;\n case 62: // >\n escape = '>';\n break;\n default:\n continue;\n }\n\n if (lastIndex !== index) {\n html += str.substring(lastIndex, index);\n }\n\n lastIndex = 
index + 1;\n html += escape;\n }\n\n return lastIndex !== index\n ? html + str.substring(lastIndex, index)\n : html;\n}\n", "/*\n * Copyright (c) 2016-2023 Martin Donath \n *\n * Permission is hereby granted, free of charge, to any person obtaining a copy\n * of this software and associated documentation files (the \"Software\"), to\n * deal in the Software without restriction, including without limitation the\n * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or\n * sell copies of the Software, and to permit persons to whom the Software is\n * furnished to do so, subject to the following conditions:\n *\n * The above copyright notice and this permission notice shall be included in\n * all copies or substantial portions of the Software.\n *\n * THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n * FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT. IN NO EVENT SHALL THE\n * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING\n * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS\n * IN THE SOFTWARE.\n */\n\nimport \"focus-visible\"\n\nimport {\n EMPTY,\n NEVER,\n Observable,\n Subject,\n defer,\n delay,\n filter,\n map,\n merge,\n mergeWith,\n shareReplay,\n switchMap\n} from \"rxjs\"\n\nimport { configuration, feature } from \"./_\"\nimport {\n at,\n getActiveElement,\n getOptionalElement,\n requestJSON,\n setLocation,\n setToggle,\n watchDocument,\n watchKeyboard,\n watchLocation,\n watchLocationTarget,\n watchMedia,\n watchPrint,\n watchScript,\n watchViewport\n} from \"./browser\"\nimport {\n getComponentElement,\n getComponentElements,\n mountAnnounce,\n mountBackToTop,\n mountConsent,\n mountContent,\n mountDialog,\n mountHeader,\n mountHeaderTitle,\n mountPalette,\n mountProgress,\n mountSearch,\n 
mountSearchHiglight,\n mountSidebar,\n mountSource,\n mountTableOfContents,\n mountTabs,\n watchHeader,\n watchMain\n} from \"./components\"\nimport {\n SearchIndex,\n setupClipboardJS,\n setupInstantNavigation,\n setupVersionSelector\n} from \"./integrations\"\nimport {\n patchIndeterminate,\n patchScrollfix,\n patchScrolllock\n} from \"./patches\"\nimport \"./polyfills\"\n\n/* ----------------------------------------------------------------------------\n * Functions - @todo refactor\n * ------------------------------------------------------------------------- */\n\n/**\n * Fetch search index\n *\n * @returns Search index observable\n */\nfunction fetchSearchIndex(): Observable {\n if (location.protocol === \"file:\") {\n return watchScript(\n `${new URL(\"search/search_index.js\", config.base)}`\n )\n .pipe(\n // @ts-ignore - @todo fix typings\n map(() => __index),\n shareReplay(1)\n )\n } else {\n return requestJSON(\n new URL(\"search/search_index.json\", config.base)\n )\n }\n}\n\n/* ----------------------------------------------------------------------------\n * Application\n * ------------------------------------------------------------------------- */\n\n/* Yay, JavaScript is available */\ndocument.documentElement.classList.remove(\"no-js\")\ndocument.documentElement.classList.add(\"js\")\n\n/* Set up navigation observables and subjects */\nconst document$ = watchDocument()\nconst location$ = watchLocation()\nconst target$ = watchLocationTarget(location$)\nconst keyboard$ = watchKeyboard()\n\n/* Set up media observables */\nconst viewport$ = watchViewport()\nconst tablet$ = watchMedia(\"(min-width: 960px)\")\nconst screen$ = watchMedia(\"(min-width: 1220px)\")\nconst print$ = watchPrint()\n\n/* Retrieve search index, if search is enabled */\nconst config = configuration()\nconst index$ = document.forms.namedItem(\"search\")\n ? 
fetchSearchIndex()\n : NEVER\n\n/* Set up Clipboard.js integration */\nconst alert$ = new Subject()\nsetupClipboardJS({ alert$ })\n\n/* Set up progress indicator */\nconst progress$ = new Subject()\n\n/* Set up instant navigation, if enabled */\nif (feature(\"navigation.instant\"))\n setupInstantNavigation({ location$, viewport$, progress$ })\n .subscribe(document$)\n\n/* Set up version selector */\nif (config.version?.provider === \"mike\")\n setupVersionSelector({ document$ })\n\n/* Always close drawer and search on navigation */\nmerge(location$, target$)\n .pipe(\n delay(125)\n )\n .subscribe(() => {\n setToggle(\"drawer\", false)\n setToggle(\"search\", false)\n })\n\n/* Set up global keyboard handlers */\nkeyboard$\n .pipe(\n filter(({ mode }) => mode === \"global\")\n )\n .subscribe(key => {\n switch (key.type) {\n\n /* Go to previous page */\n case \"p\":\n case \",\":\n const prev = getOptionalElement(\"link[rel=prev]\")\n if (typeof prev !== \"undefined\")\n setLocation(prev)\n break\n\n /* Go to next page */\n case \"n\":\n case \".\":\n const next = getOptionalElement(\"link[rel=next]\")\n if (typeof next !== \"undefined\")\n setLocation(next)\n break\n\n /* Expand navigation, see https://bit.ly/3ZjG5io */\n case \"Enter\":\n const active = getActiveElement()\n if (active instanceof HTMLLabelElement)\n active.click()\n }\n })\n\n/* Set up patches */\npatchIndeterminate({ document$, tablet$ })\npatchScrollfix({ document$ })\npatchScrolllock({ viewport$, tablet$ })\n\n/* Set up header and main area observable */\nconst header$ = watchHeader(getComponentElement(\"header\"), { viewport$ })\nconst main$ = document$\n .pipe(\n map(() => getComponentElement(\"main\")),\n switchMap(el => watchMain(el, { viewport$, header$ })),\n shareReplay(1)\n )\n\n/* Set up control component observables */\nconst control$ = merge(\n\n /* Consent */\n ...getComponentElements(\"consent\")\n .map(el => mountConsent(el, { target$ })),\n\n /* Dialog */\n 
...getComponentElements(\"dialog\")\n .map(el => mountDialog(el, { alert$ })),\n\n /* Header */\n ...getComponentElements(\"header\")\n .map(el => mountHeader(el, { viewport$, header$, main$ })),\n\n /* Color palette */\n ...getComponentElements(\"palette\")\n .map(el => mountPalette(el)),\n\n /* Progress bar */\n ...getComponentElements(\"progress\")\n .map(el => mountProgress(el, { progress$ })),\n\n /* Search */\n ...getComponentElements(\"search\")\n .map(el => mountSearch(el, { index$, keyboard$ })),\n\n /* Repository information */\n ...getComponentElements(\"source\")\n .map(el => mountSource(el))\n)\n\n/* Set up content component observables */\nconst content$ = defer(() => merge(\n\n /* Announcement bar */\n ...getComponentElements(\"announce\")\n .map(el => mountAnnounce(el)),\n\n /* Content */\n ...getComponentElements(\"content\")\n .map(el => mountContent(el, { viewport$, target$, print$ })),\n\n /* Search highlighting */\n ...getComponentElements(\"content\")\n .map(el => feature(\"search.highlight\")\n ? mountSearchHiglight(el, { index$, location$ })\n : EMPTY\n ),\n\n /* Header title */\n ...getComponentElements(\"header-title\")\n .map(el => mountHeaderTitle(el, { viewport$, header$ })),\n\n /* Sidebar */\n ...getComponentElements(\"sidebar\")\n .map(el => el.getAttribute(\"data-md-type\") === \"navigation\"\n ? 
at(screen$, () => mountSidebar(el, { viewport$, header$, main$ }))\n : at(tablet$, () => mountSidebar(el, { viewport$, header$, main$ }))\n ),\n\n /* Navigation tabs */\n ...getComponentElements(\"tabs\")\n .map(el => mountTabs(el, { viewport$, header$ })),\n\n /* Table of contents */\n ...getComponentElements(\"toc\")\n .map(el => mountTableOfContents(el, {\n viewport$, header$, main$, target$\n })),\n\n /* Back-to-top button */\n ...getComponentElements(\"top\")\n .map(el => mountBackToTop(el, { viewport$, header$, main$, target$ }))\n))\n\n/* Set up component observables */\nconst component$ = document$\n .pipe(\n switchMap(() => content$),\n mergeWith(control$),\n shareReplay(1)\n )\n\n/* Subscribe to all components */\ncomponent$.subscribe()\n\n/* ----------------------------------------------------------------------------\n * Exports\n * ------------------------------------------------------------------------- */\n\nwindow.document$ = document$ /* Document observable */\nwindow.location$ = location$ /* Location subject */\nwindow.target$ = target$ /* Location target observable */\nwindow.keyboard$ = keyboard$ /* Keyboard observable */\nwindow.viewport$ = viewport$ /* Viewport observable */\nwindow.tablet$ = tablet$ /* Media tablet observable */\nwindow.screen$ = screen$ /* Media screen observable */\nwindow.print$ = print$ /* Media print observable */\nwindow.alert$ = alert$ /* Alert subject */\nwindow.progress$ = progress$ /* Progress indicator subject */\nwindow.component$ = component$ /* Component observable */\n", "/*! *****************************************************************************\r\nCopyright (c) Microsoft Corporation.\r\n\r\nPermission to use, copy, modify, and/or distribute this software for any\r\npurpose with or without fee is hereby granted.\r\n\r\nTHE SOFTWARE IS PROVIDED \"AS IS\" AND THE AUTHOR DISCLAIMS ALL WARRANTIES WITH\r\nREGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF MERCHANTABILITY\r\nAND FITNESS. 
IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY SPECIAL, DIRECT,\r\nINDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES WHATSOEVER RESULTING FROM\r\nLOSS OF USE, DATA OR PROFITS, WHETHER IN AN ACTION OF CONTRACT, NEGLIGENCE OR\r\nOTHER TORTIOUS ACTION, ARISING OUT OF OR IN CONNECTION WITH THE USE OR\r\nPERFORMANCE OF THIS SOFTWARE.\r\n***************************************************************************** */\r\n/* global Reflect, Promise */\r\n\r\nvar extendStatics = function(d, b) {\r\n extendStatics = Object.setPrototypeOf ||\r\n ({ __proto__: [] } instanceof Array && function (d, b) { d.__proto__ = b; }) ||\r\n function (d, b) { for (var p in b) if (Object.prototype.hasOwnProperty.call(b, p)) d[p] = b[p]; };\r\n return extendStatics(d, b);\r\n};\r\n\r\nexport function __extends(d, b) {\r\n if (typeof b !== \"function\" && b !== null)\r\n throw new TypeError(\"Class extends value \" + String(b) + \" is not a constructor or null\");\r\n extendStatics(d, b);\r\n function __() { this.constructor = d; }\r\n d.prototype = b === null ? 
Object.create(b) : (__.prototype = b.prototype, new __());\r\n}\r\n\r\nexport var __assign = function() {\r\n __assign = Object.assign || function __assign(t) {\r\n for (var s, i = 1, n = arguments.length; i < n; i++) {\r\n s = arguments[i];\r\n for (var p in s) if (Object.prototype.hasOwnProperty.call(s, p)) t[p] = s[p];\r\n }\r\n return t;\r\n }\r\n return __assign.apply(this, arguments);\r\n}\r\n\r\nexport function __rest(s, e) {\r\n var t = {};\r\n for (var p in s) if (Object.prototype.hasOwnProperty.call(s, p) && e.indexOf(p) < 0)\r\n t[p] = s[p];\r\n if (s != null && typeof Object.getOwnPropertySymbols === \"function\")\r\n for (var i = 0, p = Object.getOwnPropertySymbols(s); i < p.length; i++) {\r\n if (e.indexOf(p[i]) < 0 && Object.prototype.propertyIsEnumerable.call(s, p[i]))\r\n t[p[i]] = s[p[i]];\r\n }\r\n return t;\r\n}\r\n\r\nexport function __decorate(decorators, target, key, desc) {\r\n var c = arguments.length, r = c < 3 ? target : desc === null ? desc = Object.getOwnPropertyDescriptor(target, key) : desc, d;\r\n if (typeof Reflect === \"object\" && typeof Reflect.decorate === \"function\") r = Reflect.decorate(decorators, target, key, desc);\r\n else for (var i = decorators.length - 1; i >= 0; i--) if (d = decorators[i]) r = (c < 3 ? d(r) : c > 3 ? d(target, key, r) : d(target, key)) || r;\r\n return c > 3 && r && Object.defineProperty(target, key, r), r;\r\n}\r\n\r\nexport function __param(paramIndex, decorator) {\r\n return function (target, key) { decorator(target, key, paramIndex); }\r\n}\r\n\r\nexport function __metadata(metadataKey, metadataValue) {\r\n if (typeof Reflect === \"object\" && typeof Reflect.metadata === \"function\") return Reflect.metadata(metadataKey, metadataValue);\r\n}\r\n\r\nexport function __awaiter(thisArg, _arguments, P, generator) {\r\n function adopt(value) { return value instanceof P ? 
value : new P(function (resolve) { resolve(value); }); }\r\n return new (P || (P = Promise))(function (resolve, reject) {\r\n function fulfilled(value) { try { step(generator.next(value)); } catch (e) { reject(e); } }\r\n function rejected(value) { try { step(generator[\"throw\"](value)); } catch (e) { reject(e); } }\r\n function step(result) { result.done ? resolve(result.value) : adopt(result.value).then(fulfilled, rejected); }\r\n step((generator = generator.apply(thisArg, _arguments || [])).next());\r\n });\r\n}\r\n\r\nexport function __generator(thisArg, body) {\r\n var _ = { label: 0, sent: function() { if (t[0] & 1) throw t[1]; return t[1]; }, trys: [], ops: [] }, f, y, t, g;\r\n return g = { next: verb(0), \"throw\": verb(1), \"return\": verb(2) }, typeof Symbol === \"function\" && (g[Symbol.iterator] = function() { return this; }), g;\r\n function verb(n) { return function (v) { return step([n, v]); }; }\r\n function step(op) {\r\n if (f) throw new TypeError(\"Generator is already executing.\");\r\n while (_) try {\r\n if (f = 1, y && (t = op[0] & 2 ? y[\"return\"] : op[0] ? 
y[\"throw\"] || ((t = y[\"return\"]) && t.call(y), 0) : y.next) && !(t = t.call(y, op[1])).done) return t;\r\n if (y = 0, t) op = [op[0] & 2, t.value];\r\n switch (op[0]) {\r\n case 0: case 1: t = op; break;\r\n case 4: _.label++; return { value: op[1], done: false };\r\n case 5: _.label++; y = op[1]; op = [0]; continue;\r\n case 7: op = _.ops.pop(); _.trys.pop(); continue;\r\n default:\r\n if (!(t = _.trys, t = t.length > 0 && t[t.length - 1]) && (op[0] === 6 || op[0] === 2)) { _ = 0; continue; }\r\n if (op[0] === 3 && (!t || (op[1] > t[0] && op[1] < t[3]))) { _.label = op[1]; break; }\r\n if (op[0] === 6 && _.label < t[1]) { _.label = t[1]; t = op; break; }\r\n if (t && _.label < t[2]) { _.label = t[2]; _.ops.push(op); break; }\r\n if (t[2]) _.ops.pop();\r\n _.trys.pop(); continue;\r\n }\r\n op = body.call(thisArg, _);\r\n } catch (e) { op = [6, e]; y = 0; } finally { f = t = 0; }\r\n if (op[0] & 5) throw op[1]; return { value: op[0] ? op[1] : void 0, done: true };\r\n }\r\n}\r\n\r\nexport var __createBinding = Object.create ? (function(o, m, k, k2) {\r\n if (k2 === undefined) k2 = k;\r\n Object.defineProperty(o, k2, { enumerable: true, get: function() { return m[k]; } });\r\n}) : (function(o, m, k, k2) {\r\n if (k2 === undefined) k2 = k;\r\n o[k2] = m[k];\r\n});\r\n\r\nexport function __exportStar(m, o) {\r\n for (var p in m) if (p !== \"default\" && !Object.prototype.hasOwnProperty.call(o, p)) __createBinding(o, m, p);\r\n}\r\n\r\nexport function __values(o) {\r\n var s = typeof Symbol === \"function\" && Symbol.iterator, m = s && o[s], i = 0;\r\n if (m) return m.call(o);\r\n if (o && typeof o.length === \"number\") return {\r\n next: function () {\r\n if (o && i >= o.length) o = void 0;\r\n return { value: o && o[i++], done: !o };\r\n }\r\n };\r\n throw new TypeError(s ? 
\"Object is not iterable.\" : \"Symbol.iterator is not defined.\");\r\n}\r\n\r\nexport function __read(o, n) {\r\n var m = typeof Symbol === \"function\" && o[Symbol.iterator];\r\n if (!m) return o;\r\n var i = m.call(o), r, ar = [], e;\r\n try {\r\n while ((n === void 0 || n-- > 0) && !(r = i.next()).done) ar.push(r.value);\r\n }\r\n catch (error) { e = { error: error }; }\r\n finally {\r\n try {\r\n if (r && !r.done && (m = i[\"return\"])) m.call(i);\r\n }\r\n finally { if (e) throw e.error; }\r\n }\r\n return ar;\r\n}\r\n\r\n/** @deprecated */\r\nexport function __spread() {\r\n for (var ar = [], i = 0; i < arguments.length; i++)\r\n ar = ar.concat(__read(arguments[i]));\r\n return ar;\r\n}\r\n\r\n/** @deprecated */\r\nexport function __spreadArrays() {\r\n for (var s = 0, i = 0, il = arguments.length; i < il; i++) s += arguments[i].length;\r\n for (var r = Array(s), k = 0, i = 0; i < il; i++)\r\n for (var a = arguments[i], j = 0, jl = a.length; j < jl; j++, k++)\r\n r[k] = a[j];\r\n return r;\r\n}\r\n\r\nexport function __spreadArray(to, from, pack) {\r\n if (pack || arguments.length === 2) for (var i = 0, l = from.length, ar; i < l; i++) {\r\n if (ar || !(i in from)) {\r\n if (!ar) ar = Array.prototype.slice.call(from, 0, i);\r\n ar[i] = from[i];\r\n }\r\n }\r\n return to.concat(ar || Array.prototype.slice.call(from));\r\n}\r\n\r\nexport function __await(v) {\r\n return this instanceof __await ? 
(this.v = v, this) : new __await(v);\r\n}\r\n\r\nexport function __asyncGenerator(thisArg, _arguments, generator) {\r\n if (!Symbol.asyncIterator) throw new TypeError(\"Symbol.asyncIterator is not defined.\");\r\n var g = generator.apply(thisArg, _arguments || []), i, q = [];\r\n return i = {}, verb(\"next\"), verb(\"throw\"), verb(\"return\"), i[Symbol.asyncIterator] = function () { return this; }, i;\r\n function verb(n) { if (g[n]) i[n] = function (v) { return new Promise(function (a, b) { q.push([n, v, a, b]) > 1 || resume(n, v); }); }; }\r\n function resume(n, v) { try { step(g[n](v)); } catch (e) { settle(q[0][3], e); } }\r\n function step(r) { r.value instanceof __await ? Promise.resolve(r.value.v).then(fulfill, reject) : settle(q[0][2], r); }\r\n function fulfill(value) { resume(\"next\", value); }\r\n function reject(value) { resume(\"throw\", value); }\r\n function settle(f, v) { if (f(v), q.shift(), q.length) resume(q[0][0], q[0][1]); }\r\n}\r\n\r\nexport function __asyncDelegator(o) {\r\n var i, p;\r\n return i = {}, verb(\"next\"), verb(\"throw\", function (e) { throw e; }), verb(\"return\"), i[Symbol.iterator] = function () { return this; }, i;\r\n function verb(n, f) { i[n] = o[n] ? function (v) { return (p = !p) ? { value: __await(o[n](v)), done: n === \"return\" } : f ? f(v) : v; } : f; }\r\n}\r\n\r\nexport function __asyncValues(o) {\r\n if (!Symbol.asyncIterator) throw new TypeError(\"Symbol.asyncIterator is not defined.\");\r\n var m = o[Symbol.asyncIterator], i;\r\n return m ? m.call(o) : (o = typeof __values === \"function\" ? 
__values(o) : o[Symbol.iterator](), i = {}, verb(\"next\"), verb(\"throw\"), verb(\"return\"), i[Symbol.asyncIterator] = function () { return this; }, i);\r\n function verb(n) { i[n] = o[n] && function (v) { return new Promise(function (resolve, reject) { v = o[n](v), settle(resolve, reject, v.done, v.value); }); }; }\r\n function settle(resolve, reject, d, v) { Promise.resolve(v).then(function(v) { resolve({ value: v, done: d }); }, reject); }\r\n}\r\n\r\nexport function __makeTemplateObject(cooked, raw) {\r\n if (Object.defineProperty) { Object.defineProperty(cooked, \"raw\", { value: raw }); } else { cooked.raw = raw; }\r\n return cooked;\r\n};\r\n\r\nvar __setModuleDefault = Object.create ? (function(o, v) {\r\n Object.defineProperty(o, \"default\", { enumerable: true, value: v });\r\n}) : function(o, v) {\r\n o[\"default\"] = v;\r\n};\r\n\r\nexport function __importStar(mod) {\r\n if (mod && mod.__esModule) return mod;\r\n var result = {};\r\n if (mod != null) for (var k in mod) if (k !== \"default\" && Object.prototype.hasOwnProperty.call(mod, k)) __createBinding(result, mod, k);\r\n __setModuleDefault(result, mod);\r\n return result;\r\n}\r\n\r\nexport function __importDefault(mod) {\r\n return (mod && mod.__esModule) ? mod : { default: mod };\r\n}\r\n\r\nexport function __classPrivateFieldGet(receiver, state, kind, f) {\r\n if (kind === \"a\" && !f) throw new TypeError(\"Private accessor was defined without a getter\");\r\n if (typeof state === \"function\" ? receiver !== state || !f : !state.has(receiver)) throw new TypeError(\"Cannot read private member from an object whose class did not declare it\");\r\n return kind === \"m\" ? f : kind === \"a\" ? f.call(receiver) : f ? 
f.value : state.get(receiver);\r\n}\r\n\r\nexport function __classPrivateFieldSet(receiver, state, value, kind, f) {\r\n if (kind === \"m\") throw new TypeError(\"Private method is not writable\");\r\n if (kind === \"a\" && !f) throw new TypeError(\"Private accessor was defined without a setter\");\r\n if (typeof state === \"function\" ? receiver !== state || !f : !state.has(receiver)) throw new TypeError(\"Cannot write private member to an object whose class did not declare it\");\r\n return (kind === \"a\" ? f.call(receiver, value) : f ? f.value = value : state.set(receiver, value)), value;\r\n}\r\n", "/**\n * Returns true if the object is a function.\n * @param value The value to check\n */\nexport function isFunction(value: any): value is (...args: any[]) => any {\n return typeof value === 'function';\n}\n", "/**\n * Used to create Error subclasses until the community moves away from ES5.\n *\n * This is because compiling from TypeScript down to ES5 has issues with subclassing Errors\n * as well as other built-in types: https://github.com/Microsoft/TypeScript/issues/12123\n *\n * @param createImpl A factory function to create the actual constructor implementation. The returned\n * function should be a named function that calls `_super` internally.\n */\nexport function createErrorClass(createImpl: (_super: any) => any): T {\n const _super = (instance: any) => {\n Error.call(instance);\n instance.stack = new Error().stack;\n };\n\n const ctorFunc = createImpl(_super);\n ctorFunc.prototype = Object.create(Error.prototype);\n ctorFunc.prototype.constructor = ctorFunc;\n return ctorFunc;\n}\n", "import { createErrorClass } from './createErrorClass';\n\nexport interface UnsubscriptionError extends Error {\n readonly errors: any[];\n}\n\nexport interface UnsubscriptionErrorCtor {\n /**\n * @deprecated Internal implementation detail. 
Do not construct error instances.\n * Cannot be tagged as internal: https://github.com/ReactiveX/rxjs/issues/6269\n */\n new (errors: any[]): UnsubscriptionError;\n}\n\n/**\n * An error thrown when one or more errors have occurred during the\n * `unsubscribe` of a {@link Subscription}.\n */\nexport const UnsubscriptionError: UnsubscriptionErrorCtor = createErrorClass(\n (_super) =>\n function UnsubscriptionErrorImpl(this: any, errors: (Error | string)[]) {\n _super(this);\n this.message = errors\n ? `${errors.length} errors occurred during unsubscription:\n${errors.map((err, i) => `${i + 1}) ${err.toString()}`).join('\\n ')}`\n : '';\n this.name = 'UnsubscriptionError';\n this.errors = errors;\n }\n);\n", "/**\n * Removes an item from an array, mutating it.\n * @param arr The array to remove the item from\n * @param item The item to remove\n */\nexport function arrRemove(arr: T[] | undefined | null, item: T) {\n if (arr) {\n const index = arr.indexOf(item);\n 0 <= index && arr.splice(index, 1);\n }\n}\n", "import { isFunction } from './util/isFunction';\nimport { UnsubscriptionError } from './util/UnsubscriptionError';\nimport { SubscriptionLike, TeardownLogic, Unsubscribable } from './types';\nimport { arrRemove } from './util/arrRemove';\n\n/**\n * Represents a disposable resource, such as the execution of an Observable. 
A\n * Subscription has one important method, `unsubscribe`, that takes no argument\n * and just disposes the resource held by the subscription.\n *\n * Additionally, subscriptions may be grouped together through the `add()`\n * method, which will attach a child Subscription to the current Subscription.\n * When a Subscription is unsubscribed, all its children (and its grandchildren)\n * will be unsubscribed as well.\n *\n * @class Subscription\n */\nexport class Subscription implements SubscriptionLike {\n /** @nocollapse */\n public static EMPTY = (() => {\n const empty = new Subscription();\n empty.closed = true;\n return empty;\n })();\n\n /**\n * A flag to indicate whether this Subscription has already been unsubscribed.\n */\n public closed = false;\n\n private _parentage: Subscription[] | Subscription | null = null;\n\n /**\n * The list of registered finalizers to execute upon unsubscription. Adding and removing from this\n * list occurs in the {@link #add} and {@link #remove} methods.\n */\n private _finalizers: Exclude[] | null = null;\n\n /**\n * @param initialTeardown A function executed first as part of the finalization\n * process that is kicked off when {@link #unsubscribe} is called.\n */\n constructor(private initialTeardown?: () => void) {}\n\n /**\n * Disposes the resources held by the subscription. 
May, for instance, cancel\n * an ongoing Observable execution or cancel any other type of work that\n * started when the Subscription was created.\n * @return {void}\n */\n unsubscribe(): void {\n let errors: any[] | undefined;\n\n if (!this.closed) {\n this.closed = true;\n\n // Remove this from it's parents.\n const { _parentage } = this;\n if (_parentage) {\n this._parentage = null;\n if (Array.isArray(_parentage)) {\n for (const parent of _parentage) {\n parent.remove(this);\n }\n } else {\n _parentage.remove(this);\n }\n }\n\n const { initialTeardown: initialFinalizer } = this;\n if (isFunction(initialFinalizer)) {\n try {\n initialFinalizer();\n } catch (e) {\n errors = e instanceof UnsubscriptionError ? e.errors : [e];\n }\n }\n\n const { _finalizers } = this;\n if (_finalizers) {\n this._finalizers = null;\n for (const finalizer of _finalizers) {\n try {\n execFinalizer(finalizer);\n } catch (err) {\n errors = errors ?? [];\n if (err instanceof UnsubscriptionError) {\n errors = [...errors, ...err.errors];\n } else {\n errors.push(err);\n }\n }\n }\n }\n\n if (errors) {\n throw new UnsubscriptionError(errors);\n }\n }\n }\n\n /**\n * Adds a finalizer to this subscription, so that finalization will be unsubscribed/called\n * when this subscription is unsubscribed. If this subscription is already {@link #closed},\n * because it has already been unsubscribed, then whatever finalizer is passed to it\n * will automatically be executed (unless the finalizer itself is also a closed subscription).\n *\n * Closed Subscriptions cannot be added as finalizers to any subscription. Adding a closed\n * subscription to a any subscription will result in no operation. (A noop).\n *\n * Adding a subscription to itself, or adding `null` or `undefined` will not perform any\n * operation at all. (A noop).\n *\n * `Subscription` instances that are added to this instance will automatically remove themselves\n * if they are unsubscribed. 
Functions and {@link Unsubscribable} objects that you wish to remove\n * will need to be removed manually with {@link #remove}\n *\n * @param teardown The finalization logic to add to this subscription.\n */\n add(teardown: TeardownLogic): void {\n // Only add the finalizer if it's not undefined\n // and don't add a subscription to itself.\n if (teardown && teardown !== this) {\n if (this.closed) {\n // If this subscription is already closed,\n // execute whatever finalizer is handed to it automatically.\n execFinalizer(teardown);\n } else {\n if (teardown instanceof Subscription) {\n // We don't add closed subscriptions, and we don't add the same subscription\n // twice. Subscription unsubscribe is idempotent.\n if (teardown.closed || teardown._hasParent(this)) {\n return;\n }\n teardown._addParent(this);\n }\n (this._finalizers = this._finalizers ?? []).push(teardown);\n }\n }\n }\n\n /**\n * Checks to see if a this subscription already has a particular parent.\n * This will signal that this subscription has already been added to the parent in question.\n * @param parent the parent to check for\n */\n private _hasParent(parent: Subscription) {\n const { _parentage } = this;\n return _parentage === parent || (Array.isArray(_parentage) && _parentage.includes(parent));\n }\n\n /**\n * Adds a parent to this subscription so it can be removed from the parent if it\n * unsubscribes on it's own.\n *\n * NOTE: THIS ASSUMES THAT {@link _hasParent} HAS ALREADY BEEN CHECKED.\n * @param parent The parent subscription to add\n */\n private _addParent(parent: Subscription) {\n const { _parentage } = this;\n this._parentage = Array.isArray(_parentage) ? (_parentage.push(parent), _parentage) : _parentage ? 
[_parentage, parent] : parent;\n }\n\n /**\n * Called on a child when it is removed via {@link #remove}.\n * @param parent The parent to remove\n */\n private _removeParent(parent: Subscription) {\n const { _parentage } = this;\n if (_parentage === parent) {\n this._parentage = null;\n } else if (Array.isArray(_parentage)) {\n arrRemove(_parentage, parent);\n }\n }\n\n /**\n * Removes a finalizer from this subscription that was previously added with the {@link #add} method.\n *\n * Note that `Subscription` instances, when unsubscribed, will automatically remove themselves\n * from every other `Subscription` they have been added to. This means that using the `remove` method\n * is not a common thing and should be used thoughtfully.\n *\n * If you add the same finalizer instance of a function or an unsubscribable object to a `Subscription` instance\n * more than once, you will need to call `remove` the same number of times to remove all instances.\n *\n * All finalizer instances are removed to free up memory upon unsubscription.\n *\n * @param teardown The finalizer to remove from this subscription\n */\n remove(teardown: Exclude): void {\n const { _finalizers } = this;\n _finalizers && arrRemove(_finalizers, teardown);\n\n if (teardown instanceof Subscription) {\n teardown._removeParent(this);\n }\n }\n}\n\nexport const EMPTY_SUBSCRIPTION = Subscription.EMPTY;\n\nexport function isSubscription(value: any): value is Subscription {\n return (\n value instanceof Subscription ||\n (value && 'closed' in value && isFunction(value.remove) && isFunction(value.add) && isFunction(value.unsubscribe))\n );\n}\n\nfunction execFinalizer(finalizer: Unsubscribable | (() => void)) {\n if (isFunction(finalizer)) {\n finalizer();\n } else {\n finalizer.unsubscribe();\n }\n}\n", "import { Subscriber } from './Subscriber';\nimport { ObservableNotification } from './types';\n\n/**\n * The {@link GlobalConfig} object for RxJS. 
It is used to configure things\n * like how to react on unhandled errors.\n */\nexport const config: GlobalConfig = {\n onUnhandledError: null,\n onStoppedNotification: null,\n Promise: undefined,\n useDeprecatedSynchronousErrorHandling: false,\n useDeprecatedNextContext: false,\n};\n\n/**\n * The global configuration object for RxJS, used to configure things\n * like how to react on unhandled errors. Accessible via {@link config}\n * object.\n */\nexport interface GlobalConfig {\n /**\n * A registration point for unhandled errors from RxJS. These are errors that\n * cannot were not handled by consuming code in the usual subscription path. For\n * example, if you have this configured, and you subscribe to an observable without\n * providing an error handler, errors from that subscription will end up here. This\n * will _always_ be called asynchronously on another job in the runtime. This is because\n * we do not want errors thrown in this user-configured handler to interfere with the\n * behavior of the library.\n */\n onUnhandledError: ((err: any) => void) | null;\n\n /**\n * A registration point for notifications that cannot be sent to subscribers because they\n * have completed, errored or have been explicitly unsubscribed. By default, next, complete\n * and error notifications sent to stopped subscribers are noops. However, sometimes callers\n * might want a different behavior. For example, with sources that attempt to report errors\n * to stopped subscribers, a caller can configure RxJS to throw an unhandled error instead.\n * This will _always_ be called asynchronously on another job in the runtime. 
This is because\n * we do not want errors thrown in this user-configured handler to interfere with the\n * behavior of the library.\n */\n onStoppedNotification: ((notification: ObservableNotification, subscriber: Subscriber) => void) | null;\n\n /**\n * The promise constructor used by default for {@link Observable#toPromise toPromise} and {@link Observable#forEach forEach}\n * methods.\n *\n * @deprecated As of version 8, RxJS will no longer support this sort of injection of a\n * Promise constructor. If you need a Promise implementation other than native promises,\n * please polyfill/patch Promise as you see appropriate. Will be removed in v8.\n */\n Promise?: PromiseConstructorLike;\n\n /**\n * If true, turns on synchronous error rethrowing, which is a deprecated behavior\n * in v6 and higher. This behavior enables bad patterns like wrapping a subscribe\n * call in a try/catch block. It also enables producer interference, a nasty bug\n * where a multicast can be broken for all observers by a downstream consumer with\n * an unhandled error. DO NOT USE THIS FLAG UNLESS IT'S NEEDED TO BUY TIME\n * FOR MIGRATION REASONS.\n *\n * @deprecated As of version 8, RxJS will no longer support synchronous throwing\n * of unhandled errors. All errors will be thrown on a separate call stack to prevent bad\n * behaviors described above. 
Will be removed in v8.\n */\n useDeprecatedSynchronousErrorHandling: boolean;\n\n /**\n * If true, enables an as-of-yet undocumented feature from v5: The ability to access\n * `unsubscribe()` via `this` context in `next` functions created in observers passed\n * to `subscribe`.\n *\n * This is being removed because the performance was severely problematic, and it could also cause\n * issues when types other than POJOs are passed to subscribe as subscribers, as they will likely have\n * their `this` context overwritten.\n *\n * @deprecated As of version 8, RxJS will no longer support altering the\n * context of next functions provided as part of an observer to Subscribe. Instead,\n * you will have access to a subscription or a signal or token that will allow you to do things like\n * unsubscribe and test closed status. Will be removed in v8.\n */\n useDeprecatedNextContext: boolean;\n}\n", "import type { TimerHandle } from './timerHandle';\ntype SetTimeoutFunction = (handler: () => void, timeout?: number, ...args: any[]) => TimerHandle;\ntype ClearTimeoutFunction = (handle: TimerHandle) => void;\n\ninterface TimeoutProvider {\n setTimeout: SetTimeoutFunction;\n clearTimeout: ClearTimeoutFunction;\n delegate:\n | {\n setTimeout: SetTimeoutFunction;\n clearTimeout: ClearTimeoutFunction;\n }\n | undefined;\n}\n\nexport const timeoutProvider: TimeoutProvider = {\n // When accessing the delegate, use the variable rather than `this` so that\n // the functions can be called without being bound to the provider.\n setTimeout(handler: () => void, timeout?: number, ...args) {\n const { delegate } = timeoutProvider;\n if (delegate?.setTimeout) {\n return delegate.setTimeout(handler, timeout, ...args);\n }\n return setTimeout(handler, timeout, ...args);\n },\n clearTimeout(handle) {\n const { delegate } = timeoutProvider;\n return (delegate?.clearTimeout || clearTimeout)(handle as any);\n },\n delegate: undefined,\n};\n", "import { config } from '../config';\nimport { 
timeoutProvider } from '../scheduler/timeoutProvider';\n\n/**\n * Handles an error on another job either with the user-configured {@link onUnhandledError},\n * or by throwing it on that new job so it can be picked up by `window.onerror`, `process.on('error')`, etc.\n *\n * This should be called whenever there is an error that is out-of-band with the subscription\n * or when an error hits a terminal boundary of the subscription and no error handler was provided.\n *\n * @param err the error to report\n */\nexport function reportUnhandledError(err: any) {\n timeoutProvider.setTimeout(() => {\n const { onUnhandledError } = config;\n if (onUnhandledError) {\n // Execute the user-configured error handler.\n onUnhandledError(err);\n } else {\n // Throw so it is picked up by the runtime's uncaught error mechanism.\n throw err;\n }\n });\n}\n", "/* tslint:disable:no-empty */\nexport function noop() { }\n", "import { CompleteNotification, NextNotification, ErrorNotification } from './types';\n\n/**\n * A completion object optimized for memory use and created to be the\n * same \"shape\" as other notifications in v8.\n * @internal\n */\nexport const COMPLETE_NOTIFICATION = (() => createNotification('C', undefined, undefined) as CompleteNotification)();\n\n/**\n * Internal use only. Creates an optimized error notification that is the same \"shape\"\n * as other notifications.\n * @internal\n */\nexport function errorNotification(error: any): ErrorNotification {\n return createNotification('E', undefined, error) as any;\n}\n\n/**\n * Internal use only. 
Creates an optimized next notification that is the same \"shape\"\n * as other notifications.\n * @internal\n */\nexport function nextNotification(value: T) {\n return createNotification('N', value, undefined) as NextNotification;\n}\n\n/**\n * Ensures that all notifications created internally have the same \"shape\" in v8.\n *\n * TODO: This is only exported to support a crazy legacy test in `groupBy`.\n * @internal\n */\nexport function createNotification(kind: 'N' | 'E' | 'C', value: any, error: any) {\n return {\n kind,\n value,\n error,\n };\n}\n", "import { config } from '../config';\n\nlet context: { errorThrown: boolean; error: any } | null = null;\n\n/**\n * Handles dealing with errors for super-gross mode. Creates a context, in which\n * any synchronously thrown errors will be passed to {@link captureError}. Which\n * will record the error such that it will be rethrown after the call back is complete.\n * TODO: Remove in v8\n * @param cb An immediately executed function.\n */\nexport function errorContext(cb: () => void) {\n if (config.useDeprecatedSynchronousErrorHandling) {\n const isRoot = !context;\n if (isRoot) {\n context = { errorThrown: false, error: null };\n }\n cb();\n if (isRoot) {\n const { errorThrown, error } = context!;\n context = null;\n if (errorThrown) {\n throw error;\n }\n }\n } else {\n // This is the general non-deprecated path for everyone that\n // isn't crazy enough to use super-gross mode (useDeprecatedSynchronousErrorHandling)\n cb();\n }\n}\n\n/**\n * Captures errors only in super-gross mode.\n * @param err the error to capture\n */\nexport function captureError(err: any) {\n if (config.useDeprecatedSynchronousErrorHandling && context) {\n context.errorThrown = true;\n context.error = err;\n }\n}\n", "import { isFunction } from './util/isFunction';\nimport { Observer, ObservableNotification } from './types';\nimport { isSubscription, Subscription } from './Subscription';\nimport { config } from './config';\nimport { 
reportUnhandledError } from './util/reportUnhandledError';\nimport { noop } from './util/noop';\nimport { nextNotification, errorNotification, COMPLETE_NOTIFICATION } from './NotificationFactories';\nimport { timeoutProvider } from './scheduler/timeoutProvider';\nimport { captureError } from './util/errorContext';\n\n/**\n * Implements the {@link Observer} interface and extends the\n * {@link Subscription} class. While the {@link Observer} is the public API for\n * consuming the values of an {@link Observable}, all Observers get converted to\n * a Subscriber, in order to provide Subscription-like capabilities such as\n * `unsubscribe`. Subscriber is a common type in RxJS, and crucial for\n * implementing operators, but it is rarely used as a public API.\n *\n * @class Subscriber\n */\nexport class Subscriber extends Subscription implements Observer {\n /**\n * A static factory for a Subscriber, given a (potentially partial) definition\n * of an Observer.\n * @param next The `next` callback of an Observer.\n * @param error The `error` callback of an\n * Observer.\n * @param complete The `complete` callback of an\n * Observer.\n * @return A Subscriber wrapping the (partially defined)\n * Observer represented by the given arguments.\n * @nocollapse\n * @deprecated Do not use. Will be removed in v8. There is no replacement for this\n * method, and there is no reason to be creating instances of `Subscriber` directly.\n * If you have a specific use case, please file an issue.\n */\n static create(next?: (x?: T) => void, error?: (e?: any) => void, complete?: () => void): Subscriber {\n return new SafeSubscriber(next, error, complete);\n }\n\n /** @deprecated Internal implementation detail, do not use directly. Will be made internal in v8. */\n protected isStopped: boolean = false;\n /** @deprecated Internal implementation detail, do not use directly. Will be made internal in v8. 
*/\n protected destination: Subscriber | Observer; // this `any` is the escape hatch to erase extra type param (e.g. R)\n\n /**\n * @deprecated Internal implementation detail, do not use directly. Will be made internal in v8.\n * There is no reason to directly create an instance of Subscriber. This type is exported for typings reasons.\n */\n constructor(destination?: Subscriber | Observer) {\n super();\n if (destination) {\n this.destination = destination;\n // Automatically chain subscriptions together here.\n // if destination is a Subscription, then it is a Subscriber.\n if (isSubscription(destination)) {\n destination.add(this);\n }\n } else {\n this.destination = EMPTY_OBSERVER;\n }\n }\n\n /**\n * The {@link Observer} callback to receive notifications of type `next` from\n * the Observable, with a value. The Observable may call this method 0 or more\n * times.\n * @param {T} [value] The `next` value.\n * @return {void}\n */\n next(value?: T): void {\n if (this.isStopped) {\n handleStoppedNotification(nextNotification(value), this);\n } else {\n this._next(value!);\n }\n }\n\n /**\n * The {@link Observer} callback to receive notifications of type `error` from\n * the Observable, with an attached `Error`. Notifies the Observer that\n * the Observable has experienced an error condition.\n * @param {any} [err] The `error` exception.\n * @return {void}\n */\n error(err?: any): void {\n if (this.isStopped) {\n handleStoppedNotification(errorNotification(err), this);\n } else {\n this.isStopped = true;\n this._error(err);\n }\n }\n\n /**\n * The {@link Observer} callback to receive a valueless notification of type\n * `complete` from the Observable. 
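The stop semantics that `next`, `error`, and `complete` implement can be reduced to a tiny illustrative class (not the real RxJS `Subscriber`): once a terminal event arrives, the subscriber is "stopped" and later `next` calls are ignored rather than delivered.

```typescript
class MiniSubscriber<T> {
  private isStopped = false;
  readonly received: T[] = [];

  next(value: T): void {
    // Deliveries after a terminal event are silently dropped.
    if (!this.isStopped) this.received.push(value);
  }
  error(_err: any): void { this.isStopped = true; }
  complete(): void { this.isStopped = true; }
}

const sub = new MiniSubscriber<number>();
sub.next(1);
sub.complete();
sub.next(2); // ignored: the subscriber is stopped
```

The real implementation additionally routes such late notifications to the `onStoppedNotification` config hook instead of dropping them silently.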
Notifies the Observer that the Observable\n * has finished sending push-based notifications.\n * @return {void}\n */\n complete(): void {\n if (this.isStopped) {\n handleStoppedNotification(COMPLETE_NOTIFICATION, this);\n } else {\n this.isStopped = true;\n this._complete();\n }\n }\n\n unsubscribe(): void {\n if (!this.closed) {\n this.isStopped = true;\n super.unsubscribe();\n this.destination = null!;\n }\n }\n\n protected _next(value: T): void {\n this.destination.next(value);\n }\n\n protected _error(err: any): void {\n try {\n this.destination.error(err);\n } finally {\n this.unsubscribe();\n }\n }\n\n protected _complete(): void {\n try {\n this.destination.complete();\n } finally {\n this.unsubscribe();\n }\n }\n}\n\n/**\n * This bind is captured here because we want to be able to have\n * compatibility with monoid libraries that tend to use a method named\n * `bind`. In particular, a library called Monio requires this.\n */\nconst _bind = Function.prototype.bind;\n\nfunction bind any>(fn: Fn, thisArg: any): Fn {\n return _bind.call(fn, thisArg);\n}\n\n/**\n * Internal optimization only, DO NOT EXPOSE.\n * @internal\n */\nclass ConsumerObserver implements Observer {\n constructor(private partialObserver: Partial>) {}\n\n next(value: T): void {\n const { partialObserver } = this;\n if (partialObserver.next) {\n try {\n partialObserver.next(value);\n } catch (error) {\n handleUnhandledError(error);\n }\n }\n }\n\n error(err: any): void {\n const { partialObserver } = this;\n if (partialObserver.error) {\n try {\n partialObserver.error(err);\n } catch (error) {\n handleUnhandledError(error);\n }\n } else {\n handleUnhandledError(err);\n }\n }\n\n complete(): void {\n const { partialObserver } = this;\n if (partialObserver.complete) {\n try {\n partialObserver.complete();\n } catch (error) {\n handleUnhandledError(error);\n }\n }\n }\n}\n\nexport class SafeSubscriber extends Subscriber {\n constructor(\n observerOrNext?: Partial> | ((value: T) => void) | 
null,\n error?: ((e?: any) => void) | null,\n complete?: (() => void) | null\n ) {\n super();\n\n let partialObserver: Partial>;\n if (isFunction(observerOrNext) || !observerOrNext) {\n // The first argument is a function, not an observer. The next\n // two arguments *could* be observers, or they could be empty.\n partialObserver = {\n next: (observerOrNext ?? undefined) as (((value: T) => void) | undefined),\n error: error ?? undefined,\n complete: complete ?? undefined,\n };\n } else {\n // The first argument is a partial observer.\n let context: any;\n if (this && config.useDeprecatedNextContext) {\n // This is a deprecated path that made `this.unsubscribe()` available in\n // next handler functions passed to subscribe. This only exists behind a flag\n // now, as it is *very* slow.\n context = Object.create(observerOrNext);\n context.unsubscribe = () => this.unsubscribe();\n partialObserver = {\n next: observerOrNext.next && bind(observerOrNext.next, context),\n error: observerOrNext.error && bind(observerOrNext.error, context),\n complete: observerOrNext.complete && bind(observerOrNext.complete, context),\n };\n } else {\n // The \"normal\" path. 
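The normalization this constructor performs can be sketched in isolation: a partial observer is padded out to a full one, with no-ops for missing `next`/`complete` and a rethrowing default for a missing `error` handler (the helper name is illustrative, not an RxJS export):

```typescript
interface PartialObserver<T> {
  next?: (value: T) => void;
  error?: (err: any) => void;
  complete?: () => void;
}

const noop = () => {};

function toFullObserver<T>(partial: PartialObserver<T>) {
  return {
    next: partial.next ?? noop,
    // With no error handler, the error must surface rather than vanish.
    error: partial.error ?? ((err: any) => { throw err; }),
    complete: partial.complete ?? noop,
  };
}

const seen: number[] = [];
const full = toFullObserver<number>({ next: (v) => seen.push(v) });
full.next(7);
full.complete(); // safe no-op even though no complete callback was supplied
```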
Just use the partial observer directly.\n partialObserver = observerOrNext;\n }\n }\n\n // Wrap the partial observer to ensure it's a full observer, and\n // make sure proper error handling is accounted for.\n this.destination = new ConsumerObserver(partialObserver);\n }\n}\n\nfunction handleUnhandledError(error: any) {\n if (config.useDeprecatedSynchronousErrorHandling) {\n captureError(error);\n } else {\n // Ideal path, we report this as an unhandled error,\n // which is thrown on a new call stack.\n reportUnhandledError(error);\n }\n}\n\n/**\n * An error handler used when no error handler was supplied\n * to the SafeSubscriber -- meaning no error handler was supplied\n * do the `subscribe` call on our observable.\n * @param err The error to handle\n */\nfunction defaultErrorHandler(err: any) {\n throw err;\n}\n\n/**\n * A handler for notifications that cannot be sent to a stopped subscriber.\n * @param notification The notification being sent\n * @param subscriber The stopped subscriber\n */\nfunction handleStoppedNotification(notification: ObservableNotification, subscriber: Subscriber) {\n const { onStoppedNotification } = config;\n onStoppedNotification && timeoutProvider.setTimeout(() => onStoppedNotification(notification, subscriber));\n}\n\n/**\n * The observer used as a stub for subscriptions where the user did not\n * pass any arguments to `subscribe`. Comes with the default error handling\n * behavior.\n */\nexport const EMPTY_OBSERVER: Readonly> & { closed: true } = {\n closed: true,\n next: noop,\n error: defaultErrorHandler,\n complete: noop,\n};\n", "/**\n * Symbol.observable or a string \"@@observable\". 
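The interop-key fallback described here can be written standalone: prefer the well-known `Symbol.observable` when the runtime (or a polyfill) provides it, otherwise fall back to the string `"@@observable"` (the cast is needed because `Symbol.observable` is not in the default TypeScript lib):

```typescript
const observableSymbol: string | symbol =
  (typeof Symbol === 'function' && (Symbol as any).observable) || '@@observable';
```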
Used for interop\n *\n * @deprecated We will no longer be exporting this symbol in upcoming versions of RxJS.\n * Instead polyfill and use Symbol.observable directly *or* use https://www.npmjs.com/package/symbol-observable\n */\nexport const observable: string | symbol = (() => (typeof Symbol === 'function' && Symbol.observable) || '@@observable')();\n", "/**\n * This function takes one parameter and just returns it. Simply put,\n * this is like `(x: T): T => x`.\n *\n * ## Examples\n *\n * This is useful in some cases when using things like `mergeMap`\n *\n * ```ts\n * import { interval, take, map, range, mergeMap, identity } from 'rxjs';\n *\n * const source$ = interval(1000).pipe(take(5));\n *\n * const result$ = source$.pipe(\n * map(i => range(i)),\n * mergeMap(identity) // same as mergeMap(x => x)\n * );\n *\n * result$.subscribe({\n * next: console.log\n * });\n * ```\n *\n * Or when you want to selectively apply an operator\n *\n * ```ts\n * import { interval, take, identity } from 'rxjs';\n *\n * const shouldLimit = () => Math.random() < 0.5;\n *\n * const source$ = interval(1000);\n *\n * const result$ = source$.pipe(shouldLimit() ? 
take(5) : identity);\n *\n * result$.subscribe({\n * next: console.log\n * });\n * ```\n *\n * @param x Any value that is returned by this function\n * @returns The value passed as the first parameter to this function\n */\nexport function identity(x: T): T {\n return x;\n}\n", "import { identity } from './identity';\nimport { UnaryFunction } from '../types';\n\nexport function pipe(): typeof identity;\nexport function pipe(fn1: UnaryFunction): UnaryFunction;\nexport function pipe(fn1: UnaryFunction, fn2: UnaryFunction): UnaryFunction;\nexport function pipe(fn1: UnaryFunction, fn2: UnaryFunction, fn3: UnaryFunction): UnaryFunction;\nexport function pipe(\n fn1: UnaryFunction,\n fn2: UnaryFunction,\n fn3: UnaryFunction,\n fn4: UnaryFunction\n): UnaryFunction;\nexport function pipe(\n fn1: UnaryFunction,\n fn2: UnaryFunction,\n fn3: UnaryFunction,\n fn4: UnaryFunction,\n fn5: UnaryFunction\n): UnaryFunction;\nexport function pipe(\n fn1: UnaryFunction,\n fn2: UnaryFunction,\n fn3: UnaryFunction,\n fn4: UnaryFunction,\n fn5: UnaryFunction,\n fn6: UnaryFunction\n): UnaryFunction;\nexport function pipe(\n fn1: UnaryFunction,\n fn2: UnaryFunction,\n fn3: UnaryFunction,\n fn4: UnaryFunction,\n fn5: UnaryFunction,\n fn6: UnaryFunction,\n fn7: UnaryFunction\n): UnaryFunction;\nexport function pipe(\n fn1: UnaryFunction,\n fn2: UnaryFunction,\n fn3: UnaryFunction,\n fn4: UnaryFunction,\n fn5: UnaryFunction,\n fn6: UnaryFunction,\n fn7: UnaryFunction,\n fn8: UnaryFunction\n): UnaryFunction;\nexport function pipe(\n fn1: UnaryFunction,\n fn2: UnaryFunction,\n fn3: UnaryFunction,\n fn4: UnaryFunction,\n fn5: UnaryFunction,\n fn6: UnaryFunction,\n fn7: UnaryFunction,\n fn8: UnaryFunction,\n fn9: UnaryFunction\n): UnaryFunction;\nexport function pipe(\n fn1: UnaryFunction,\n fn2: UnaryFunction,\n fn3: UnaryFunction,\n fn4: UnaryFunction,\n fn5: UnaryFunction,\n fn6: UnaryFunction,\n fn7: UnaryFunction,\n fn8: UnaryFunction,\n fn9: UnaryFunction,\n ...fns: UnaryFunction[]\n): 
UnaryFunction;\n\n/**\n * pipe() can be called on one or more functions, each of which can take one argument (\"UnaryFunction\")\n * and uses it to return a value.\n * It returns a function that takes one argument, passes it to the first UnaryFunction, and then\n * passes the result to the next one, passes that result to the next one, and so on. \n */\nexport function pipe(...fns: Array>): UnaryFunction {\n return pipeFromArray(fns);\n}\n\n/** @internal */\nexport function pipeFromArray(fns: Array>): UnaryFunction {\n if (fns.length === 0) {\n return identity as UnaryFunction;\n }\n\n if (fns.length === 1) {\n return fns[0];\n }\n\n return function piped(input: T): R {\n return fns.reduce((prev: any, fn: UnaryFunction) => fn(prev), input as any);\n };\n}\n", "import { Operator } from './Operator';\nimport { SafeSubscriber, Subscriber } from './Subscriber';\nimport { isSubscription, Subscription } from './Subscription';\nimport { TeardownLogic, OperatorFunction, Subscribable, Observer } from './types';\nimport { observable as Symbol_observable } from './symbol/observable';\nimport { pipeFromArray } from './util/pipe';\nimport { config } from './config';\nimport { isFunction } from './util/isFunction';\nimport { errorContext } from './util/errorContext';\n\n/**\n * A representation of any set of values over any amount of time. This is the most basic building block\n * of RxJS.\n *\n * @class Observable\n */\nexport class Observable implements Subscribable {\n /**\n * @deprecated Internal implementation detail, do not use directly. Will be made internal in v8.\n */\n source: Observable | undefined;\n\n /**\n * @deprecated Internal implementation detail, do not use directly. Will be made internal in v8.\n */\n operator: Operator | undefined;\n\n /**\n * @constructor\n * @param {Function} subscribe the function that is called when the Observable is\n * initially subscribed to. 
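The reduce-based composition in `pipeFromArray` above can be sketched standalone, with `UnaryFunction` simplified to `(x: any) => any` for brevity (the function name here is illustrative):

```typescript
type Unary = (x: any) => any;

function pipeAll(fns: Unary[]): Unary {
  if (fns.length === 0) return (x) => x; // no functions: identity
  if (fns.length === 1) return fns[0];   // one function: return it directly
  // Otherwise thread the input left-to-right through every function.
  return (input) => fns.reduce((prev, fn) => fn(prev), input);
}

const addOneThenDouble = pipeAll([(x) => x + 1, (x) => x * 2]);
// addOneThenDouble(3) → 8, because (3 + 1) * 2 = 8
```

Note the left-to-right order: the first function in the array is applied first, matching how `observable.pipe(op1, op2)` applies `op1` before `op2`.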
This function is given a Subscriber, to which new values\n * can be `next`ed, or an `error` method can be called to raise an error, or\n * `complete` can be called to notify of a successful completion.\n */\n constructor(subscribe?: (this: Observable, subscriber: Subscriber) => TeardownLogic) {\n if (subscribe) {\n this._subscribe = subscribe;\n }\n }\n\n // HACK: Since TypeScript inherits static properties too, we have to\n // fight against TypeScript here so Subject can have a different static create signature\n /**\n * Creates a new Observable by calling the Observable constructor\n * @owner Observable\n * @method create\n * @param {Function} subscribe? the subscriber function to be passed to the Observable constructor\n * @return {Observable} a new observable\n * @nocollapse\n * @deprecated Use `new Observable()` instead. Will be removed in v8.\n */\n static create: (...args: any[]) => any = (subscribe?: (subscriber: Subscriber) => TeardownLogic) => {\n return new Observable(subscribe);\n };\n\n /**\n * Creates a new Observable, with this Observable instance as the source, and the passed\n * operator defined as the new observable's operator.\n * @method lift\n * @param operator the operator defining the operation to take on the observable\n * @return a new observable with the Operator applied\n * @deprecated Internal implementation detail, do not use directly. Will be made internal in v8.\n * If you have implemented an operator using `lift`, it is recommended that you create an\n * operator by simply returning `new Observable()` directly. 
See \"Creating new operators from\n * scratch\" section here: https://rxjs.dev/guide/operators\n */\n lift(operator?: Operator): Observable {\n const observable = new Observable();\n observable.source = this;\n observable.operator = operator;\n return observable;\n }\n\n subscribe(observerOrNext?: Partial> | ((value: T) => void)): Subscription;\n /** @deprecated Instead of passing separate callback arguments, use an observer argument. Signatures taking separate callback arguments will be removed in v8. Details: https://rxjs.dev/deprecations/subscribe-arguments */\n subscribe(next?: ((value: T) => void) | null, error?: ((error: any) => void) | null, complete?: (() => void) | null): Subscription;\n /**\n * Invokes an execution of an Observable and registers Observer handlers for notifications it will emit.\n *\n * Use it when you have all these Observables, but still nothing is happening. \n *\n * `subscribe` is not a regular operator, but a method that calls Observable's internal `subscribe` function. It\n * might be for example a function that you passed to Observable's constructor, but most of the time it is\n * a library implementation, which defines what will be emitted by an Observable, and when it be will emitted. This means\n * that calling `subscribe` is actually the moment when Observable starts its work, not when it is created, as it is often\n * the thought.\n *\n * Apart from starting the execution of an Observable, this method allows you to listen for values\n * that an Observable emits, as well as for when it completes or errors. You can achieve this in two\n * of the following ways.\n *\n * The first way is creating an object that implements {@link Observer} interface. It should have methods\n * defined by that interface, but note that it should be just a regular JavaScript object, which you can create\n * yourself in any way you want (ES6 class, classic function constructor, object literal etc.). 
In particular, do\n * not attempt to use any RxJS implementation details to create Observers - you don't need them. Remember also\n * that your object does not have to implement all methods. If you find yourself creating a method that doesn't\n * do anything, you can simply omit it. Note however, if the `error` method is not provided and an error happens,\n * it will be thrown asynchronously. Errors thrown asynchronously cannot be caught using `try`/`catch`. Instead,\n * use the {@link onUnhandledError} configuration option or use a runtime handler (like `window.onerror` or\n * `process.on('error)`) to be notified of unhandled errors. Because of this, it's recommended that you provide\n * an `error` method to avoid missing thrown errors.\n *\n * The second way is to give up on Observer object altogether and simply provide callback functions in place of its methods.\n * This means you can provide three functions as arguments to `subscribe`, where the first function is equivalent\n * of a `next` method, the second of an `error` method and the third of a `complete` method. Just as in case of an Observer,\n * if you do not need to listen for something, you can omit a function by passing `undefined` or `null`,\n * since `subscribe` recognizes these functions by where they were placed in function call. When it comes\n * to the `error` function, as with an Observer, if not provided, errors emitted by an Observable will be thrown asynchronously.\n *\n * You can, however, subscribe with no parameters at all. This may be the case where you're not interested in terminal events\n * and you also handled emissions internally by using operators (e.g. using `tap`).\n *\n * Whichever style of calling `subscribe` you use, in both cases it returns a Subscription object.\n * This object allows you to call `unsubscribe` on it, which in turn will stop the work that an Observable does and will clean\n * up all resources that an Observable used. 
Note that cancelling a subscription will not call `complete` callback\n * provided to `subscribe` function, which is reserved for a regular completion signal that comes from an Observable.\n *\n * Remember that callbacks provided to `subscribe` are not guaranteed to be called asynchronously.\n * It is an Observable itself that decides when these functions will be called. For example {@link of}\n * by default emits all its values synchronously. Always check documentation for how given Observable\n * will behave when subscribed and if its default behavior can be modified with a `scheduler`.\n *\n * #### Examples\n *\n * Subscribe with an {@link guide/observer Observer}\n *\n * ```ts\n * import { of } from 'rxjs';\n *\n * const sumObserver = {\n * sum: 0,\n * next(value) {\n * console.log('Adding: ' + value);\n * this.sum = this.sum + value;\n * },\n * error() {\n * // We actually could just remove this method,\n * // since we do not really care about errors right now.\n * },\n * complete() {\n * console.log('Sum equals: ' + this.sum);\n * }\n * };\n *\n * of(1, 2, 3) // Synchronously emits 1, 2, 3 and then completes.\n * .subscribe(sumObserver);\n *\n * // Logs:\n * // 'Adding: 1'\n * // 'Adding: 2'\n * // 'Adding: 3'\n * // 'Sum equals: 6'\n * ```\n *\n * Subscribe with functions ({@link deprecations/subscribe-arguments deprecated})\n *\n * ```ts\n * import { of } from 'rxjs'\n *\n * let sum = 0;\n *\n * of(1, 2, 3).subscribe(\n * value => {\n * console.log('Adding: ' + value);\n * sum = sum + value;\n * },\n * undefined,\n * () => console.log('Sum equals: ' + sum)\n * );\n *\n * // Logs:\n * // 'Adding: 1'\n * // 'Adding: 2'\n * // 'Adding: 3'\n * // 'Sum equals: 6'\n * ```\n *\n * Cancel a subscription\n *\n * ```ts\n * import { interval } from 'rxjs';\n *\n * const subscription = interval(1000).subscribe({\n * next(num) {\n * console.log(num)\n * },\n * complete() {\n * // Will not be called, even when cancelling subscription.\n * console.log('completed!');\n * 
}\n * });\n *\n * setTimeout(() => {\n * subscription.unsubscribe();\n * console.log('unsubscribed!');\n * }, 2500);\n *\n * // Logs:\n * // 0 after 1s\n * // 1 after 2s\n * // 'unsubscribed!' after 2.5s\n * ```\n *\n * @param {Observer|Function} observerOrNext (optional) Either an observer with methods to be called,\n * or the first of three possible handlers, which is the handler for each value emitted from the subscribed\n * Observable.\n * @param {Function} error (optional) A handler for a terminal event resulting from an error. If no error handler is provided,\n * the error will be thrown asynchronously as unhandled.\n * @param {Function} complete (optional) A handler for a terminal event resulting from successful completion.\n * @return {Subscription} a subscription reference to the registered handlers\n * @method subscribe\n */\n subscribe(\n observerOrNext?: Partial> | ((value: T) => void) | null,\n error?: ((error: any) => void) | null,\n complete?: (() => void) | null\n ): Subscription {\n const subscriber = isSubscriber(observerOrNext) ? observerOrNext : new SafeSubscriber(observerOrNext, error, complete);\n\n errorContext(() => {\n const { operator, source } = this;\n subscriber.add(\n operator\n ? // We're dealing with a subscription in the\n // operator chain to one of our lifted operators.\n operator.call(subscriber, source)\n : source\n ? // If `source` has a value, but `operator` does not, something that\n // had intimate knowledge of our API, like our `Subject`, must have\n // set it. 
We're going to just call `_subscribe` directly.\n this._subscribe(subscriber)\n : // In all other cases, we're likely wrapping a user-provided initializer\n // function, so we need to catch errors and handle them appropriately.\n this._trySubscribe(subscriber)\n );\n });\n\n return subscriber;\n }\n\n /** @internal */\n protected _trySubscribe(sink: Subscriber): TeardownLogic {\n try {\n return this._subscribe(sink);\n } catch (err) {\n // We don't need to return anything in this case,\n // because it's just going to try to `add()` to a subscription\n // above.\n sink.error(err);\n }\n }\n\n /**\n * Used as a NON-CANCELLABLE means of subscribing to an observable, for use with\n * APIs that expect promises, like `async/await`. You cannot unsubscribe from this.\n *\n * **WARNING**: Only use this with observables you *know* will complete. If the source\n * observable does not complete, you will end up with a promise that is hung up, and\n * potentially all of the state of an async function hanging out in memory. 
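The Observable-to-Promise bridge that `forEach` provides can be sketched hypothetically with a plain iterable standing in for the source (so this version is synchronous; a real Observable may emit asynchronously, and the warning above about non-completing sources still applies):

```typescript
function forEachValue<T>(source: Iterable<T>, next: (value: T) => void): Promise<void> {
  return new Promise<void>((resolve, reject) => {
    try {
      for (const value of source) next(value); // reject if the handler throws
      resolve(); // resolve only when the source is exhausted ("completes")
    } catch (err) {
      reject(err);
    }
  });
}

let total = 0;
forEachValue([0, 1, 2, 3], (v) => { total += v; });
// With an array source the executor runs synchronously, so total is already 6 here.
```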
To avoid\n * this situation, look into adding something like {@link timeout}, {@link take},\n * {@link takeWhile}, or {@link takeUntil} amongst others.\n *\n * #### Example\n *\n * ```ts\n * import { interval, take } from 'rxjs';\n *\n * const source$ = interval(1000).pipe(take(4));\n *\n * async function getTotal() {\n * let total = 0;\n *\n * await source$.forEach(value => {\n * total += value;\n * console.log('observable -> ' + value);\n * });\n *\n * return total;\n * }\n *\n * getTotal().then(\n * total => console.log('Total: ' + total)\n * );\n *\n * // Expected:\n * // 'observable -> 0'\n * // 'observable -> 1'\n * // 'observable -> 2'\n * // 'observable -> 3'\n * // 'Total: 6'\n * ```\n *\n * @param next a handler for each value emitted by the observable\n * @return a promise that either resolves on observable completion or\n * rejects with the handled error\n */\n forEach(next: (value: T) => void): Promise;\n\n /**\n * @param next a handler for each value emitted by the observable\n * @param promiseCtor a constructor function used to instantiate the Promise\n * @return a promise that either resolves on observable completion or\n * rejects with the handled error\n * @deprecated Passing a Promise constructor will no longer be available\n * in upcoming versions of RxJS. This is because it adds weight to the library, for very\n * little benefit. If you need this functionality, it is recommended that you either\n * polyfill Promise, or you create an adapter to convert the returned native promise\n * to whatever promise implementation you wanted. 
Will be removed in v8.\n */\n forEach(next: (value: T) => void, promiseCtor: PromiseConstructorLike): Promise;\n\n forEach(next: (value: T) => void, promiseCtor?: PromiseConstructorLike): Promise {\n promiseCtor = getPromiseCtor(promiseCtor);\n\n return new promiseCtor((resolve, reject) => {\n const subscriber = new SafeSubscriber({\n next: (value) => {\n try {\n next(value);\n } catch (err) {\n reject(err);\n subscriber.unsubscribe();\n }\n },\n error: reject,\n complete: resolve,\n });\n this.subscribe(subscriber);\n }) as Promise;\n }\n\n /** @internal */\n protected _subscribe(subscriber: Subscriber): TeardownLogic {\n return this.source?.subscribe(subscriber);\n }\n\n /**\n * An interop point defined by the es7-observable spec https://github.com/zenparsing/es-observable\n * @method Symbol.observable\n * @return {Observable} this instance of the observable\n */\n [Symbol_observable]() {\n return this;\n }\n\n /* tslint:disable:max-line-length */\n pipe(): Observable;\n pipe(op1: OperatorFunction): Observable;\n pipe (op1: OperatorFunction, op2: OperatorFunction): Observable;\n pipe(op1: OperatorFunction, op2: OperatorFunction, op3: OperatorFunction): Observable;\n pipe(\n op1: OperatorFunction,\n op2: OperatorFunction,\n op3: OperatorFunction,\n op4: OperatorFunction\n ): Observable;\n pipe(\n op1: OperatorFunction,\n op2: OperatorFunction,\n op3: OperatorFunction,\n op4: OperatorFunction,\n op5: OperatorFunction\n ): Observable;\n pipe(\n op1: OperatorFunction,\n op2: OperatorFunction,\n op3: OperatorFunction,\n op4: OperatorFunction,\n op5: OperatorFunction,\n op6: OperatorFunction\n ): Observable;\n pipe(\n op1: OperatorFunction,\n op2: OperatorFunction,\n op3: OperatorFunction,\n op4: OperatorFunction,\n op5: OperatorFunction,\n op6: OperatorFunction,\n op7: OperatorFunction\n ): Observable;\n pipe(\n op1: OperatorFunction,\n op2: OperatorFunction,\n op3: OperatorFunction,\n op4: OperatorFunction,\n op5: OperatorFunction,\n op6: OperatorFunction,\n op7: 
OperatorFunction,\n op8: OperatorFunction\n ): Observable;\n pipe(\n op1: OperatorFunction,\n op2: OperatorFunction,\n op3: OperatorFunction,\n op4: OperatorFunction,\n op5: OperatorFunction,\n op6: OperatorFunction,\n op7: OperatorFunction,\n op8: OperatorFunction,\n op9: OperatorFunction\n ): Observable;\n pipe(\n op1: OperatorFunction,\n op2: OperatorFunction,\n op3: OperatorFunction,\n op4: OperatorFunction,\n op5: OperatorFunction,\n op6: OperatorFunction,\n op7: OperatorFunction,\n op8: OperatorFunction,\n op9: OperatorFunction,\n ...operations: OperatorFunction[]\n ): Observable;\n /* tslint:enable:max-line-length */\n\n /**\n * Used to stitch together functional operators into a chain.\n * @method pipe\n * @return {Observable} the Observable result of all of the operators having\n * been called in the order they were passed in.\n *\n * ## Example\n *\n * ```ts\n * import { interval, filter, map, scan } from 'rxjs';\n *\n * interval(1000)\n * .pipe(\n * filter(x => x % 2 === 0),\n * map(x => x + x),\n * scan((acc, x) => acc + x)\n * )\n * .subscribe(x => console.log(x));\n * ```\n */\n pipe(...operations: OperatorFunction